ACG Research
We focus on the Why before the What

Monday, April 18, 2016

PAM-4 or Coherent DWDM for DCI?


At the March 2016 OFC conference, Inphi announced its delivery of a 100G, QSFP28, PAM-4 pluggable transceiver with 80km reach. PAM technology has been used for 100G transmission before (Inphi is a specialist in this area) but at much shorter distances. Pulse-amplitude modulation (PAM) is an analog transmission scheme similar to NRZ but with multi-level signaling; PAM-4 uses four amplitude levels to signal one of four possible symbols (2 bits per symbol). During the announcement, Microsoft also publicly stated that it will begin sourcing the pluggable PAM-4 technology from Inphi to interconnect its regional, metro-distributed data centers, which by definition are within 70km of each other. Coherent technology will continue to be used elsewhere. The metro-distributed data center deployment model builds and interconnects a number of smaller data centers within a metropolitan area instead of deploying a single hyperscale data center in the region. Microsoft also divulged its intention to turn up all 40 100G wavelengths at one time (4Tb/s, with each carrier occupying 100GHz channel spacing) on a fiber pair, utilizing all available colors in the fixed-wavelength portfolio.
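The two-bits-per-symbol relationship can be sketched in a few lines of Python. This is a toy illustration, not Inphi's implementation; the Gray-coded level mapping is a common convention and an assumption here:

```python
# PAM-4 carries 2 bits per symbol, so a given bit rate needs half the
# symbol rate that NRZ (1 bit per symbol) would require.
PAM4_LEVELS = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}  # Gray-coded mapping (assumed)

def pam4_encode(bits):
    """Group a bit stream into 2-bit pairs and map each pair to a PAM-4 level."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits in pairs"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits_per_symbol = 2                      # log2(4 levels)
aggregate_gbaud = 100 / bits_per_symbol  # 100 Gb/s needs only 50 GBd in aggregate

print(pam4_encode([0, 0, 1, 1, 1, 0]))  # -> [0, 2, 3]
```

The halved symbol rate is the practical appeal: the electronics and optics run at NRZ-like speeds while the link carries twice the bits.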

Some at the conference reacted to the Inphi/Microsoft announcement by declaring the obsolescence of existing optical DCI/coherent DWDM solutions. Although the Inphi/Microsoft announcement is exciting news, ACG thinks the PAM-4 technology is far more complementary to existing coherent DWDM solutions than competitive for multiple reasons. 


Figure 1. Optical Reach for 100G Technologies 

Reach. The PAM-4 solution covers a portion of the optical reach needed to interconnect data centers. Below 10km, IEEE 802.3ba 100G pluggable optics are readily available, with 100GBASE-LR4 supporting 10km reach in a QSFP28 package for cost-effective point-to-point connectivity. The 100GBASE-ER4 specification for 40km reach has been more challenging for optics suppliers to deliver and remains either in larger packages (e.g., CFP, CFP2) or in nonstandard formats, meaning non-interoperable across vendors. So where does the PAM-4 technology fit? In general, its initial fit appears to be in the <40km range as an alternative to existing, suboptimal pluggable solutions. We believe there is limited overlap with coherent DWDM solutions in this range. The solution also plays in the 40–80km range as an alternative to optical DCI/coherent DWDM solutions for some deployment scenarios.
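The reach tiers above can be summarized as a simple selection sketch. The boundaries are the approximate ones discussed in this post, not hard specification limits:

```python
# Rough 100G optics candidates by fiber reach, per the tiers discussed above.
def candidate_100g_optics(reach_km):
    """Return plausible 100G interconnect options for a given fiber reach (km)."""
    if reach_km <= 10:
        return ["100GBASE-LR4 (QSFP28)"]
    if reach_km <= 40:
        # ER4 exists but in larger/nonstandard packages; PAM-4 targets this gap.
        return ["100GBASE-ER4 (CFP/CFP2 or nonstandard)", "PAM-4 (QSFP28)"]
    if reach_km <= 80:
        return ["PAM-4 (QSFP28)", "coherent DWDM"]
    return ["coherent DWDM"]

print(candidate_100g_optics(70))  # -> ['PAM-4 (QSFP28)', 'coherent DWDM']
```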

So, based solely upon reach, a logical question is how much of the optical DCI/coherent DWDM market is covered by 40–80km? ACG Research recently completed a worldwide survey of data center service providers, including network service providers, cloud service providers, Internet content providers and Internet eXchange providers. This research will be available in a published report later this month (April). One of the questions we asked the service providers was the proportion of optical reach needed to cover their data center interconnections today and in 2019. What we found is that service providers on average believe that 30–80km optical reach is needed for approximately 30% of their data center interconnections. The results indicate a modest increase between today and 2019. Based upon this preliminary research, we have a sense of the addressable optical DCI market for this technology. However, we also believe that service providers will consider at least three other factors in making their DCI deployment decisions.


Figure 2. Data Center Interconnect Optical Reach 

Operations. Every data center deployment is not like Microsoft’s plan for metro-distributed data centers, which is to turn up all 4Tb/s of connectivity in a point-to-point fashion on day one of data center activation. By deploying all 40 wavelengths at once, Microsoft could reduce the incremental cost per wavelength of deploying dispersion compensation on the fiber, which is required for PAM but not for coherent DWDM solutions. Dispersion compensation costs include both the capital equipment as well as the operational costs associated with installing and tuning the compensators. Microsoft also avoids the operational complexity of deploying fixed wavelength pluggable optics incrementally, where inventory and on-site resources are required every time a change or a wavelength addition is needed. 

Other service providers that have existing metro optical networks may not want to deploy in this manner. They may not want the added complexity of dealing with dispersion compensation for PAM deployments. Some may want to utilize existing metro optical infrastructure and/or deploy in a mesh architecture. Still other service providers may not have the same visibility as Microsoft with regard to their data center connectivity needs. They may need to be more agile and utilize a pay-as-you-go/pay-as-you-grow deployment model where they add interconnection capacity over time and in alignment with their data center compute/storage capacity and revenue generation. An incremental deployment model is just more operationally complex with fixed-wavelength pluggable optics. 

Fiber Scarcity. When fiber is scarce or expensive, fiber optic transmission efficiency (bits per Hz) increases in importance. The PAM-4 solution delivers an efficiency ratio of 1 with 100Gb/s transmission occupying 100GHz channel spacing. 16-QAM coherent DWDM modulation offers 200Gb/s in 50GHz channels or an efficiency ratio of 4. Recent flexible grid implementations have an even greater efficiency ratio approaching 7. If more than 4Tb/s of connectivity is needed and incremental fiber is scarce or expensive, service providers may need to utilize the more efficient coherent DWDM system to squeeze more bandwidth through their limited fiber resources.
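The efficiency ratios above are straightforward bit-rate-over-channel-spacing arithmetic, which can be sanity-checked directly. The ~4 THz of usable C-band spectrum is an assumed round number for illustration:

```python
# Spectral efficiency in bits per second per hertz of grid spectrum.
def spectral_efficiency(bit_rate_gbps, channel_spacing_ghz):
    return bit_rate_gbps / channel_spacing_ghz

pam4_eff = spectral_efficiency(100, 100)           # 100G in 100 GHz -> 1.0
coherent_16qam_eff = spectral_efficiency(200, 50)  # 200G in 50 GHz  -> 4.0

# Approximate fiber capacity at each efficiency across ~4 THz of C-band
# (an assumed round figure; actual usable spectrum varies by system):
c_band_ghz = 4000
print(pam4_eff * c_band_ghz / 1000)            # ~4 Tb/s, matching 40 x 100G
print(coherent_16qam_eff * c_band_ghz / 1000)  # ~16 Tb/s
```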

Programmability. Fixed-wavelength pluggable optics do not advance the broader drive toward a programmable, agile, SDN-enabled optical underlay. SDN and NFV are changing all aspects of the ICT industry, including optical solutions. Service providers are looking to utilize intelligence, automation and programmability to reduce operational costs and ensure that network resources adapt to changing business and networking conditions across protocol layers, including optics and IP. Many demonstrations at OFC utilized SDN control and service automation combined with a programmable optical layer to showcase network efficiency and adaptability. The ONS 2016 conference had similar demonstrations, with ONOS and ODL controllers programming optical and IP networking infrastructure in near real time.


Figure 3. Example of a Mixed Technology DCI Deployment 

The Inphi PAM-4, QSFP28 solution is an exciting achievement and addresses a very real need in the sub-80km 100G market. We believe the solution is actually far more complementary than competitive to existing optical DCI/coherent DWDM solutions. Most service providers will utilize an all-of-the-above approach to their 100G DCI deployments, just as they did before with dark fiber, IEEE pluggables and coherent DWDM options. PAM-4 meets the needs of data center operators, such as Microsoft, that intend to turn up 4Tb/s of transmission capacity in a point-to-point fashion between data centers in a ~70km metro-distributed network. However, if a provider needs longer reach, more than 4Tb/s per fiber pair, or an incremental growth operational model, or is looking to advance its programmable, SDN-enabled network, then a tunable, coherent DWDM solution is a better fit. PAM-4 or coherent DWDM for data center interconnections? Yes!


Click for more information about Tim Doiron and his recent articles.

     Tim Doiron
     www.acgcc.com

Friday, April 8, 2016

Infinera Delivers the Multi-Terabit Infinite Capacity Engine

Infinera revolutionized optical integration with the introduction of its industry-leading 100G Photonic Integrated Circuit (PIC) in 2005.

In 2011 the company followed with the introduction of a 500G PIC and coherent digital signal processing (DSP) technology.

At the OFC Conference in March 2016, Infinera once again pushed the limits of optical integration with the debut of its multi-terabit Infinite Capacity Engine.

The Infinite Capacity Engine is a family of next-generation optical subsystems consisting of fourth-generation photonic integration with advanced coherent signal processing, software defined networking-enabled sliceable photonics architecture and Layer 1 encryption.


For more information about ACG's market impact service, contact sales@acgcc.com.

     Tim Doiron
     www.acgcc.com

Friday, August 28, 2015

2Q Vendor Financial Index: Highest Number in Low-Risk Category

Strong revenue outlook, high operating margins and other factors put Adtran, Brocade, Cisco, Infinera, and Juniper into low-risk category
ACG Research has released its 2Q 2015 Vendor Financial Index report, which delivers independent information about a vendor's financial sustainability to help providers assess the risk of selecting a vendor to meet their business requirements and to gauge the vendor's stability regardless of technology innovation.
Low-risk vendors for the quarter are Adtran, Brocade, Cisco, Infinera and Juniper. Characteristics of low-risk vendors include strong revenue outlook, high operating margins because of sales, solid gross margin and expense discipline, low debt dependency, and high receivable efficiency ratio. Medium risk were Alcatel-Lucent, Ericsson and Fujitsu.
Adtran has the highest equity-to-debt ratio (2.32) in the industry, financing its assets with more shareholders’ equity than debt. The company’s financial performance is predicted to improve in the second half of 2015 as a result of higher carrier expenditure in the U.S.; however, weakness in Europe will continue to impact Adtran’s revenue. Brocade’s operating margin of 20.9 percent is one of the highest in the industry, although the company’s operating income decreased 18 percent QoQ. Brocade’s growing data center presence, positioning as a storage networking expert and innovation in software-enabled networking will remain the focus in 3Q15. Cisco’s very high operating margin, driven by sales, solid gross margin, improved productivity and expense discipline, led to a 4.3 percent YoY increase in operating income. Application Centric Infrastructure and APIC are predicted to be the cornerstone of Cisco’s next generation of networking architectures. Infinera's operating margin (8.0 percent) is high compared to the industry average, driven by cost declines from its vertically integrated model and improved services profitability. Revenue for 3Q15 is estimated at $215 M, 30 percent YoY growth, and will be driven mainly by continued acceptance of DTN-X. Juniper’s revenue was up 14.5 percent QoQ, driven mainly by better demand from its cloud and cable service providers, and the company’s services revenue increased 7.4 percent YoY. Juniper’s partnership with VMware will enable highly automated cloud data center solutions for both service provider and mission-critical enterprise networks.
The same as last quarter, Ciena, Cyan and ZTE remain in the high-risk category. ZTE, with a healthy but fluctuating net cash ratio, has had difficulty establishing a presence in North American markets. The company will focus on three key markets in the second half of 2015: carriers, government and corporate sectors, and consumers. Cyan has the lowest operating margin in the industry. The company suffers from a lack of customer diversification: revenue is concentrated in one company, Windstream, which represented 52 percent of its revenue, and two other companies accounted for more than 10 percent of revenue each. Ciena has a very low net cash ratio at $(6 M), and a substantial segment of its revenue continues to come from sales to a small number of service providers. However, higher spending on optical upgrades and increased international orders will positively impact revenue.
“This is the highest number of vendors in the low-risk category we have seen since we started tracking vendor financial ratios and launched this report,” says Ray Mota, CEO, ACG Research. “Network vendors are taking operational efficiency and sustainability more seriously and the numbers show that they are running more efficient companies.”
For more information about ACG Research’s Vendor Financial Index service or other syndicated and consulting services, contact sales@acgcc.com.
rmota@acgcc.com
www.acgcc.com

Monday, March 16, 2015

New Entrants into the DCI Small Form Factor Market

Two equipment titans, Coriant and Alcatel-Lucent, entered the Data Center Interconnect (DCI) small form factor market with targeted packet optical networking products. Coriant added the 7100 Pico™ Packet Optical Transport Platform to its 7100 family of products, and Alcatel-Lucent added the 1830 PSS-4, 8 and 16 optical transport platforms to its 1830 Photonic Service Switch (PSS) family of cloud-optimized metro products. Both devices integrate cleanly into their respective portfolios and are Software Defined Network (SDN) enabled for dynamic service instantiation.

These products are significant because they validate the need for higher performance in this growing sector of the packet optical market. Bell Labs forecasts an increase of metro traffic by 560 percent by 2017. By 2019 there will be 60 percent more data centers in the world’s metro areas and DCI volumes will increase 400 percent. Why? With cloud-based services, the industry has recognized the need for data center interconnect (DCI). Initially, service providers offering XaaS solutions were connecting customers’ data centers to service providers’ data centers. New requirements for DCI have grown out of the operators’ needs to deploy very high-capacity, high-speed, low-latency, efficient transport between their own data center sites. In addition, rich data types such as video, multimedia mobile backhaul, cloud and data center traffic are also forcing the need for more intelligent programmability and automation in management of these traffic patterns. However, because of the size and power constraints of metro data centers to date, platforms need to fit strategically into smaller Point of Demarcation (POD) locations with constrained power and cooling. This is where the DCI small form factor market emerges.

Some key specifications and product comparisons for DCI Small FF at-a-glance:

DCI Small FF Requirement      Coriant 7100 Pico             ALU 1830 PSS-4, 8, 16
4 RU chassis or less          2 RU                          PSS-4 (2 RU), PSS-8 (3 RU), PSS-16 (8 RU)
DWDM w/ Tb/s fiber capacity   88 DWDM ch @ 10G & 100G       8 CWDM, 32 DWDM (400G – 1.6 Tb/s)
Eth, OTN, SONET               Eth, OTN, SONET               Eth, OTN, SONET
SAN (FICON, etc.)             SAN interfaces                SAN interfaces
Video (DVB, SDI, etc.)        Video interfaces              Video interfaces
40–100G+ network interface    40G                           10G, 100G, 200G
10GE–100GE modular I/O        1/10/100 GE (176 GE max)      10/40/100 GE (w/ 112SDX11 card)
Power (AC or DC)              AC/DC (110/220VAC / -48VDC)   AC/DC (110/220VAC / -48VDC)
Open API/SDN mgmt             Transend                      SDN enabled

ACG sees a bifurcation of the DCI market between small and multislot form factor devices. The total high-speed DCI market was approximately $400 million in 2013 and is forecasted to grow to $4 billion by 2019. The DCI small form factor segment is predicted to reach $3 billion by 2019 (97.3 percent CAGR, 2014–2019), and the DCI multislot segment $1 billion by 2019 (27.1 percent CAGR, 2014–2019). Vendors fueling this segment's growth include ADVA, BTI, Ciena, Cisco, Cyan, ECI Telecom, Ekinops, Fujitsu, Huawei, Infinera and ZTE. Who will command the market share? Time will tell, but in the meantime ACG is tracking the progress of this exciting market in its new DCI Optical Networking Market Worldwide syndication.
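The forecast figures imply very small 2014 starting points, which a quick compound-growth calculation makes explicit. The 2019 values and CAGRs come from the text; the implied 2014 bases are my own arithmetic:

```python
def implied_base(end_value_musd, cagr, years):
    """Back out a starting value from an end value and a compound annual growth rate."""
    return end_value_musd / (1 + cagr) ** years

small_ff_2014 = implied_base(3000, 0.973, 5)   # ~$100M implied 2014 base
multislot_2014 = implied_base(1000, 0.271, 5)  # ~$300M implied 2014 base
```

In other words, the small form factor segment starts from roughly a third of the multislot segment's base but is forecast to finish three times larger by 2019.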


Contact sales@acgcc.com to find out more information or schedule a meeting with Dennis Ward and Paul Parker-Johnson to discuss this research.


Thursday, March 12, 2015

Infinera Puts Agility into Pacnet's Optical Transport Services with Its Open Transport Switch

Infinera’s announcement yesterday that Pacnet has deployed its Open Transport Switch (OTS) embedded intelligence layer into its Pacnet Enabled Network (PEN) for trans-Pacific and intra-Asian optical network services brings an innovative design into production in the fast-moving market for dynamically controlled network services.

Infinera’s OTS brings an innovative design to the table as operators’ efforts to embrace SDN move ahead. Most SDN solutions include an abstraction, or ‘adapter’ layer of software to translate consistently described templates (say, secure VPN or elastic content delivery) into semantics an underlying platform can process. This approach provides agility at the service creation and management level—in an SDN controller tier—and puts the burden of integration with the ‘not SDN-enabled’ infrastructure on the controller.

Infinera has taken an interesting tack in this evolution. Recognizing that operators have a wide range of control environments in play as they move ahead on SDN, OTS puts the ‘agility inside’ the infrastructure and allows it to support dynamic network services in a variety of northbound environments. While its first ‘connection path’ for SDN in Pacnet’s PEN is REST-based, there is no requirement for OTS to be REST-limited in all future scenarios. Underlying data models could be adapted to alternative protocol environments such as NETCONF if an operator requires that model to be used. In this way Infinera enables its DTN-X family to support dynamic controls in a variety of service control environments.

Putting ‘agility inside’ adds a refreshing level of flexibility for designers to take advantage of as they plot their course toward a more fluid SDN world. OTS does not take away the value of control plane streamlining or innovations in management applications at higher layers. It simply creates the opportunity to accelerate the path to the flexible service deployments operators need for data center interconnect, secure VPN, real-time content delivery, and other high-value services—the point of pursuing agility in the first place.

Will OTS evolve to support multilayer packet and optical operations in Infinera’s portfolio? Will it adapt easily to additional SDN control tiers beyond Pacnet’s REST-based PEN? We expect the odds are ‘yes’ though time will tell. In the meantime we can appreciate the innovation coming to market by introducing agility into the underlying network infrastructure that the OTS solution provides.

For more information about ACG's services, contact sales@acgcc.com.


Paul Parker-Johnson
acgcc.com 

Thursday, September 18, 2014

Infinera’s Cloud Xpress: Impressive Contribution to Cloud Providers’ DCI

Growth of cloud-based services shows no real signs of slowing down. This adoption rate is propelling providers of cloud services to construct new data center capacity, work to make data centers they already have run more efficiently and improve how they network their data centers internally.

In early adoption phases of cloud, there have been two dominant uses of data center interconnection (DCI). First is for connecting enterprise data centers to service providers’ data centers for hybrid and public cloud computing services.  The second use has been to connect providers’ ecosystem partner data centers to SPs’ data centers to mash up applications and federate cloud services.

As usage has grown, though, a new set of DCI requirements has emerged. These involve connecting providers’ own data centers at very high capacities. Two scenarios dominate this trend. The first is in metro or nearby data center connections, and the second is in hyper-scale data centers deployed at great distances from other sites and running at remarkable scale.

In the first use case operators will run out of power or space in existing sites and need to create additional capacity nearby. This can be in a metro area footprint or in an extended campus. DCI is critical in these deployments because many cloud applications work in a highly distributed model. They often need access to resources in neighboring data centers many times over before responding to a single user’s request. Thus, interconnections need to be simple and fast.

In the second deployment scenario, hyper-scale operators such as Google, Facebook and Microsoft search for remote locations where land and power are less expensive and build some of the world’s largest data centers there to run their services. Server counts in these sites range from 200,000 to 500,000 or more. The need for integration with systems in the providers’ other data centers is strong in mega-site deployments as well. This leads to extremely large capacities of DCI bandwidth being deployed both locally in clustered DC locations as well as over long haul transport for sites that are half a continent or more away.

DCI capacities required in the intra-provider configurations range from tens of Tb/s in medium-to-large-scale sites to several hundred Tb/s in the largest mega-center locations. Because of the ongoing growth in the use of providers’ services, the unique needs of these DCI deployments have led to the emergence of a new type of high-capacity DCI solution.

Five requirements define the new breed:
  • Efficient and flexible scaling to hundreds of Tb/s of transport
  • Compact, rackable form factors
  • Low power consumption
  • Simple operation
  • Programmability for integration with service automation 

Underpinnings of these requirements
A dominant aspect of cloud data centers is use of infrastructure such as servers and storage systems that are modest in unit size but able to be pooled across wide ranges of capacity to serve the needs of an application or service. This leads to a bias for systems installable in compact, rackable form factors that are easy to install and expand, often leveraging auto-configuration for integration into infrastructures at very large scale.

Form factor compactness demands low power consumption. If an individual server consumes, say, 150 watts in ongoing use, a rack of 40 such servers might consume 6 kilowatts, sustained, and a data center with 100,000 such servers might consume approximately 15 megawatts. It’s easy to understand why cloud providers focus on wringing every possible watt out of the solutions they deploy. DCI platforms designed in a more server-like package (versus a telco office orientation) are likely to consume less power, perhaps drawing a third less power per rack than alternatives. Across 10 racks’ worth of devices, if 150–200 kilowatts of power can be saved, a solution is heading in the right direction.
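The server-power arithmetic in the paragraph above works out as follows, using only the illustrative figures from the text:

```python
# Back-of-the-envelope data center power estimate (illustrative figures).
watts_per_server = 150
servers_per_rack = 40

rack_kw = watts_per_server * servers_per_rack / 1000  # 6 kW sustained per rack
site_mw = watts_per_server * 100_000 / 1_000_000      # 15 MW for 100,000 servers

print(rack_kw, site_mw)  # -> 6.0 15.0
```

At 15 MW for servers alone (before cooling and network gear), even single-digit-percent savings per rack translate into megawatt-scale reductions at the site level.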

A final objective, which fits with the resource-pooling goal, is to support open, programmable software so that DCI capacity can be dynamically provisioned according to application needs. A variety of approaches can be taken to achieve this, including plug-ins for the service control software rapidly evolving for use in cloud and virtual networking infrastructures, as well as API toolkits that let large cloud providers integrate with their own service management platforms. In the end, programmability to support adaptation to providers’ goals for resiliency, path allocation and application-driven solutions is the key requirement.

This new breed of DCI solution will complement other transport solutions that implement shared network transport of various types in metro and long haul configurations. The two styles will be used by providers for different types of connections. Both will be used to support higher level service requirements for customer, partner, and internal operator data center connections.

The Cloud Xpress family introduced by Infinera is an innovative example of the kind of high capacity, small form factor, programmable DCI platform cloud operators are leaning toward for their internal DCI deployments. Cloud Xpress is initially targeted for metro deployments. Leveraging optical innovations Infinera has previously introduced and engineering them into a platform capable of 20+ Tb/s in a single rack, Cloud Xpress is an impressive contribution to the state of the DCI art. If trials prove out successfully, Cloud Xpress has every prospect of helping cloud operators scale out their data center deployments and interconnect them with the capacity and elasticity they desire.


For more information about ACG’s services, contact sales@acgresearch.net.


Paul Parker-Johnson

Friday, March 15, 2013

Business Case for Shared Mesh Protection


Current approaches to network resiliency are inadequate to meet evolving network performance and cost requirements. Existing schemes such as 1+1 protection meet the sub 50 ms performance requirement but only protect against single failures and are too costly. Best-effort approaches such as software-based GMPLS mesh restoration are cost effective, handle multiple failures but do not meet the sub 50 ms performance requirement. MPLS FRR can protect against multiple failures and achieve local 50 ms performance but requires longer time frames for end-to-end convergence and uses more costly router ports. 

Infinera is implementing a new standards-based approach called Shared Mesh Protection (SMP) for network resiliency. ACG Research conducted a total cost of ownership (TCO) comparison of SMP versus 1+1 protection. The comparison is made for the TCO of line-side 100 Gbps WDM interfaces using a national reference transport network over a five-year study period. It models traffic patterns to/from data centers, cable landing sites, and metro areas. Traffic increases at 85 percent CAGR over the study period. The comparison shows that the TCO for protection resources in SMP is 27 percent less than with 1+1 protection. The TCO savings result from the use of shared bandwidth managed by network intelligence to protect against multiple failures, versus the dedicated backup resources for single-failure protection used by 1+1.
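Two of the study's inputs compound quickly and are worth making explicit. This is illustrative arithmetic on the stated figures only, not a reconstruction of the TCO model itself:

```python
# Traffic growing at 85% CAGR over the five-year study period:
traffic_multiple = 1.85 ** 5  # ~21.7x the year-zero traffic by year five

# SMP protection TCO at 27% below the 1+1 baseline, for any baseline value:
def smp_tco(one_plus_one_tco):
    """Apply the study's 27% savings to a 1+1 protection TCO figure."""
    return 0.73 * one_plus_one_tco

print(round(traffic_multiple, 1))  # -> 21.7
```

The ~22x traffic growth is what makes the protection architecture choice consequential: a fixed percentage saved per unit of protected bandwidth is applied to a rapidly expanding base.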



mkennedy@acgresearch.net
www.acgresearch.net


Monday, June 18, 2012

Switching Architectures and Implications on Network Efficiency


Operators realize that in order to better monetize their networks they must significantly reduce their time to market delivery for new services while keeping capital and operational expenses under control. These challenges require a newer and more efficient network. Almost every operator is examining changes to their core and metro architecture to address these needs, either by migrating to a new architecture or technology or with an overlay of one. 

In  "Switching Architectures and Implications on Network Efficiency" we discuss new architectures, why service providers like them, and which architectures best promote network efficiency and flexibility.


For more information about Eve Griliches, click here.




egriliches@acgresearch.net 
www.acgresearch.net

Thursday, March 15, 2012

Operational Efficiency of Super-Channels

Eve Griliches, ACG Research, and Dave Welch, Infinera, discuss the benefits of flexible modulation and OTN integration


Watch Eve Griliches, packet optical analyst at ACG Research and Dave Welch, cofounder and executive vice president at Infinera, explain, using an example, the operational efficiency of super-channels, the benefits of flexible modulation and OTN integration.

Infinera has integrated the OTN fabric into the DTN-X, providing significant operational benefit while meeting power, footprint and bandwidth requirements. With bandwidth outpacing chip development, super-channels are the key technology to enable higher bandwidth deployment today.

Eve and Dave explain why 16-QAM is limited in reach for all vendors and carriers, and what the implementation penalties are. They also discuss the increased spectral efficiency, why it is still a win for shorter, highly concentrated routes, and how, with one line card in the DTN-X, you can configure both subsea and terrestrial distances and high-bandwidth congested routes, which enables a new paradigm for optical networking.


Tuesday, February 28, 2012

Infinera Number One in 2011 North American Long Haul WDM Market

“Our reports indicate the Long Haul market is experiencing another growing cycle and we expect this market to continue to expand over a couple of years,” said Eve Griliches, Principal Analyst at ACG Research. “Infinera is well positioned to sustain its number one rank in this market, especially with the Infinera DTN-X platform coming to the market.”


Thursday, September 15, 2011

Infinera Introduces the DTN-X: Innovation from the Ground Up

At 5T Infinera's DTN-X is the largest integrated WDM/OTN product today; it is upgradable to 10T per bay and will include a 100 Tb/s system in the future. In various configurations the DTN-X can support up to 24T on a single fiber.

With 150 patents backing it up, Infinera is introducing the DTN-X with a "clean slate design." The DTN-X uses the 500G coherent technology Photonic Integrated Circuit (PIC) combined with large-scale OTN switching and grooming. The backplane supports 1 Tb/s/slot and is designed to support a multibay approach to larger configurations. This is the largest OTN/WDM switched transport product announced to date.

Eventually, the DTN-X will support MPLS with LSR functionality to enable statistical multiplexing. Perhaps the most unique quality of this new product is its uncompromising scale. It can support up to 5T of WDM line side 500G super channels or 5T of client side services, all with nonblocking OTN switching with the ability to “dial up or down” without tradeoffs.

Click here to download the PDF of Eve Griliches' Market Impact.



egriliches@acgresearch.net
www.acgresearch.net

Thursday, August 25, 2011

An Update on Photonic Integration

Photonic integration might be the most disruptive technology to hit the telecommunications market in a decade. Eve Griliches

In 2006 interest in the all-optical network was fading, and photonic integration was generating excitement and controversy in the industry.

The market traveled in two directions: 1) photonic integration with coherent DSPs and 2) discrete components with coherent DSPs. Infinera is the only vendor that has a photonic integrated circuit (PIC) based system solution. The 10G economics of the PIC were disruptive, and Infinera also included integrated OTN switching, which improved network efficiency.

Click here to download the PDF.



egriliches@acgresearch.net
www.acgresearch.net