
WELCOME TO THE EKINOPS BLOG

How SD-WAN is Forcing Service Providers to Re-invent Themselves (…as NFV did with vendors)


Interesting similarities can be observed between SD-WAN and NFV. NFV aimed to end the reign of monolithic solutions by decoupling software from hardware, and by using open APIs that make it possible to pick, choose and replace vendors as necessary. The hope was that increased competition and commodity hardware would drive costs down. This is, however, not a particularly shiny prospect for vendors.

If you think about it, SD-WAN is also about breaking monoliths: service provider offers. From this perspective, the managed service offer is split into discrete components: connectivity, hardware, WAN/LAN devices, VPN/security and operations. Enterprise customers can theoretically source each item independently and benefit from choice and competition. As a result, a large community of SD-WAN advocates aggressively target service providers, accusing them of offering less and charging more; in other words ripping off enterprise customers. This is equally not a shiny prospect for service providers.

The right answer for CSPs is not necessarily to fully embrace the current offerings of the main SD-WAN players. It may be part of the answer, but not the full one. Just as NFV did for vendors, this turmoil is forcing service providers to reconsider the value of, and strategy for, each individual component of their offer (connectivity, hardware, operations, etc.).

Let’s start with connectivity and VPN. Many SD-WAN vendors build their business case on saving MPLS costs. If you are a CSP and can serve your customer with MPLS easily, this is just a bargaining game. Why SD-WAN then? Just call your service provider sales rep! Many commentators now admit that MPLS is not dead, but SD-WAN has highlighted how far from ideal it can be: if you need to build a global VPN with branches in Mexico, South-East Asia, etc., those MPLS links become awfully expensive and slow to deploy. This is where it makes sense for CSPs to adopt Over-The-Top VPN technologies as proposed by the mainstream SD-WAN solutions. In other words, service providers do not need full-blown SD-WAN technologies to remain competitive so long as the customer demand is limited to “same as before, but lower cost”.

Of course, there is more to SD-WAN than just building a network overlay, such as the ability to easily enforce application policies and to monitor network and application performance. In my opinion, this should be viewed as another layer of services that can be offered at a premium. Historically, service providers have been extremely successful in outsourcing networks for enterprise customers, especially in Europe. The key ingredients were a one-stop shopping experience and being a price leader for this outsourcing. The contention here is that they can strike back against SD-WAN DIY and System Integrators. Being a price leader implies they need entry-level Over-The-Top offers, but also a rich set of options to upsell, so that enterprises remain attracted to their main marketing asset: a one-stop shopping experience.

Why NETCONF beats OpenStack when managing uCPE


I know this question reads like comparing apples with oranges. In fact, I want to compare a uCPE managed with NETCONF versus a uCPE managed as an OpenStack compute node. First, it’s worth looking at why people think using OpenStack is a good idea.

OpenStack is THE open-source Virtualization Infrastructure Manager (VIM). As operators virtualize their data centers, OpenStack is pretty much the only solution if they want to move away from VMware. As operators move to OpenStack, they can extend it to manage Virtualized Network Functions (VNFs) in branches. The benefit looks obvious: one team in charge of both data center and branch virtualization and a streamlined software layer to deploy and manage.
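To make the OpenStack side of the comparison concrete, here is a minimal, illustrative sketch (using the Python openstacksdk library) of what instantiating a branch VNF looks like when the uCPE is treated as just another OpenStack compute node. The cloud entry, image, flavor and network names are hypothetical placeholders, not a reference to any specific deployment.

```python
# Illustrative sketch only: booting a VNF as a VM on a uCPE that is registered
# as an OpenStack compute node. All names (cloud entry, image, flavor, network)
# are hypothetical placeholders.
import openstack

# Credentials and endpoints come from a clouds.yaml entry named "branch-cloud" (assumed).
conn = openstack.connect(cloud="branch-cloud")

image = conn.image.find_image("vrouter-vnf")        # VNF disk image uploaded to Glance
flavor = conn.compute.find_flavor("m1.small")       # CPU/RAM sizing for the VNF
network = conn.network.find_network("branch-mgmt")  # management network at the branch

server = conn.compute.create_server(
    name="branch42-vrouter",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)       # block until the VNF is ACTIVE
print(server.name, server.status)
```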

In many ways, NETCONF vs OpenStack is a political decision. One that determines whether a combination of DevOps culture and IT teams will take power.

OpenStack: Free as in freedom, not as in price

OpenStack is not a piece of software, but rather a distribution of modules with heterogeneous documentation. More importantly, putting OpenStack into production requires gathering a critical mass of skilled experts. You need a good set of experienced architects who know what works, what does not and what will fail if you do things in a certain way. One well-known issue with OpenStack, for example, concerns planning upgrades. This is not a trivial job; VMware still makes a lot of money, and Red Hat and Mirantis put massive effort into packaging and supporting OpenStack.[1]

OpenStack was conceived for data centers

Using OpenStack for uCPE is about deploying VNFs where it makes sense, i.e. based on the trade-off between cost, latency and security. It implies VNFs can be seamlessly chained between a datacenter and branches. Extending OpenStack to the branch, however, poses challenges, the biggest of which is security. When VNFs are contained within a data center, the security perimeter is clearly defined. In contrast, deploying VNFs in branches means the security boundaries are more open-ended, as connections can be made to new customer branches. In this case, openness and security often contradict each other.

See full post

SD-WAN at the CSP: Why gray solutions are slowing success


The number of service providers that have recently started promoting SD-WAN offers is striking, especially given that there is little evidence amongst them of its commercial success. Popularity and perceived demand are high in the telecoms world, but it seems few service providers have really won over end-users yet.

Just another example of industry hype? Or is there more to SD-WAN than meets the eye?

There certainly is. Or rather, there will be soon. The increasing move towards a true white-box approach will undoubtedly be the driver in realizing SD-WAN deployments at the CSP and generating real value behind the hype. So, why have CSPs lagged?

The SD-WAN romance

The appeal of SD-WAN is its software-based, flexible and fully programmable nature. The premise of a service delivered using Commercial Off-The-Shelf (COTS) hardware offers the illusion of freedom. COTS hardware suggests that users are no longer locked into one solution, as new Virtual Network Functions (VNFs) can be added from a broad choice of third-party vendors.

As there is no ‘one-size-fits-all’ solution, the end-user can be critical and selective about the services it chooses, based on its own budget and requirements. In turn, this means it can benefit from competition, as vendors battle to deliver more innovative, cost-effective services.

See full post

Fake Virtualization


Pravin Mirchandani, CMO, OneAccess (an EKINOPS brand), dissects a poignant phrase coined at this week’s SDN NFV World Congress.

On day one of this year’s SDN NFV World Congress in The Hague, I was intrigued by a phrase used in the keynote: ‘fake virtualization’.

The speaker, Jehanne Savi, Executive Leader, All-IP & On-Demand Networks programs at Orange, used it to describe the status of the industry. It was a powerful accusation but, aside from being tuned in to the zeitgeist of our times, what did she really mean?

Did she mean vendors were trumpeting false claims about their solutions? Or was she suggesting that they were failing to disaggregate network functions properly? Perhaps she was pointing to a failure in software abstraction, or a lack of software-defined control or, even worse, a recourse to proprietary APIs or protocols. The longer I thought about it, the more interpretations sprang to mind.

Her true meaning became clear towards the end of her keynote, when she used the words ‘vertical solutions’ as a direct translation for her neologism. Only then did the links between her various arguments appear. This wasn’t a single instance of fake news or testimony, but rather a broader lamentation on the lock-in nature of mainstream virtualization solutions; on the industry’s failure to enable an open, multi-vendor landscape that would give operators the choice and flexibility they sought to build new competitive solutions.

See full post

Unpacking the technologies behind the Zero-Touch Provisioning of a universal CPE


Explore the combination of technologies that enable remote provisioning and management of the uCPE and the VNFs this powerful new device supports.

A Universal Customer Premises Equipment (uCPE) consists of software and hardware components that create a small virtualization platform at the customer premises, capable of running multiple Virtual Network Functions (VNFs) in a local service chain. This is similar to running virtualized network functions in the datacentre, but at a smaller scale. It enables Communication Service Providers (CSPs) to disaggregate software and hardware at the CPE level and provides them with unprecedented flexibility to run any type of service on the same commoditized hardware platform.

The services delivered by this programmable end-user device are in general controlled by Next-Generation Service Orchestrators, which take care of the service configuration aspects of the delivered services. Another level of orchestration concerns the deployment of the uCPE in the field. One of the challenges is to minimize its deployment cost using zero-touch provisioning. Pushing a new configuration to a uCPE is more complicated than to legacy CPEs because, in addition to the uCPE’s own configuration, the service chaining topology and the VNF images with their initial configurations must also be pushed to the device. Using the NETCONF protocol with YANG data models, however, it is possible to push the complete initial configuration to the uCPE, including the service chaining configuration and the VNF images with their initial start-up configuration. The initial communication with the provisioning server can be achieved using the NETCONF Call Home functionality, which allows the CPE to identify itself to the provisioning server and receive the correct configuration associated with the customer site where the device is installed.
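As a rough sketch of the push step described above (not a description of EKINOPS’ actual implementation), the following uses the Python ncclient library to send an initial configuration to a uCPE over NETCONF. The address, credentials, YANG namespace and payload are hypothetical, and the Call Home exchange that would normally let the device initiate the session is left out.

```python
# Hedged illustration of pushing an initial uCPE configuration over NETCONF.
# In a real zero-touch flow the uCPE would first contact the provisioning server
# via NETCONF Call Home (RFC 8071); that step is omitted here.
from ncclient import manager

# Placeholder payload: a real one would follow the vendor's YANG models and also
# carry the service-chain topology and VNF bootstrap configurations.
INITIAL_CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <system xmlns="urn:example:ucpe-system">  <!-- hypothetical namespace -->
    <hostname>branch42-ucpe</hostname>
  </system>
</config>
"""

with manager.connect(host="192.0.2.10", port=830, username="admin",
                     password="secret", hostkey_verify=False) as m:
    # Assumes the device supports the candidate datastore; otherwise target="running".
    m.edit_config(target="candidate", config=INITIAL_CONFIG)
    m.commit()  # the whole initial configuration is applied in one transaction
```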

With zero-touch provisioning it is possible to install a uCPE and its configuration in an automated way. In many cases, however, end-to-end orchestration systems don’t support zero-touch provisioning yet or provisioning systems are not in place or sufficiently mature to support this level of automation.

In addition to the OneAccess-branded uCPE hardware (OVP, or Open Virtualized Platform) and software (LIM, or Local Infrastructure Manager), EKINOPS also offers OneManage, a solution for zero-touch provisioning of uCPEs based on a service catalog. OneManage provides a northbound interface to OSS/BSS systems to receive customer-related data associated with a new uCPE deployment. In this way, OneManage acts as an infrastructure orchestrator, or sub-orchestrator, taking care of the provisioning of uCPEs and the management of the installed uCPE base.

See full post

Fast Caterpillars: Solving the Complexity Crisis in Carrier Automation


A new focus on simplicity and standardization holds the key, says Pravin Mirchandani, CMO, OneAccess.

At the recent Zero-Touch and Carrier Automation Congress in Madrid, I was reminded of a quote from George Westerman of MIT:

“When digital transformation is done right, it’s like a caterpillar turning into a butterfly, but when done wrong, all you have is a really fast caterpillar.”

One of the most debated topics at the show was how to deal with issues of complexity when enabling automation in operators’ networks. The processes currently used to manage these networks were designed to make human intervention and management relatively efficient. In the new world of automation, however, even minimal human intervention is problematic. Despite the industry’s best efforts, translating complex human decision-making into a computer-readable, algorithm-friendly process remains a real challenge.

Redesigning around simplicity

Indeed multiple speakers at the show agreed that trying to automate a complex set of processes was actually the wrong approach, and that a fresh take was needed. Starting again, to define new, simpler processes which focus on enabling zero-touch provisioning and automation, offers a more logical and achievable route forward.

See full post

Agility is the Goal and Lack of it the Handbrake


When looking at the virtualization landscape in its broader sense, I can’t help seeing that the real problem is agility, or rather the lack of it.

Take a look at SDN, whose main premise is to separate control from the data plane and thereby achieve high agility for its users. Same thing for NFV: abstract the software from the hardware and then combine and deploy software functions at will for high agility. So the operators, at least the bigger ones who have the least agility, are investing megabucks to deliver on the twin promises of SDN and NFV. What’s frustrating for them is that they are doing so lamentably slowly and in a decidedly non-agile manner.

In a sense it’s not so surprising. In effect, the operators are trying to re-engineer 15 years of painstakingly built managed services processes in a very short period of time. And they are doing so using a combination of new operational and management techniques, new albeit commonly available hardware and re-packaged software, while co-opting their existing set of waterfall processes and organisational silos. This is like pressing on the accelerator with the handbrake on. Some, e.g. Verizon and belatedly AT&T, are trying to loosen the rigid structure of their organisations to create a more agile decision-making process and a DevOps culture where network specialists and IT folk combine to break down internal barriers and loosen their internal disk-brakes.

It’s when we take a look at the arrival of SD-WAN that we see the real impact of lack of agility. SD-WAN vendors position their wares to enterprises with self-select portals, centralised policy control and the ability to take advantage of cheaper Internet links to deal with all the Cloud-based, Internet services that are swamping their traditional operator-provided WAN connections. This puts control and thus agility in the hands of the enterprises and this is precisely what they want.

The response of the operators to SD-WAN however is telling. As opposed to SDN and NFV, where they are re-engineering their processes with open APIs to (slowly) deliver on the promise of virtualization, they have taken a very different tack with SD-WAN. For the most part, they have:

See full post

Why the time to move to NETCONF/YANG is now


With backing from major NFV/SDN projects and industry organizations, NETCONF, the client/server protocol designed to configure network devices more clearly and effectively, will soon become ubiquitous. Operators who migrate to NETCONF can future-proof their operations for NFV and also reap some of the short-term benefits of automation today.

Why NETCONF/YANG?

By addressing the shortcomings of existing network configuration methods such as the Simple Network Management Protocol (SNMP) and the Command Line Interface (CLI), NETCONF was developed to enable more efficient and effective network management. SNMP has long been cast off by operators and CSPs as hopelessly complicated and difficult to decipher. CLI may be readable by a network engineer, but it is prone to human error and can lead to vendor lock-in, since proprietary implementations often mean that only one vendor’s element management system can manage its network elements.

NETCONF, on the other hand, is designed specifically with programmability in mind, making it perfect for an automated, software-based environment. It enables a range of functions to be delivered automatically in the network, while maintaining flexibility and vendor independence (by removing the network’s dependence on device-specific CLI scripts). NETCONF also offers network management that is not only human readable, but also supports operations like transaction-based provisioning, querying, editing and deletion of configuration data.

YANG is the data modelling language that makes NETCONF useful, describing the device configuration and state information that can be transported via the NETCONF protocol. The configuration is plain text and human-readable, and it is easy to copy, paste and compare between devices and services. Together, NETCONF and YANG can deliver a thus far elusive mix of predictability and automation.
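A small sketch may help illustrate what transaction-based provisioning means in practice. Using the Python ncclient library (device address, credentials and the configuration snippet below are invented for illustration), a change is written to the candidate datastore, validated, and only then committed; if anything fails, it is discarded and the running configuration is never left half-modified.

```python
# Sketch of NETCONF's transactional workflow: edit the candidate datastore,
# validate, commit; discard on any error so nothing is partially applied.
# Host, credentials and the QoS snippet are hypothetical.
from ncclient import manager
from ncclient.operations import RPCError

QOS_CHANGE = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <qos xmlns="urn:example:qos">  <!-- hypothetical YANG namespace -->
    <policy>
      <name>voice-priority</name>
    </policy>
  </qos>
</config>
"""

with manager.connect(host="192.0.2.20", port=830, username="admin",
                     password="secret", hostkey_verify=False) as m:
    try:
        m.edit_config(target="candidate", config=QOS_CHANGE)
        m.validate(source="candidate")   # device checks the whole candidate config
        m.commit()                       # apply as one transaction
    except RPCError:
        m.discard_changes()              # roll back; running config stays untouched
        raise
```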

NFV needs NETCONF

What’s even more powerful is that NETCONF and YANG combined offer the flexibility needed to manage both virtual and physical devices. This means that operators can get going with NETCONF now, before they start ripping out old devices and replacing them with white boxes. This is a necessary investment in the future; as we look forward to a virtualized networking environment, where network functions are spun up and changed continuously, the high level of automation enabled by NETCONF/YANG is not just preferable, it is essential.

See full post

Dissonance in the World of Virtualization


A collection of thoughts has been spinning in my head over the last few weeks, based on various customer visits, presentations and panel debates I have seen or participated in recently. They have collectively formed into a common notion: that of dissonance. These reflections have come about as a result of a number of unresolved issues that are either being ignored or are falling between the cracks as we all grapple with the complexities of introducing virtualization.

I’ll start with MANO. Many of us know that there is competition between Open Source MANO, sponsored principally by Telefonica’s I+D entity (its research and development arm), and ONAP (Open Network Automation Platform), which combines the outputs of AT&T’s homegrown MANO system called ECOMP and Open-O (the Open Orchestrator Project) into a common open-source project. Setting aside the architectural differences between these two major MANO initiatives and their respective levels of industry take-up, what’s increasingly obvious is that both are horrendously complex and, quite frankly, beyond the means of most operators to implement. Open source only saves the cost of acquisition, not of integration and deployment. That’s why many operators, both large and small, are sitting on the sidelines waiting for a viable solution to present itself. This might quite possibly be in the form of MANO-as-a-service, which is already attracting the attention of business managers at professional services outfits and even venture-capital funding, which reminds me that Sun Tzu said in The Art of War: “In the midst of chaos, there is also opportunity”.

Another dissonance that strikes me is that between the CTO offices in the vanguard of introducing virtualization and their own business managers. It’s not just that virtualization is proving horribly expensive to introduce, making it difficult to flesh out a convincing ROI spreadsheet. Rather, the technical people have become so absorbed by the scope and scale of the technical challenges in front of them that they have collectively lost sight of the end goal: how to market virtualized network services and deliver benefits to their end-customers. A recent big theme at conferences has been the challenges involved in on-boarding new VNFs, over 100 in some cases. My questions, though, are: who needs all this choice; how can customers without big IT budgets select which are the right ones for them; what is the benefit for these end-users as opposed to what they are doing today (see my recent blog entitled: ‘Hello from the Other Side: The Customer Benefits of NFV’); and indeed what’s in it for the operators, i.e. how are they going to make money when it’s not at all clear which VNFs they can effectively sell to the volume part of the enterprise market as managed services, i.e. where they make money?

There is also increasing dissonance on the vendor side, and recently several vendors have openly voiced their frustrations on this point: virtualizing products requires investment, and until NFV starts moving out of the labs and into volume service deployments, we are all investing money in the hope of generating payback at some uncertain point in the future.

The other huge dissonance is that all the investment in next-generation virtualization efforts has eaten up most of the IT resources for introducing new services and platforms for the operators’ mainstream existing business. The number of delayed service launches at operators due to ‘IT budget issues’ or ‘lack of available personnel’ is now an industry-wide phenomenon, and it is delaying cost-reduction and business-generating service initiatives in the market. This is ironic, as virtualization is meant to accelerate service delivery, not stall it.

See full post

Hello from the Other Side: The Customer Benefits of NFV


Operators should start their NFV charm offensive now says Pravin Mirchandani, CMO, EKINOPS.

Surprisingly few of today’s conversations about NFV address how a virtualized infrastructure will benefit end-user businesses. If operators and CSPs want to bring their customers across without resistance, however, they must also consider how to position the benefits of NFV, particularly since some may view its new technologies as an unsolicited risk.

Flexible provisioning is the big persuader. Most businesses experience peak traffic at infrequent and predictable times. Despite only needing this level of service occasionally, however, they have had no option but to sign contracts that deliver this maximum capacity all the time. Operators haven’t had the flexibility to deliver anything else. Virtualization can change all of this.

This is big news because firms of all shapes and sizes are caught in this trap. Pizza companies do 90% of their business on Friday and Saturday nights, yet provision for these peaks seven days a week. This is a huge overhead, particularly for a national networked outfit like Domino’s. Other businesses, particularly retail e-commerce sites, are governed by seasonality. The spikes in online sales triggered by Black Friday and the January sales are well documented. Hotels beef up their network capacity for four intensive months of high occupancy but, under today’s model, must provision equally for the quieter eight.

Virtualization can address this need. A software-defined infrastructure will give customers access to a portal through which they can self-select the services they need and specify when they need them. The programmatic benefits of virtualization enable operators to spin them up automatically. Compare that to the weeks of notice that an operator currently needs to install a new line and the end-user benefits come into stark focus.

See full post

Acceleration techniques for white-box CPEs


This blog will provide a quick introduction to, and comparison of, some of the acceleration technologies commonly available on white-box CPE, sometimes also referred to as “universal CPE” or uCPE.

Classical premises equipment has traditionally relied on specialized network processors to deliver network processing performance. Standard x86 hardware, however, was originally designed for more general-purpose compute tasks; especially when used together with a “plain vanilla” Linux implementation, it will deliver disappointing performance levels for data communication purposes unless expensive x86 CPUs are used. To address this concern, a number of software and hardware acceleration techniques have been introduced to meet the performance requirements imposed on today’s CPEs.

The processing context for white-box CPE is an environment that provides a small virtualized infrastructure at the customer premises, where multiple VMs (Virtual Machines) hosting VNFs (Virtual Network Functions) are created and run in a Linux environment. Service chaining is established between the different VMs, resulting in a final customer service which can be configured and adapted through a customer portal. In this setup, VNFs and VMs need to communicate either with the outside world through a NIC (Network Interface Card) or with another VNF for service chaining.

DPDK (Data Plane Development Kit)

In the case of white-box CPEs, DPDK provides a framework and set of techniques to accelerate data packet processing and circumvent the bottlenecks encountered in standard Linux processing. DPDK is implemented in software and basically bypasses the Linux kernel and network stack to establish a high-speed data path for rapid packet processing. Its great advantage is to produce significant performance improvements without hardware modifications. Although DPDK was originally developed for Intel-based processor environments, it is now also available on other processors such as ARM.

AES-NI (Advanced Encryption Standard New Instructions)

This is an extension to the x86 instruction set for Intel processors to accelerate the speed of encrypting and decrypting data packets using the AES standard. Without this instruction set, the encryption and decryption process would take a lot more time since it is a very compute-intensive task. The encryption is done at the data plane level and is used to secure data communications over Wide Area Networks.
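As a quick illustration, on a Linux-based white-box one can check whether the CPU actually exposes AES-NI by looking for the “aes” flag reported by the kernel; the small check below is a simple sketch of that, nothing more.

```python
# Simple, Linux-specific check: AES-NI support appears as the "aes" flag in
# /proc/cpuinfo. Purely illustrative.
def has_aes_ni(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    try:
        with open(cpuinfo_path) as cpuinfo:
            for line in cpuinfo:
                if line.startswith("flags"):
                    return "aes" in line.split()
    except OSError:
        pass  # non-Linux system or unreadable file
    return False

if __name__ == "__main__":
    print("AES-NI available:", has_aes_ni())
```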

See full post

The Song of the Sirens: Five ways to spot hidden NFV vendor lock-in


One of the big attractions of NFV is that it gives operators a chance to break free of single vendor contracts and establish greater control over the future development of their networks.

Genuinely ‘open NFV’ gives operators the ability to change tack according to technical and commercial developments. It enables them to shop around for best of breed solutions and blend a mixture of, say, migration-oriented hybrid solutions with white or grey-box CPEs and connect them all to their choice of orchestrator. It also dramatically improves their negotiating power.

Yet, despite appearances, few NFV vendors practice ‘genuinely open NFV’ and instead disguise how they intend to close and lock the front door once their customer has stepped inside.

There are five common traps that vendors set for operators as they entice them toward ‘open NFV’ contracts:

#1 Charging for third-party connections

See full post

US Telcos Move Ahead on NFV as Europe Watches: 2017 Predictions


While huge progress has been made this year toward the development of virtualized services, a variety of challenges remain before widespread service roll-out can commence. 2017 will be the year that the industry takes positive steps to overcome these barriers, says Pravin Mirchandani, CMO at OneAccess Networks, who shares his five key predictions for the year ahead.

#1 Europe will stay on the sidelines as USA and Japan push on

We will see virtualized services start to roll out, principally in the USA, and to a lesser extent in Japan through NTT. While everyone remains sure that the way forward is virtualization, the industry is still figuring out how to solve the multiple technical, commercial and organizational challenges posed by migration.

Operators will be closely watching AT&T and its Domain 2.0 program, and keeping an eye on Verizon too, in a bid to learn lessons about how to implement NFV. Europe’s hard-pressed operators, in particular, will mostly stay parked in ‘watch and learn’ mode, continuing with RFx, proofs of concept and trials. In fact, we’re unlikely to see any virtualized services roll out across Europe in 2017. Compelling business cases are harder to assemble on the European continent and, until these are squared away, operators will prefer to observe how US and Japanese trail-blazers facilitate service migration and preserve their choices, both major factors driving current and near-term investment decisions.

#2 NFV’s ROI equation will need to be cracked

See full post

ROI for white-box CPEs: a question of segmentation


Operators commonly expect that moving to SDN and NFV will enable cost reductions, through efficiency gains in network management and device consolidation. As they get closer to deployment, however, commercial reality is telling a different story, depending on the market segment being addressed.

One area where the ROI for white-box CPEs is easy to justify is appliance consolidation. If you can consolidate a number of proprietary appliances into a single white-box CPE then the CAPEX savings are clear. Chaining, for instance, a vRouter, a WAN optimizer and a next-generation firewall into one single x86 appliance (i.e. a white-box CPE) delivers immediately identifiable cost savings: one appliance instead of three is basically the formula and this is a commonly targeted combination. Combine this with the prospect of increased wallet share from large enterprises, which often run their networks themselves in do-it-yourself mode, and the large enterprise segment looks increasingly attractive for operators.

Let’s be clear, though: this is just a large enterprise play. SMBs and multi-site deployments for government or highly distributed organizations have no need for WAN optimization and little need for a next-generation firewall; the on-board firewall that comes with their router, together with a PC-based anti-virus subscription and an email anti-spam service, is usually sufficient. As a result, anyone working on building business cases for white-box CPEs for the volume part of the market will attest that ROI is a tough nut to crack.

The draw for this market segment is the potential to increase ARPU by making it easier and more flexible for customers to take up additional services, through automated service delivery enabled by virtualization.

In terms of hardware CAPEX, the cost of white-box CPE deployment outstrips that of traditional CPEs. For the large enterprise segment which often deploys multiple appliances, this cost increase is compensated by reducing the number of appliances. For other market segments, where a single CPE is more typically deployed, savings need to come from OPEX reductions or TCO savings. The latter, however, is notoriously difficult to calculate and is usually irrelevant in a context where staff reductions are difficult to achieve, particularly in a period of technology transition.

See full post

Pushing through Glass Ceilings at the SDN World Congress 2016


Live from the SDN World Congress Show in The Hague, Pravin Mirchandani, CMO, EKINOPS, reflects on the industry challenges steering the dialogue at this year’s conference.

In Gartner’s hype cycle, there is an inevitable period of disillusionment that follows the initial excitement of a new technology. At the SDN World Congress this feels different: although we have probably passed the peak of inflated expectations, there is less a trough of disillusionment than a set of major impediments that need to be cleared away in order to achieve the nirvana of SDN/NFV. Most actors can see what needs to be done and are steadfastly supporting the initial objectives, but my impression is that breaking through to achieve the goals of network virtualization is like pushing through the famous glass ceiling. Though not created by prejudice, as in the traditional definition of a glass ceiling, the barriers are real and there are many.

Glass Ceiling 1: Complexity

One of the goals of software-defined networking is to reduce dependence on an army of network experts, who are difficult to recruit and retain, expensive to hire and prone to error. What’s clear is that what they do is indeed complex; and converting their expertise and processes into automated software processes and APIs is equally if not more complex, as there is a distinct lack of established practices and field-proven code to draw upon. Many of the speakers at SDN World Congress mentioned the issue of complexity and this was a constant theme in the corridor discussions. Laurent Herr, VP of OSS at Orange Business Services, stated that Orange estimated it would take 20,000 man-days to convert their tens of IT systems to achieve virtualization.

Glass Ceiling 2: Culture

Another common theme was the issue of culture. Telcos have been organised to deliver the ‘procure-design-integrate-deploy’ cycle for new services and have a well-established set of linear processes and organizational silos to achieve it. Introducing virtualized services however requires a DevOps culture based on agility, fast failing (anathema to the internal cultures of Telcos) and rapid assembly of multi-skilled teams (especially collaboration between network and IT experts) to deliver new outcomes, frequently, fast and reliably. Achieving a DevOps culture was one of the most frequently cited challenges by the Telco speakers at the Congress. Another common word they used was transformation.

Glass Ceiling 3: Lack of Expertise

It’s difficult to estimate the number of engineers that really understand the principles and practices of virtualization but they probably number in the low hundreds across the globe. Given the ability of the vendors to pay better salaries, it’s a safe bet that the majority work for them rather than for the Telcos. Growing this number is difficult as it requires combining IT, programming and network skills. Creating collaborative teams helps but finding or training people to achieve mastery of the different skills is a challenge for the whole industry. This was more of a corridor conversation rather than openly cited by the speakers but it is a glass ceiling nevertheless.

See full post

Fifty Shades of NFV?


In the racy world of CPE architecture, what virtualization-hungry service providers say they want isn’t always what they need, says Pravin Mirchandani, CMO, EKINOPS.

Alright, perhaps ‘racy’ is going a bit far, but as the virtualization industry moves out of ‘does it work’ and into ‘let’s make it happen’, pulses are certainly starting to quicken. Not least because service providers are having to make tough calls about how to architect their management and orchestration (MANO). Many of these decisions revolve around the deployment of virtualized network functions (VNFs), via some form of customer premises equipment (CPE).

Several ‘shades’ are emerging, each with their advantages and drawbacks.

The ‘NETCONF-enabled CPE’ model emulates what we have today: a fixed number of physical network functions (note: not virtual) are embedded into a traditional L3 multi-service access router. The key difference here is that the router, as the model’s name suggests, supports the NETCONF management protocol and can, as a result, be managed in a virtualized environment. In truth, this is a pretty rudimentary form of virtualization; the router can be managed by a next-generation OSS with NETCONF and its embedded physical functions can be turned on and off remotely, but that’s about it. The device is not reprogrammable, nor can its network functions be removed or replaced with alternatives.

The market for this deployment model lies in two use-cases. Firstly, as a bridging solution enabling service providers to operate traditional and virtualized network services side by side, facilitating migration. Secondly, given that many of today’s VNFs are heavy and need considerable amounts of memory and processing resources in order to operate, the more flexible white-box alternatives are costly in comparison. Specialist vendors like OneAccess have been developing dedicated CPE appliances (with embedded PNFs) for years, where compact and efficient code has always been a design goal in order to keep appliance costs under control. For more conservative operators that are keen to get ‘in the game’, the proven reliability and comparative cost efficiency of this model can offset its relatively limited flexibility. Rome wasn’t built in a day and some operators will prefer to nail the centralized management and orchestration piece before investing heavily in pure-play virtualization appliances for the network’s edge.

A purer approach is to invest in a ‘thick branch CPE’ or, in other words, an x86-based white-box solution running Linux, onto which VNF packages can be pre-loaded and, in the future, removed, replaced or even selected by customers via, say, a web portal. This approach delivers far greater flexibility and is truer to the original promise of NFV, in which the network’s functions and components can be dismantled and recomposed in order to adjust a service offer. The snag, however, is that white-box CPEs come at a cost. More memory and more processing power mean more cash. That’s why the race is on to develop compact VNFs that minimize processing requirements and, as a result, enable a limited-spec white-box to do more, with less. Again, unsurprisingly, those ahead of the curve are VNF vendors that have the experience of wringing every last drop of performance out of compact and cost-efficient appliances, purpose-designed for operators and service providers.

See full post

Thick & Thin: A Taxonomy of CPEs


In presentations at virtualization conferences, and in our discussions with operators and service providers, there remains a lot of confusion surrounding the terms ‘thick’ and ‘thin’ as they relate to customer premises equipment (CPE). This is because the terms are used interchangeably, to describe different market segments, the density of network functions as well as the nature of the CPE itself.

The roots of ‘thick’ and ‘thin’ come from the term ‘thin client’: a popular reference to a lightweight computer or terminal that depends heavily on a server or server farm to deliver data processing and application support. This contrasts with the PC, which performs these roles independently and was somewhat disparagingly referred to as a ‘fat client’ or, more neutrally, as a ‘thick client’.

This heritage is important as we look to provide a taxonomy of CPEs, which will hopefully aid our understanding of their respective roles in the delivery of virtualized network services.

Generically, CPE or ‘customer premises equipment’ refers to the equipment provided by a service provider and installed at its customers’ premises. Historically, CPE referred mainly to the supply of telephony equipment, but today the term encompasses a whole range of operator-supplied equipment including routers, switches, voice gateways, set-top boxes as well as home networking adapters.

Thick CPE refers typically to a router or switch that provides network functions at the customer premises. There are now three main types:

See full post

The White-Box CPE: Separating Myths from Reality


In the world of customer premises equipment (CPE), the white-box is a new idea. It should then come as no surprise that misconceptions among operators and CSPs about what constitutes the white-box CPE are common. Here are four of the most prevalent.

Myth #1: The white-box CPE is just commodity hardware.

No. The software-hosting platform is the most important part! Google and Apple have succeeded with smartphones because they created killer platforms, carefully designed to provide a secure environment ready for third-party developers to exploit, which enabled their apps base to proliferate. The white-box CPE is no different. Don’t get me wrong, it is still all about using commodity hardware, but this is just the tip of the iceberg. Its enabling potential stems from the software platform that controls the hardware, not from the hardware itself.

The same goes for the network functions running on the white-box CPE. They need to be installed, chained, activated, run, provisioned, monitored and charged, and, of course, all of this done in a secure manner with maximum efficiency and predictability. We’re not talking about any old software here, either. This is highly specialist, instance-specific software but, unlike smartphone apps, the functions are often dependent on each other. There are lots of open-source components that can carry out each of these tasks individually, but they also need to be integrated, managed with homogeneous, standardized APIs and provided with pre-packaged, documented use cases to facilitate their integration. We often see service providers adopting a DIY approach. But this only lasts until they realize the extent of work and the depth of know-how required to assemble all these pieces together. Building a demonstrator is one thing; warrantying the operational lifecycle over the lifetime of a service is another.

Myth #2: White-box CPEs are for everyone.

The whole idea of white-box CPEs is to foster the ability to take VNFs from multiple vendors and have the freedom to mix and match them according to the service provider’s desired functions, price and brand. This is all good ‘on paper’. The reality, however, is different. Just like when specifying additional options on a car, the bill soon adds up. In fact, imagine being able to walk into a VW dealer and demand options from BMW, Mercedes, Honda and Ford. Welcome to the world of the white-box CPE!

Large enterprises can afford it because, right now, they are buying single-use network appliances and stacking them up at the customer’s premises. The white-box CPE’s promise of appliance consolidation is so great that the economics allow it to be expensive.

See full post

Orchestrating the network beyond the NFVI boundary


A key promise of SDN/NFV is to enable service providers to offer innovative, on-demand enterprise services with unprecedented flexibility. Building a network capable of delivering flexible, dynamic, customer-tailored services, is however a challenge for service providers. As a matter of fact, end-to-end service orchestration within the virtualized infrastructure and across complex multi-vendor network domains outside the NFVI is anything but a walk in the park.

Standardization bodies, the acceptance of open-source solutions and service-provider-led programs such as AT&T Domain 2.0, Orange SDN for Business or Deutsche Telekom TeraStream have driven interoperability in the NFVI and MANO spheres. One major roadblock to capturing the full potential of network virtualization is the orchestration of network elements beyond the NFVI boundary, in order to provide full service delivery automation across the network. Yet one key question remains: how can orchestrators work with networking elements outside the NFVI? As service providers look to transform their networks, the answer to this question is crucial in order to deliver end-to-end enterprise managed services.

The challenge for the orchestrator is to configure all the nodes in the service delivery chain from the customer premises to the NFVI and steer the traffic to the virtual service platform. This integration work across multi-vendor network domains can be substantial, as many network devices have to be configured and managed, often with specific proprietary interfaces. Moreover, developing vendor equipment adapters in the orchestration platform to cope with proprietary CLIs and varying administration protocols is a prospective cost that industry players want to avoid, and one based on a short-term view of the problem.

Verizon’s white paper on its SDN/NFV strategy confirms that cross-domain and cross-vendor programmability is key to meet the dynamicity promised by SDN/NFV services. In order to replace a myriad of element management systems (EMS) tied to network elements with proprietary protocols and interfaces, Verizon recommends in the near term to use domain-specific SDN controllers to manage vendor-specific network elements.

Looking at the longer-term perspective, there is an opportunity to unify a diverse set of provisioning and configuration chains under a common NETCONF/YANG umbrella to simplify integration, operation and maintenance. NETCONF/YANG provides a perfect programmatic interface to extend the orchestration domain and configure end-to-end service chains on demand, by spawning VNFs in the NFVI and steering traffic according to service requirements across the network. The significant traction of NETCONF/YANG in the multi-vendor orchestration space makes it a good fit to streamline the integration of end-to-end automated SDN/NFV services. The support of NETCONF/YANG by a number of commercial orchestration platforms (Ciena Blue Planet, WebNMS, Cisco NSO, ...) and the standardization of YANG modules by the MEF (Metro Ethernet Forum) for the orchestration of Carrier Ethernet 2.0 services add momentum to an already fast-moving trend toward a NETCONF/YANG end-to-end orchestrated network model.
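To give a feel for what a common NETCONF/YANG umbrella can look like in code, the sketch below (Python with the ncclient library; node addresses, credentials and the service snippet are placeholders) walks the elements of a service chain and pushes a modelled configuration to each one over the same protocol, instead of driving each vendor’s CLI through a dedicated adapter. A real orchestrator would derive per-device payloads from its YANG models and handle partial failures, which this sketch deliberately ignores.

```python
# Illustrative only: one NETCONF/YANG "umbrella" driving several elements of an
# end-to-end chain. Node addresses, credentials and the service model are made up.
from ncclient import manager

SERVICE_CHAIN_NODES = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]  # e.g. CPE, PE, NFVI edge

SERVICE_CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <service xmlns="urn:example:evpl">  <!-- hypothetical service model -->
    <name>customer42-evpl</name>
  </service>
</config>
"""

for node in SERVICE_CHAIN_NODES:
    with manager.connect(host=node, port=830, username="oss",
                         password="secret", hostkey_verify=False) as m:
        m.edit_config(target="candidate", config=SERVICE_CONFIG)
        m.commit()
        print("configured", node)
```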

See full post

Do CLI Experts Need to Worry about Their Jobs?


Our customers tell us that NETCONF is fast becoming the preferred way to provision virtualized networking devices. Tail-f has been the main advocate of this protocol, with a pitch that goes roughly as follows: “NETCONF provides the toolset to configure an end-to-end service with a single network-wide transaction. OSS programmers spend less time handling error cases, as the role of transactions is precisely to ensure success and roll back in case of errors. You do not need to pay high-salary CLI experts to handle such rollbacks manually when the OSS has failed to do it properly. And you can now automate service creation at an unprecedented level.”

The more you automate, the fewer people you need; and with the transactional capabilities of NETCONF, you do not need them to recover from the disaster of accidental automation glitches. That has got to be rather scary for the cohorts of CCIE-certified engineers and other CLI experts, doesn’t it? Does it mean their expertise will become redundant? Do they need to find a new mission in life?

I recently met some CLI engineers. They see the change coming but, as they are buried by the demands of their day-to-day activities, they have not yet had the time to study and experiment with NETCONF. So this protocol, along with YANG data modelling, remains very abstract, if not confusing, to them. Of course they understand quite clearly that NETCONF is about a programmatic approach to the process of service creation. Network engineers thus understand they must acquire new skills in programming, but it is certainly not their comfort zone today.

In many cases, an OSS will not manipulate YANG models or NETCONF directly. The likely way to program network services is to use tools that generate interface code from the relevant YANG models; programmers will then use that API to create the services. For IT engineers, service creation is not much more than mapping a Service Abstraction Layer (SAL) data model to objects on a set of networking functions or devices.

But that is not the starting point for network engineers. Their initial steps are still the same as before NETCONF: create a network design, prepare a reference setup, elaborate configuration templates, write troubleshooting guides, etc., with the notable difference that such templates must now be written in NETCONF XML. With OneAccess products, this is a fairly natural step: first create the reference configuration using the familiar Command-Line Interface, then use a few commands to export it as XML or a set of XPath statements. Using this process, CLI can be mapped to NETCONF pretty intuitively. In other words, extensive CLI knowledge is still a valuable asset for engineers. Working with NETCONF is then an easy next step, and defining XML templates is, after all, not so difficult.
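To illustrate the template workflow just described (the XML fragment, namespace, addresses and values below are invented, and a real OneAccess export would of course follow the product’s own models), a CLI-derived reference configuration exported as XML can be parameterized and pushed per site with a few lines of Python and the ncclient library:

```python
# Sketch of the CLI-to-NETCONF template workflow: take an XML fragment exported
# from a CLI-built reference configuration, substitute per-site values, push it.
# Template content, namespace, address and credentials are hypothetical.
from string import Template
from ncclient import manager

# In practice this fragment would be exported from the reference device.
SITE_TEMPLATE = Template("""
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:example:interfaces">  <!-- hypothetical namespace -->
    <interface>
      <name>wan0</name>
      <address>$WAN_ADDRESS</address>
    </interface>
  </interfaces>
</config>
""")

site_config = SITE_TEMPLATE.substitute(WAN_ADDRESS="203.0.113.5/30")

with manager.connect(host="192.0.2.30", port=830, username="netops",
                     password="secret", hostkey_verify=False) as m:
    m.edit_config(target="candidate", config=site_config)
    m.commit()
```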

See full post

Latest News

  • EKINOPS Opens New US Office

    EKINOPS (Euronext Paris - FR0011466069 – EKI), a leading supplier of optical transport equipment and router solutions for service providers and telecom operators, today announces the opening of new North America Headquarters in Rockville, Maryland, USA.

     
  • EKINOPS and Passman meet demands of “bandwidth hungry” hospitality customers

    EKINOPS (Euronext Paris - FR0011466069 – EKI), a leading provider of open and fully interoperable Layer 1, 2 and 3 network solutions, and international hospitality digital service specialists, Passman, today announced the availability of true 1Gb services-enabling routers, which will further enhance quality of service (QoS) delivery for Wi-Fi guest access services.

     
  • EKINOPS Centralizes Metro Ethernet SLA Monitoring & Service Activation for CSPs with new 10G Access Device

    EKINOPS (Euronext Paris - FR0011466069 – EKI), a leading global supplier of telecommunications solutions for operators, today unveils a OneAccess 10G Ethernet Access Device (EAD) that will enable operators and communication service providers to offer high-speed Ethernet services to Enterprise and wholesale customers.

     
