WELCOME TO THE EKINOPS BLOG

SD-WAN at the CSP: Why gray solutions are slowing success

It is striking how many service providers have recently started promoting SD-WAN offers, especially given how little evidence there is of commercial success among them. Popularity and perceived demand are high in the telecoms world, but it seems few service providers have really won over end-users yet.

Just another example of industry hype? Or is there more to SD-WAN than meets the eye?

There certainly is. Or rather, there will be soon. The increasing move towards a true white-box approach will undoubtedly be the driver that turns SD-WAN deployments at the CSP into reality and generates real value behind the hype. So why have those deployments lagged?

The SD-WAN romance

The appeal of SD-WAN is its software-based, flexible and fully programmable nature. The premise of a service delivered using Commercial Off-The-Shelf (COTS) hardware offers the illusion of freedom. COTS hardware suggests that users are no longer locked into one solution, as new Virtual Network Functions (VNFs) can be added from a broad choice of third-party vendors.

As there is no ‘one-size-fits-all’ solution, end-users can be critical and selective about the services they choose, based on their own budgets and requirements. In turn, they benefit from competition, as vendors battle to deliver more innovative, cost-effective services.

Fake Virtualization

Pravin Mirchandani, CMO, OneAccess (an EKINOPS brand), dissects a poignant phrase coined at this week’s SDN NFV World Congress.

On day one of this year’s SDN NFV World Congress in The Hague, I was intrigued by a phrase used in the keynote: ‘fake virtualization’.

The speaker, Jehanne Savi, Executive Leader, All-IP & On-Demand Networks programs at Orange, used it to describe the state of the industry. It was a powerful accusation but, aside from being tuned in to the zeitgeist of our times, what did she really mean?

Did she mean vendors were trumpeting false claims about their solutions? Or was she suggesting that they were failing to disaggregate network functions properly? Perhaps she was pointing to a failure in software abstraction, or a lack of software-defined control or, even worse, a recourse to proprietary APIs or protocols. The longer I thought about it, the more interpretations sprang to mind.

Her true meaning became clear towards the end of her keynote, when she used the words ‘vertical solutions’ as a direct translation for her neologism. Only then did the links between her various arguments appear. This wasn’t a single instance of fake news or testimony, but rather a broader lamentation on the lock-in nature of mainstream virtualization solutions; on the industry’s failure to enable an open, multi-vendor landscape that would give operators the choice and flexibility they sought to build new competitive solutions.

Fast Caterpillars: Solving the Complexity Crisis in Carrier Automation

A new focus on simplicity and standardization holds the key, says Pravin Mirchandani, CMO, OneAccess.

At the recent Zero-Touch and Carrier Automation Congress in Madrid, I was reminded of a quote from George Westerman of MIT:

“When digital transformation is done right, it’s like a caterpillar turning into a butterfly, but when done wrong, all you have is a really fast caterpillar.”

One of the most debated topics at the show was how to deal with issues of complexity when enabling automation in operators’ networks. The processes currently used to manage these networks were designed to make human intervention and management relatively efficient. In the new world of automation, however, even minimal human intervention is problematic. Despite the industry’s best efforts, translating complex human decision-making into a computer-readable, algorithm-friendly process remains a real challenge.

Redesigning around simplicity

Indeed, multiple speakers at the show agreed that trying to automate a complex set of processes was actually the wrong approach, and that a fresh take was needed. Starting again, to define new, simpler processes which focus on enabling zero-touch provisioning and automation, offers a more logical and achievable route forward.

When the Rubber (Finally) Hits the Road: 2018 Predictions

Looking ahead to 2018, Pravin Mirchandani, CMO, OneAccess Networks, anticipates a year of challenge and opportunity for operators.

Europe Starts to Get Serious about Virtualization

After a long period of experimenting with virtualization technology in their labs, European Telcos (or at least some of them) will get serious about introducing virtualized services for their enterprise customers. This is clearly apparent at Deutsche Telekom, in particular, but also at Orange and BT. I’d still hesitate to predict that we will see many actual service launches in Europe, but nonetheless decisions will be taken, budgets will be committed and the new product introduction (NPI) work for virtualized services will begin.

The Cost of White-Box CPEs will be Driven Down but Not by Price Reduction

It’s clear that all the big Telcos are convinced of the benefits of an on-premise white-box strategy. While in 2017 they debated how to move away from grey boxes, principally from Cisco and Juniper, they now face a different problem: cost, particularly for the appliance they need for the volume part of the enterprise market, commonly known as the ‘small uCPE’.

Yet if half the cost of a white-box derives from a monopoly vendor – Intel – then, in the absence of competition (hint: think ARM), the only way to reduce costs will be to moderate demands on it. This will come from two directions. The business managers at the Telcos will insist on a smaller set of VNF requirements to reduce the number of cores and memory required (the two key drivers of costs for a white-box appliance) and the VNF vendors will gradually reduce their resource footprint in response to the Telcos’ demands.

The Operators Will Realise that SD-WAN is Actually a Marketing Problem

SD-WAN puts choice in the hands of enterprises and does so at reduced cost with automation removing complexity, a winning combination that is taking business away from many Telcos. So far, the Telcos have looked at this principally as a technology problem: how to build self-select portals to introduce choice for their customers, how to automate their back-end processes and how to co-opt SD-WAN technology without vendor lock-in.

Why the time to move to NETCONF/YANG is now

With backing from major NFV/SDN projects and industry organizations, NETCONF, the client/server protocol designed to configure network devices more clearly and effectively, will soon become ubiquitous. Operators who migrate to NETCONF can both future-proof their operations for NFV and reap some of the short-term benefits of automation today.

Why NETCONF/YANG?

By addressing the shortcomings of existing network configuration protocols like Simple Network Management Protocol (SNMP) and Command Line Interface (CLI), NETCONF was developed to enable more efficient and effective network management. SNMP has long been cast off by operators and CSPs as hopelessly complicated and difficult to decipher. CLI may be readable by a network engineer, but it is prone to human error and can lead to vendor lock-in, since proprietary implementations often mean that only one vendor’s element management system can manage its network elements.

NETCONF, on the other hand, is designed specifically with programmability in mind, making it perfect for an automated, software-based environment. It enables a range of functions to be delivered automatically in the network, while maintaining flexibility and vendor independence (by removing the network’s dependence on device-specific CLI scripts). NETCONF configuration data is not only human-readable; the protocol also supports operations like transaction-based provisioning, querying, editing and deletion of that data.

YANG is the data modelling language that makes NETCONF useful, by describing the device configuration and state information that can be transported via the NETCONF protocol. The configuration is plain text and human-readable, plus it’s easy to copy, paste and compare between devices and services. Together, NETCONF and YANG can deliver a thus far elusive mix of predictability and automation.
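
To make this concrete, here is a minimal sketch, not taken from any EKINOPS or OneAccess product documentation, of what a NETCONF transaction looks like when driven from a script or OSS, using the open-source Python ncclient library. The device address, credentials and the YANG namespace in the payload are placeholder assumptions.

from ncclient import manager

# Hypothetical configuration payload; in practice the element names and
# namespace come from the device's own YANG models.
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <system xmlns="urn:example:system">
    <hostname>branch-cpe-01</hostname>
  </system>
</config>
"""

with manager.connect(host="192.0.2.1", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as m:
    # Transaction-style workflow: stage the change, then commit it.
    # Assumes the device supports the candidate datastore; otherwise
    # target="running" can be used directly.
    m.edit_config(target="candidate", config=CONFIG)
    m.commit()
    # The running configuration comes back as readable XML that can be
    # copied, diffed and compared between devices.
    print(m.get_config(source="running").data_xml)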

NFV needs NETCONF

What’s even more powerful is that NETCONF and YANG combined offer the flexibility needed to manage both virtual and physical devices. This means that operators can get going with NETCONF now, before they start ripping out old devices and replacing them with white boxes. This is a necessary investment in the future; as we look forward to a virtualized networking environment, where network functions are spun up and changed continuously, the high level of automation enabled by NETCONF/YANG is not just preferable, it is essential.

Dissonance in the World of Virtualization

A collection of thoughts has been spinning in my head over the last few weeks, based on various customer visits, presentations and panel debates I have seen or participated in recently, and they have collectively formed into a common notion: that of dissonance. These reflections have come about as a result of a number of unresolved issues that are either being ignored or are falling between the cracks as we all grapple with the complexities of introducing virtualization.

I’ll start with MANO. Many of us know that there is competition between Open Source MANO, sponsored principally by Telefonica’s I+D entity (its research and development arm), and ONAP (Open Network Automation Platform), which combines the outputs of AT&T’s homegrown MANO system, ECOMP, and Open-O (Open Orchestrator Project) into a common open-source project. Setting aside the architectural differences between these two major MANO initiatives and their respective levels of industry take-up, what’s increasingly obvious is that both are horrendously complex and, quite frankly, beyond the means of most operators to implement. Open source only saves the cost of acquisition, not of integration and deployment. That’s why many operators, both large and small, are sitting on the sidelines waiting for a viable solution to present itself. This might quite possibly come in the form of MANO-as-a-service, which is already attracting the attention of business managers at professional services outfits and even venture-capital funding. It reminds me of Sun Tzu’s line in the Art of War: “In the midst of chaos, there is also opportunity”.

Another dissonance that strikes me is that between the CTO offices in the vanguard of introducing virtualization and their own business managers. It’s not just that virtualization is proving horribly expensive to introduce, and therefore difficult to convincingly flesh out an ROI spreadsheet for. Rather, the technical people have become so absorbed by the scope and scale of the technical challenges in front of them that they have collectively lost sight of the end goal: how to market virtualized network services and deliver benefits to their end-customers. A recent big theme at conferences has been the challenges involved in on-boarding new VNFs, over 100 in some cases. My questions, though, are: who needs all this choice; how can customers without big IT budgets select which are the right ones for them; what is the benefit for these end-users as opposed to what they are doing today (see my recent blog entitled ‘Hello from the Other Side: The Customer Benefits of NFV’); and indeed what’s in it for the operators – how are they going to make money when it’s not at all clear which VNFs they can effectively sell as managed services to the volume part of the enterprise market, i.e. where they make money?

There is also increasing dissonance on the vendor side, and recently several vendors have openly voiced their frustrations on this point: virtualizing products requires investment, and until NFV starts moving out of the labs and into volume service deployments we are all investing money in the hope of generating payback at some uncertain point in the future.

The other huge dissonance is that all the investment in next-generation virtualization efforts has eaten most of the IT resources for introducing new services and platforms for the operators’ mainstream existing business. The number of service launches delayed at operators due to ‘IT budget issues’ or ‘lack of available personnel’ is now an industry-wide phenomenon, holding back cost-reduction and business-generating service initiatives in the market. This is ironic, as virtualization is meant to accelerate service delivery, not stall it.

Hello from the Other Side: The Customer Benefits of NFV

Operators should start their NFV charm offensive now says Pravin Mirchandani, CMO, EKINOPS.

Surprisingly few of today’s conversations about NFV address how a virtualized infrastructure will benefit end-user businesses. If operators and CSPs want to bring their customers across without resistance, however, they must also consider how to position the benefits of NFV, particularly since some may view its new technologies as an unsolicited risk.

Flexible provisioning is the big persuader. Most businesses experience peak traffic at infrequent and predictable times. Despite only needing this level of service occasionally, however, they have had no option but to sign contracts that deliver this maximum capacity all the time. Operators haven’t had the flexibility to deliver anything else. Virtualization can change all of this.

This is big news because firms of all shapes and sizes are caught in this trap. Pizza companies do 90% of their business on Friday and Saturday nights, yet provision for these peaks seven days a week. This is a huge overhead, particularly for a national networked outfit like Domino’s. Other businesses, particularly retail e-commerce sites, are governed by seasonality. The spikes in online sales triggered by Black Friday and the January sales are well documented. Hotels beef up their network capacity for four intensive months of high occupancy but, under today’s model, must provision equally for the quieter eight.

Virtualization can address this need. A software-defined infrastructure will give customers access to a portal through which they can self-select the services they need and specify when they need them. The programmatic benefits of virtualization enable operators to spin them up automatically. Compare that to the weeks of notice that an operator currently needs to install a new line and the end-user benefits come into stark focus.

Acceleration techniques for white-box CPEs

This blog will provide a quick introduction and comparison of some of the acceleration technologies commonly available on white-box CPE, sometimes also referred to as “universal CPE” or uCPE.

Classical premises equipment has traditionally relied on specialized network processors to deliver network processing performance. Standard x86 hardware, however, was originally designed for more general-purpose compute tasks and, especially when used together with a “plain vanilla” Linux implementation, will deliver disappointing performance levels for data communication purposes unless expensive x86 CPUs are used. To address this concern, a number of software and hardware acceleration techniques have been introduced to meet the performance requirements imposed on today’s CPEs.

The processing context for white-box CPE is an environment that provides a small virtualized infrastructure at the customer premises, where multiple VMs (Virtual Machines) that host VNFs (Virtual Network Functions) are created and hosted in a Linux environment. Service chaining is established between the different VMs, resulting in a final customer service which can be configured and adapted through a customer portal. In this setup, VNFs and VMs need to communicate either with the outside world through a NIC (Network Interface Card) or with another VNF for service chaining.

DPDK (Data Plane Development Kit)

In the case of white-box CPEs, DPDK provides a framework and set of techniques to accelerate data packet processing and circumvent the bottlenecks encountered in standard Linux processing. DPDK is implemented in software and basically bypasses the Linux kernel and network stack to establish a high-speed data path for rapid packet processing. Its great advantage is that it produces significant performance improvements without hardware modifications. Although DPDK was originally developed for Intel-based processor environments, it is now also available on other processors such as ARM.

AES-NI (Advanced Encryption Standard New Instructions)

This is an extension to the x86 instruction set for Intel processors to accelerate the speed of encrypting and decrypting data packets using the AES standard. Without this instruction set, the encryption and decryption process would take a lot more time since it is a very compute-intensive task. The encryption is done at the data plane level and is used to secure data communications over Wide Area Networks.
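
As a small illustration of why this matters when sizing a white-box CPE, the sketch below (a generic Linux check in Python, not an EKINOPS tool) simply tests whether the local x86 CPU advertises the AES-NI instruction set; if it does not, AES encryption of WAN traffic falls back to plain software and consumes considerably more CPU.

def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the 'aes' CPU flag (AES-NI) is present. Linux-only."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "aes" in line.split(":", 1)[1].split()
    except OSError:
        pass
    return False

if __name__ == "__main__":
    print("AES-NI available:", has_aes_ni())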

ROI for white-box CPEs: a question of segmentation

Operators commonly expect that moving to SDN and NFV will enable cost reductions, through efficiency gains in network management and device consolidation. As they get closer to deployment, however, commercial reality is telling a different story, depending on the market segment being addressed.

One area where the ROI for white-box CPEs is easy to justify is appliance consolidation. If you can consolidate a number of proprietary appliances into a single white-box CPE then the CAPEX savings are clear. Chaining, for instance, a vRouter, a WAN optimizer and a next-generation firewall into one single x86 appliance (i.e. a white-box CPE) delivers immediately identifiable cost savings: one appliance instead of three is basically the formula and this is a commonly targeted combination. Combine this with the prospect of increased wallet share from large enterprises, which often run their networks themselves in do-it-yourself mode, and the large enterprise segment looks increasingly attractive for operators.

Let’s be clear, though: this is just a large enterprise play. SMBs and multi-site deployments for government or highly distributed organizations have no need for WAN optimization and little need for a next-generation firewall; the on-board firewall that comes with their router, together with a PC-based anti-virus subscription and email antispam service, is usually sufficient. As a result, anyone working on building business cases for white-box CPEs for the volume part of the market will attest that ROI is a tough nut to crack.

The draw for this market segment is the potential to increase ARPU by making it easier and more flexible for customers to take up additional services, through automated service delivery enabled by virtualization.

In terms of hardware CAPEX, the cost of white-box CPE deployment outstrips that of traditional CPEs. For the large enterprise segment, which often deploys multiple appliances, this cost increase is offset by reducing the number of appliances. For other market segments, where a single CPE is more typically deployed, savings need to come from OPEX reductions or TCO savings. The latter, however, is notoriously difficult to calculate and is usually irrelevant in a context where staff reductions are difficult to achieve, particularly in a period of technology transition.
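
The segmentation argument reduces to a back-of-the-envelope calculation. The sketch below is purely illustrative: the prices are placeholder assumptions, not vendor figures, and the point is only the shape of the comparison, not the numbers.

def capex_delta(dedicated_appliance_costs, ucpe_cost, vnf_licence_costs):
    """CAPEX saved (positive) or added (negative) per site when a stack of
    dedicated appliances is collapsed onto a single white-box uCPE."""
    return sum(dedicated_appliance_costs) - (ucpe_cost + sum(vnf_licence_costs))

# Large-enterprise site: router + WAN optimizer + NGFW consolidated onto one box.
# All figures are hypothetical placeholders.
print(capex_delta([1500, 4000, 3500], ucpe_cost=2500, vnf_licence_costs=[800, 1200, 1000]))  # positive: savings

# SMB site: a single integrated router today, so there is nothing to consolidate
# and the uCPE typically costs more up front; savings must come from OPEX instead.
print(capex_delta([600], ucpe_cost=1200, vnf_licence_costs=[300]))  # negative: added cost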

Fifty Shades of NFV?

In the racy world of CPE architecture, what virtualization-hungry service providers say they want isn’t always what they need, says Pravin Mirchandani, CMO, EKINOPS.

Alright, perhaps ‘racy’ is going a bit far, but as the virtualization industry moves out of ‘does it work’ and into ‘let’s make it happen’, pulses are certainly starting to quicken. Not least because service providers are having to make tough calls about how to architect their management and orchestration (MANO). Many of these decisions revolve around the deployment of virtualized network functions (VNFs), via some form of customer premises equipment (CPE).

Several ‘shades’ are emerging, each with their advantages and drawbacks.

The ‘NETCONF-enabled CPE’ model emulates what we have today: a fixed number of physical network functions (note: not virtual) are embedded into a traditional L3 multi-service access router. The key difference here is that the router, as its name suggests, supports the NETCONF management protocol and can, as a result, be managed in a virtualized environment. In truth, this is a pretty rudimentary form of virtualization; the router can be managed by a next-generation OSS with NETCONF and its embedded physical functions can be turned on and off remotely, but that’s about it. The device is not reprogrammable, nor can its network functions be removed or replaced with alternatives. The market for this deployment model lies in two use-cases. Firstly, as a bridging solution enabling service providers to operate traditional and virtualized network services simultaneously, facilitating migration. Secondly, given that many of today’s VNFs are heavy and need considerable amounts of memory and processing resources in order to operate, the more flexible white-box alternatives are costly in comparison. Specialist vendors like OneAccess have been developing dedicated CPE appliances (with embedded PNFs) for years, where compact and efficient code has always been a design goal in order to keep appliance costs under control. For more conservative operators that are keen to get ‘in the game’, the proven reliability and comparative cost efficiency of this model can offset its relatively limited flexibility. Rome wasn’t built in a day, and some operators will prefer to nail the centralized management and orchestration piece before investing heavily in pure-play virtualization appliances for the network’s edge.

A purer approach is to invest in a ‘thick branch CPE’ or, in other words, an x86-based white-box solution running Linux, onto which VNF packages can be pre-loaded and, in the future, removed and replaced or even selected by customers via, say, a web portal. This approach delivers far greater flexibility and is truer to the original promise of NFV, in which the network’s functions and components can be dismantled and recomposed in order to adjust a service offer. The snag, however, is that white-box CPEs come at a cost. More memory and more processing power mean more cash. That’s why the race is on to develop compact VNFs that minimize processing requirements and, as a result, enable a limited-spec white-box to do more with less. Again, unsurprisingly, those ahead of the curve are VNF vendors that have the experience of wringing every last drop of performance out of compact and cost-efficient appliances, purpose-designed for operators and service providers.

Thick & Thin: A Taxonomy of CPEs

In presentations at virtualization conferences, and in our discussions with operators and service providers, there remains a lot of confusion surrounding the terms ‘thick’ and ‘thin’ as they relate to customer premises equipment (CPE). This is because the terms are used interchangeably, to describe different market segments, the density of network functions as well as the nature of the CPE itself.

The roots of ‘thick’ and ‘thin’ lie in the term ‘thin client’: a popular reference to a lightweight computer or terminal that depends heavily on a server or server farm to deliver data processing and application support. This contrasts with the PC, which performs these roles independently, and was somewhat disparagingly referred to as a ‘fat client’ or, more neutrally, as a ‘thick client’.

This heritage is important as we look to provide a taxonomy of CPEs, which will hopefully aid our understanding of their respective roles in the delivery of virtualized network services.

Generically, CPE or ‘customer premises equipment’ refers to the equipment provided by a service provider and installed at its customers’ premises. Historically, CPE referred mainly to telephony equipment, but today the term encompasses a whole range of operator-supplied equipment including routers, switches, voice gateways, set-top boxes and home networking adapters.

Thick CPE refers typically to a router or switch that provides network functions at the customer premises. There are now three main types:

The White-Box CPE: Separating Myths from Reality

In the world of customer premises equipment (CPE), the white-box is a new idea. It should then come as no surprise that misconceptions among operators and CSPs about what constitutes the white-box CPE are common. Here are four of the most prevalent.

Myth #1: The white-box CPE is just commodity hardware.

No. The software-hosting platform is the most important part! Google and Apple have succeeded with smartphones because they created killer platforms, carefully designed to provide a secure environment ready for third-party developers to exploit, which enabled their app bases to proliferate. The white-box CPE is no different. Don’t get me wrong, it is still all about using commodity hardware, but this is just the tip of the iceberg. Its enabling potential stems from the software platform that controls the hardware, not from the hardware itself.

The same goes for the network functions running on the white-box CPE. They need to be installed, chained, activated, run, provisioned, monitored, charged and, of course, done so in a secure manner with maximum efficiency and predictability.

We’re not talking about any old software here, either. This is highly specialist, instance-specific software but, unlike smartphones, the functions are often dependent on each other. There are lots of open-source components that can carry out each of these tasks individually, but they also need to be integrated, managed with homogeneous, standardized APIs and provided with pre-packaged, documented use cases to facilitate their integration. We often see service providers adopting a DIY approach. But this only lasts until they realize the extent of work and the depth of know-how required to assemble all these pieces together. Building a demonstrator is one thing; warrantying the operational lifecycle over the lifetime of a service is another.

Myth #2: White-box CPEs are for everyone.

The whole idea of white-box CPEs is to foster the ability to take VNFs from multiple vendors and have the freedom to mix-and-match them according to the service provider’s desired functions, price and brand.

This is all good ‘on paper’. The reality, however, is different. Just like when specifying additional options on a car, the bill soon adds up. In fact, imagine being able to walk into a VW dealer and demand options from BMW, Mercedes, Honda and Ford. Welcome to the world of the white-box CPE!

Large enterprises can afford it because, right now, they are buying single-use network appliances and stacking them up on their premises. The white-box CPE’s promise of appliance consolidation is so great that the economics allow it to be expensive.

Do CLI Experts Need to Worry about Their Jobs?

Our customers tell us that NETCONF is fast becoming the preferred way to provision virtualized networking devices. Tail-f has been the main advocate of this protocol, with a pitch that goes roughly as follows: “NETCONF provides the toolset to configure an end-to-end service with a single network-wide transaction. OSS programmers spend less time handling error cases, as the role of transactions is precisely to ensure success or roll back in case of errors. You do not need to pay high-salary CLI experts to handle such rollbacks manually when the OSS has failed to do it in a proper way. And you can now automate service creation at an unprecedented level.”

The more you automate, the fewer people you need, and with the transactional capabilities of NETCONF you do not need them to recover from the disaster of accidental automation glitches. That has got to be rather scary for the cohorts of CCIE-certified engineers and other CLI experts, doesn’t it? Does it mean their expertise will become redundant? Do they need to find a new mission in life?

I recently met some CLI engineers. They see the change coming but as they are buried by the demands of their day-to-day activities, they have not yet had the time to study and experiment with NETCONF. So, this protocol along with YANG data modelling remains very abstract, if not confusing to them. Of course they understand quite clearly that NETCONF is about a programmatic approach to the process of service creation. Network engineers thus understand they must acquire new skills in programming but it is certainly not their comfort zone today.

In many cases, an OSS will not manipulate YANG models or NETCONF directly. The likely way to program network services is to use tools that generate interface code from the relevant YANG models; programmers then use that API to create the services. For IT engineers, service creation is not much more than mapping a Service Abstract Layer (SAL) data model to objects on a set of networking functions or devices.

But that is not the starting point for network engineers. Their initial steps are still the same as before NETCONF: create a network design, prepare a reference setup, elaborate configuration templates, write troubleshooting guides, etc., with the notable difference that such templates must now be written in NETCONF XML. With OneAccess products, this is a fairly natural step: first create the reference configuration using the familiar Command-Line Interface, then use some commands to export it as XML or a set of xpath statements. Using this process, CLI can be mapped to NETCONF pretty intuitively. In other words, extensive CLI knowledge is still a valuable asset for engineers. Working with NETCONF is then an easy next step, and defining XML templates is, after all, not so difficult.
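
By way of illustration only (this is not the OneAccess export command itself, and the addresses, credentials and xpath are placeholder assumptions), the same export-and-compare step can be scripted against any NETCONF-capable device with the open-source Python ncclient library, provided the device advertises the xpath capability:

import difflib
from ncclient import manager

def fetch_interfaces_xml(host):
    """Pull part of the running configuration from one device as XML."""
    with manager.connect(host=host, port=830, username="admin",
                         password="admin", hostkey_verify=False) as m:
        reply = m.get_config(source="running", filter=("xpath", "/interfaces"))
        return reply.data_xml

# Compare the exported XML of a reference device against a newly staged one.
reference = fetch_interfaces_xml("192.0.2.1")
candidate = fetch_interfaces_xml("192.0.2.2")
for line in difflib.unified_diff(reference.splitlines(), candidate.splitlines(), lineterm=""):
    print(line)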

Ultra-compact VNFs are the key to cost-effective NFV

OneAccess’ CMO, Pravin Mirchandani, argues that only the most efficient VNFs will pass the scrutiny of the operator community; the risks and costs of accepting anything different are too high.

The arrival of the software-defined networking era has been enthusiastically welcomed by all, as it promises both infrastructure and services that can flex and grow in line with operators’ changing requirements. The question of how we can transition to an SDN/NFV-based infrastructure as quickly as possible is occupying industry minds, service providers and vendors alike. Anyone with lingering doubts need only consider the recent M&A moves by the industry big guns, Juniper and Cisco, who are feverishly trying to reinvent themselves as born-again software businesses.

Virtualized network functions (VNFs), delivered over the NFV infrastructure (NFVI), promise to minimize the operator investments needed to customize future services in line with their operational needs. But, at the moment, the jury is still out on what the killer VNFs are going to be. This question raises new concerns: what VNFs should operators plan for when specifying their white box? How will demand for resources play out over the business cycle? Here carriers are facing some tough decisions; ones that may ultimately determine their ability to compete in a crowded sector. An over-specified white box will waste huge amounts of money, and NFV migration is already proving much more costly than first thought. Far worse, though, is the prospect of under-specification, which would result in a virtualized environment that simply isn’t fit for purpose.

The dilemma for the operators can’t be taken lightly. If they deploy basic bare-metal units, the risk is lost revenue when customers, who cannot upgrade when needed, move to an alternative supplier. Most likely, a middle ground will be reached, and attention will refocus on the familiar question of how to get more for less. Those who thought that this question might evaporate as the network goes software-centric should prepare for disappointment. Operators will be exerting great pressure on VNF developers to do just this, by creating ultra-compact and efficient software functions, not least so that their choice of white-box stands the best chance of coping with as-yet-unknown future demands.

There are many vendors aiming to position themselves in this space which, it seems, is where the long-term revenue opportunity exists. But if carriers want to deploy a full catalog of VNFs including functions such as WAN optimization, vCPE, VPN and encryption, for example, they need to be conscious that many developers hail from an enterprise background, in which their solutions have operated on dedicated appliances drawing on uncontested computing power. VNF development is a different ballgame altogether, so it will be interesting to see how these modules perform when they are scaled down to share the resources of a single white box.

Virtualization means changing the approach to proof-of-concept projects for both Carriers and Vendors

OneAccess CTO, Antoine Clerget argues that vendors need to radically re-think their approach to PoC projects as carriers begin to plan their transition to software-defined network functions.

Until quite recently, when Telcos wanted to evaluate the different vendors’ technologies needed to build out new service platforms, the process was relatively straightforward. Typically, it meant plugging a box into the lab network, running the relevant functional and performance tests and then, assuming that the results were acceptable, handing things over to the commercial and legal teams to thrash out the supply contracts. Well, perhaps a bit more complicated than that, but nevertheless a far simpler proposition than the one today’s engineers face when demonstrating or assessing NFV products.

Unlike the days when the CPE router came as a discrete device with a range of physical components on board, today individual network functions are de-composed into discrete elements and then stitched together in the virtualization infrastructure. In the new NFV environment, resources are to some extent shared with other (potentially third-party) VNFs and the underlying infrastructure, and VNFs run on hardware unknown to the VNF vendor. As a consequence, moving from independent elements to a complete solution, even if it sits on a single piece of hardware, requires new types of integration skills. This means that a new and different approach is needed, with a key focus on integration, particularly on how management, functional and infrastructure elements work together in an environment where there are still a lot of unknowns.

After a remarkably short period of technical debate between the various actors in the network industry, we are now seeing a definite upswing in interest in testing the claims of SDN and NFV technologies in terms of genuine PoCs. This is especially true among those Telcos that are looking to future-proof their network infrastructures to achieve their major goals of flexibility for market differentiation and programmability for reducing costs.

As an early champion of the move to a more white-box/VNF approach to CPE architecture, we see this as a natural progression, building on our existing multi-functional router platforms, which already include an extensive range of software modules for specific network functions and security. At the same time, however, this has meant a total re-think of what is needed for a PoC project to be successful. With more emphasis on the need to answer questions about the interoperability of the technology in this new and highly dynamic virtualized environment, our engineering teams need to take a much more direct, hands-on role in the process than was previously the case.

Tomorrow’s CPE: the Wimbledon of network virtualization?

Despite the industry’s charge toward network virtualization, the need for customers to connect their routers to non-Ethernet legacy connections is not going away. Couple this with the fact that a bunch of emerging network functions require an on-prem appliance, and the virtualized ‘CPE of the future’ starts to feel, well, really rather physical. So, is the CPE the Wimbledon of the network; ever-present, resistant to change, but perhaps also capable of surprising us all with its innovations?

Take Wimbledon’s white dress code, for example: a deeply entrenched tradition that has become a defining characteristic of the tournament. In recent years, however, the dress discipline has been partially relaxed. Today, the tournament accommodates at least some expressions of color. Similarly, the majority of CPE appliances that today deliver network connectivity and voice gateway functions are specialized devices, and will stoically remain so for the next few years. It’s just too expensive to do otherwise, until fiber, with G.fast as a short-haul copper Ethernet extension, becomes ubiquitous and all voice terminals are IP-based. Out of necessity, therefore, incumbent local exchange carriers (ILECs) will have little option but to support this CPE model. In other words, it looks like the traditionalists, both at the tennis and on the network, can rest easy. For now, at least.

But pressure to change is mounting. Competitive local exchange carriers (CLECs), together with alternative network operators, are more agile and, since they can target Ethernet-only network connections, can move more quickly to a vCPE approach. That said, some network functions will need to remain ‘on premise’, namely link management, service demarcation and service assurance. The network functions that can migrate to the virtualized center will do so over time. In our Wimbledon analogy, this equates to another tournament altogether, played on a far more contemporary surface than Wimbledon’s time-honoured grass. Competition indeed for the ‘historic home of tennis’.

The need for some functions to remain on premise means that the CPE will increasingly comprise hybrid devices – ones that support both traditional network functions and those located in a centralized and virtualized core. Incidentally, this won’t be just a single data center, but rather a set of distributed virtualized centers located with the network infrastructure (most likely at POPs) to mitigate traffic tromboning.

The huge IT challenge of accommodating virtualized delivery of services means that the CPE will also need to become a multi-tongued device able to speak next-generation protocols – NETCONF, OpenFlow – as well as traditional CLI, TR-069 and SNMP. It seems inevitable that, after holding out for as long as they can, traditionalists both at Wimbledon and in the CPE world will be forced to accept some variations, but only within ‘proper’ limits of course!

NFV: The Current State of Play

Act One of the NFV show has finished, leaving operators to sift through the hype, piece together what they have learned and knuckle down to the serious business of design, feasibility and business case development. Pravin Mirchandani, CMO and NFV Evangelist at OneAccess, recounts some sentiments and soundbites from the NFV circuit.

1. Whoa! NFV is expensive!

Operators have now moved beyond best-guess, back-of-the-envelope cost estimates. At least some measured CAPEX projections are in, and with them comes a grudging realization of quite how costly NFV is going to be. Why? Because operators must build x86 server farms right across their network in order to host their NFV infrastructure (NFVi); something which is going to mean a significant investment up front. What’s more, because virtualized traffic management requires the NFVi to be distributed (to provide appropriate location of network functions and traffic management right across the geographic reaches of the network), savings can’t be made by consolidating these farms in a single location. Sitting on top of the compute infrastructure there are, of course, the software infrastructure and network functions, which also need to be funded. The result has been a marked shift in focus away from CAPEX reductions, towards citing OPEX savings, service velocity and service agility as the main justifications for NFV.

2. We need SLAs, not just I/O

To date, when considering performance, the industry’s focus has been on input/output (I/O) but, given that virtualized network functions (VNFs) are sold as services to paying customers, I/O is only half of the story. To be commercial contenders, VNFs need to be associated with performance guarantees that are enshrined in service level agreements (SLAs). Further, an assessment of the compute and memory footprint of each network function is required in order to assess deployment scalability. This is no great challenge where dedicated hardware is concerned, but when the network function is software-based (as with a VNF) and located on a shared computing platform, the factors influencing performance depend on a range of resource-related variables, making guarantees harder to establish. This area needs serious attention before NFV can move into a fully commercial phase with the major operators.

3. Pricing is all over the map

Many operators won’t open the door to a VNF vendor without full disclosure of their pricing model, especially as a couple of leading vendors have announced pricing levels that are considered by the operators as unreasonable. Pricing models also remain fragmented between vendors, making it difficult for operators to compare like for like. The software element, in particular, is a minefield. Unsurprisingly, some vendors are applying NFV pricing in accordance with the anticipated impact that NFV will have on their future hardware revenues. This is distorting the market at a very early stage, inhibiting assessment by the operator community.

4. VNF trials are defined by what’s available, not by what’s needed

A lamentable result of the current NFV market is that operators’ choices of VNF trials are being defined by availability, not strategic objectives. vCPE and virtual firewall functions have both been around for a while, but are these two functions the only ones that operators want to offer? Perhaps it’s too early to say. In any case, the real focus of today’s VNF trials is to successfully build the NFVi and nail down the orchestration and management pieces. In this sense, it doesn’t yet matter what the actual VNF is. Over time, this will change. Operators will begin to assess which VNFs are the most important for their business, and which will save them the most money. Ideally, operators should be bringing this thinking forward; if they settle on VNFs that differ from those they have trialled, it will be a struggle to understand the commercial, technical and operational implications.

NFV: Time to Get Back to (Virtual) Reality, for the Sake of the Operators

Pravin Mirchandani, CMO, OneAccess, calls for 'plausible NFV' amid a world of ill-judged proof-of-concepts.

NFV has been voraciously hyped and with good reason; there is much to get excited about. The potential benefits to operators and communication service providers (CSPs) of enabling a virtualized and service oriented network environment are vast: increased network flexibility, additional security, reductions in network OPEX/CAPEX, dynamic capacity adaptation according to network needs and, perhaps most crucial of all, reduced time to market for new, revenue generating network services that can combat declining ARPUs. NFV really could be the silver bullet that operators and CSPs have been looking for.

But there’s a storm brewing for 2015. So excited has the networking industry become that its NFV gaze has focused almost universally on the end-game: an idealized world in which new services are ‘turned up’ as part of a complete virtualized service chain. Perilously little has been said about how operators will migrate to utopia from the battlegrounds of today.

To date, the central migration message coming from the big five networking vendors has been: ‘Trust us. We’ll get you there.’ Needless to say operators, whose collective future may be determined by their success with NFV, are far from comforted by such assurances. Many have endured vendor lock-in for decades and, as a result, are rightly viewing this first wave of proprietary NFV proof-of-concepts (POCs) with a healthy dose of scepticism. Given a viable and open alternative, NFV could be their chance to break free.

It’s not only vendor lock-in that operators should fear. In their haste to establish NFV dominance, many vendors have NFV-ized their existing lines of routers and switches by installing x86 cards and are now conducting operator POCs via this generic computing environment. This is sledgehammer NFV in action; it may prove that the theory behind NFV is possible, but it is seriously lacking in plausibility when any kind of scaled migration path is considered. Cash-strapped operators are highly unlikely to stomach the significant price premium required to install x86 cards across their entire CPE infrastructure. Moreover, x86 does not always deliver the optimized performance needed for the volume packet handling and SLA requirements of today’s network services, and in the operators’ last-mile network there are far too many access link combinations for the physical hardware to be done away with any time soon. ADSL, VDSL and SHDSL, among others, plus cellular for radio access (frequently used for backup), together with SFP ports to support different fiber speeds and optical standards, are not readily available on an x86 platform, and could only be made so at prohibitive cost.
