
WELCOME TO

EKINOPS BLOG

How SD-WAN is Forcing Service Providers to Re-invent Themselves (…as NFV did with vendors)

Interesting similarities can be observed between SD-WAN and NFV. NFV aimed at ending the reign of monolithic solutions by decoupling software from hardware, but also through open APIs that make it easy to pick, choose and replace vendors as necessary. The hope was that increased competition and commodity hardware would drive costs down. This is, however, not a particularly shiny prospect for vendors.

If you think about it, SD-WAN is also about breaking monoliths: service provider offers. From this perspective, the managed service offer is split into discrete components: connectivity, hardware, WAN/LAN devices, VPN/security and operations. Enterprise customers can theoretically source each item independently and benefit from choice and competition. As a result, a large community of SD-WAN advocates aggressively targets service providers, accusing them of offering less and charging more; in other words, ripping off enterprise customers. This is equally not a shiny prospect for service providers.

The right answer for CSPs is not necessarily to fully embrace the current offerings of the main SD-WAN players. It may be part of the answer, but not the full one. As happened to vendors with NFV, this turmoil is forcing service providers to reconsider the value and strategy for each individual component of their offer (connectivity, hardware, operations, etc.).

Let’s start with connectivity and VPN. Many SD-WAN vendors build their business case on saving MPLS costs. If you are a CSP and can serve your customer with MPLS easily, this is just a bargaining game. Why SD-WAN then? Just call your service provider sales rep! Many commentators now admit that MPLS is not dead, but SD-WAN has highlighted how far from ideal it is: if you need to build a global VPN with branches in Mexico, South-East Asia, etc., those MPLS links become awfully expensive and slow to deploy. This is where it makes sense for CSPs to adopt Over-The-Top VPN technologies as proposed by the mainstream SD-WAN solutions. In other words, service providers do not need full-blown SD-WAN technologies to remain competitive so long as the customer demand is limited to “same as before, but lower cost”.

Of course, there is more to SD-WAN than just building a network overlay, such as the ability to easily enforce application policies and to monitor network and application performance. In my opinion, this should be viewed as another layer of services that can be offered at a premium. Historically, service providers have been extremely successful in outsourcing networks for enterprise customers, especially in Europe. The key ingredients were a one-stop shopping experience and being a price leader for this outsourcing. The contention here is that they can strike back against SD-WAN DIY and system integrators. Being a price leader implies they need entry-level Over-The-Top offers, but also a rich set of options to upsell so that enterprises remain attracted to their main marketing asset: a one-stop shopping experience.

Agility is the Goal and Lack of it the Handbrake

When looking at the virtualization landscape in its broader sense, I can’t help seeing that the real problem is agility, or rather the lack of it.

Take a look at SDN, whose main premise is to separate control from the data plane and thereby achieve high agility for its users. Same thing for NFV: abstract the software from the hardware and then combine and deploy software functions at will for high agility. So the operators, at least the bigger ones who have the least agility, are investing megabucks to deliver on the twin promises of SDN and NFV. What’s frustrating for them is that they are doing so lamentably slowly and in a decidedly non-agile manner.

In a sense it’s not so surprising. In effect, the operators are trying to re-engineer 15 years of painstakingly built managed services processes in a very short period of time. And they are doing so using a combination of new operational and management techniques, new albeit commonly available hardware, re-packaged software and co-opting their existing set of waterfall processes and organisational silos. This is like pressing on the accelerator with the handbrake on. Some, e.g. Verizon and belatedly AT&T, are trying to loosen the rigid structure of their organisations to create a more agile decision-making process and a DevOps culture where network specialists and IT folk combine to break down internal barriers and release their internal handbrakes.

It’s when we take a look at the arrival of SD-WAN that we see the real impact of lack of agility. SD-WAN vendors position their wares to enterprises with self-select portals, centralised policy control and the ability to take advantage of cheaper Internet links to deal with all the Cloud-based, Internet services that are swamping their traditional operator-provided WAN connections. This puts control and thus agility in the hands of the enterprises and this is precisely what they want.

The response of the operators to SD-WAN however is telling. As opposed to SDN and NFV, where they are re-engineering their processes with open APIs to (slowly) deliver on the promise of virtualization, they have taken a very different tack with SD-WAN. For the most part, they have:

Keeping Satellite in the 5G Game

EKINOPS explores the intersection between virtualization, 5G and satellite communications, highlighting how a recent technology breakthrough will deliver additional value for satcom providers, CSPs and end-users alike.

The 5G PPP, the EU body charged with establishing global consensus on the use of 5G, has been making strong progress in defining how the new connectivity standard can support a rapidly growing range of future use cases. Telecom operators, who will be required to make huge investments in replacing masts, antennas, base stations and other equipment to support 5G, are happy to wait for the body to publish its specifications. Satcom providers, however, are not. As things currently stand, satcom is not part of the 5G game and, as a result, its equipment and service providers risk being elbowed out of use cases for which satellite is today’s de facto choice.

From a commercial perspective, communication service providers (CSPs) know that their ability to support all types of network links gives them the flexibility to deliver bespoke solutions that maximise value for their customers. To this end, harmonising satellite with 5G makes a lot of sense, particularly where the technology can augment 5G services.

The range of situations that favour a satcom/5G solution is bigger than one might think, and extends beyond the infrequent, high-bandwidth requirements of native satcom broadcasting, live sports stadium feeds and news broadcasting. Satellite also has a valuable role to play in areas where 5G networks have limited coverage: allowing 5G traffic backhauling to remote areas, for example, complementing ‘slow’ terrestrial links thanks to multi-link technology, and providing connections to mobile vehicles and planes.

Fortunately, this argument has already been recognised within the 5G PPP, and has given rise to SaT5G, an H2020 European research project with the objective of designing next-generation standards that make satcom 5G-friendly. The group is developing a cost-effective ‘plug and play’ satcom solution for 5G to enable operators and network vendors to accelerate 5G deployment in all geographies and, at the same time, create new and growing market opportunities for satcom industry stakeholders.

Why the time to move to NETCONF/YANG is now

With backing from major NFV/SDN projects and industry organizations, NETCONF, the client/server protocol designed to configure network devices more clearly and effectively, will soon become ubiquitous. Operators who migrate to NETCONF can both future-proof their operations for NFV and reap some of the short-term benefits of automation today.

Why NETCONF/YANG?

NETCONF was developed to enable more efficient and effective network management by addressing the shortcomings of existing network configuration approaches like the Simple Network Management Protocol (SNMP) and the Command Line Interface (CLI). SNMP has long been cast off by operators and CSPs as hopelessly complicated and difficult to decipher. CLI may be readable by a network engineer, but it is prone to human error and can lead to vendor lock-in, since proprietary implementations often mean that only one vendor’s element management system can manage its network elements.

NETCONF, on the other hand, is designed specifically with programmability in mind, making it perfect for an automated, software-based environment. It enables a range of functions to be delivered automatically in the network, while maintaining flexibility and vendor independence (by removing the network’s dependence on device-specific CLI scripts). NETCONF configuration data is not only human-readable; the protocol also supports operations such as transaction-based provisioning, querying, editing and deletion of configuration data.

YANG is the data modelling language that makes NETCONF useful, describing the device configuration and state information that can be transported via the NETCONF protocol. The configuration is plain text and human-readable, plus it’s easy to copy, paste and compare between devices and services. Together, NETCONF and YANG can deliver a thus-far elusive mix of predictability and automation.
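To make this concrete (an illustration of mine, not from the post): because NETCONF payloads are plain XML, an `<edit-config>` request targeting the transaction-friendly candidate datastore can be generated with nothing more than a standard library. The `acme-interfaces` namespace and its leaf names below are hypothetical stand-ins for a real vendor or IETF YANG model.

```python
# Sketch: build a minimal NETCONF <edit-config> RPC payload with Python's
# standard library. The "urn:example:acme-interfaces" namespace and the
# interface/mtu leaves are illustrative placeholders, not a real module.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
ACME = "urn:example:acme-interfaces"  # hypothetical YANG module namespace

def build_edit_config(if_name: str, mtu: int) -> str:
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}candidate")  # edit the candidate datastore
    config = ET.SubElement(edit, f"{{{NC}}}config")
    ifs = ET.SubElement(config, f"{{{ACME}}}interfaces")
    intf = ET.SubElement(ifs, f"{{{ACME}}}interface")
    ET.SubElement(intf, f"{{{ACME}}}name").text = if_name
    ET.SubElement(intf, f"{{{ACME}}}mtu").text = str(mtu)
    return ET.tostring(rpc, encoding="unicode")

payload = build_edit_config("ge-0/0/1", 1500)
```

Because the request is structured data rather than a CLI transcript, it can be generated, validated and diffed programmatically, which is exactly the predictability-plus-automation mix described above.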

NFV needs NETCONF

What’s even more powerful is that NETCONF and YANG combined offer the flexibility needed to manage both virtual and physical devices. This means that operators can get going with NETCONF now, before they start ripping out old devices and replacing them with white boxes. This is a necessary investment in the future; as we look forward to a virtualized networking environment, where network functions are spun up and changed continuously, the high level of automation enabled by NETCONF/YANG is not just preferable, it is essential.

Dissonance in the World of Virtualization

A collection of thoughts has been spinning in my head over the last few weeks, based on various customer visits, presentations and panel debates I have seen or participated in recently, and they have collectively formed into a common notion: that of dissonance. These reflections have come about as a result of a number of unresolved issues that are either being ignored or are falling between the cracks as we all grapple with the complexities of introducing virtualization.

I’ll start with MANO. Many of us know that there is a competition between Open Source MANO, sponsored principally by Telefonica’s I+D entity (its research and development arm), and ONAP (Open Network Automation Platform), which combines the outputs of AT&T’s homegrown MANO system, called ECOMP, and Open-O (Open Orchestrator Project) into a common open-source project. Setting aside the architectural differences between these two major MANO initiatives and their respective levels of industry take-up, what’s increasingly obvious is that both are horrendously complex and quite frankly beyond the means of most operators to implement. Open source only saves the cost of acquisition, not of integration and deployment. That’s why many operators, both large and small, are sitting on the sidelines waiting for a viable solution to present itself. This might quite possibly be in the form of MANO-as-a-service, which is already attracting the attention of business managers at professional services outfits and even venture-capital funding; which reminds me of Sun Tzu’s line in The Art of War: “In the midst of chaos, there is also opportunity”.

Another dissonance that strikes me is that between the CTO offices in the vanguard of introducing virtualization and their own business managers. It’s not just that virtualization is proving horribly expensive to introduce, making it difficult to convincingly flesh out an ROI spreadsheet. Rather, the technical people have become so absorbed by the scope and scale of the technical challenges in front of them that they have collectively lost sight of the end goal: how to market virtualized network services and deliver benefits to their end-customers. A recent big theme at conferences has been the challenges involved in on-boarding new VNFs, over 100 in some cases. My questions though are: who needs all this choice; how can customers without big IT budgets select which are the right ones for them; what is the benefit for these end-users as opposed to what they are doing today (see my recent blog entitled ‘Hello from the Other Side: The Customer Benefits of NFV’); and indeed what’s in it for the operators – how are they going to make money when it’s not at all clear which VNFs they can effectively sell to the volume part of the enterprise market as managed services, i.e. where they make money?

There is also increasing dissonance on the vendor side, and recently several vendors have openly voiced their frustrations on this point: virtualizing products requires investment, and until NFV starts moving out of the labs into volume service deployments we are all investing money in the hope of generating payback at some uncertain point in the future.

The other huge dissonance is that all the investment in next-generation virtualization efforts has eaten most of the IT resources for introducing new services and platforms for the operators’ mainstream existing business. The number of delayed service launches at operators due to ‘IT budget issues’ or ‘lack of available personnel’ is now an industry-wide phenomenon, delaying cost-reduction and revenue-generating service initiatives in the market. This is ironic, as virtualization is meant to accelerate service delivery, not stall it.

Hello from the Other Side: The Customer Benefits of NFV

Operators should start their NFV charm offensive now says Pravin Mirchandani, CMO, EKINOPS.

Surprisingly few of today’s conversations about NFV address how a virtualized infrastructure will benefit end-user businesses. If operators and CSPs want to bring their customers across without resistance, however, they must also consider how to position the benefits of NFV, particularly since some may view its new technologies as an unsolicited risk.

Flexible provisioning is the big persuader. Most businesses experience peak traffic at infrequent and predictable times. Despite only needing this level of service occasionally, however, they have had no option but to sign contracts that deliver this maximum capacity all the time. Operators haven’t had the flexibility to deliver anything else. Virtualization can change all of this.

This is big news because firms of all shapes and sizes are caught in this trap. Pizza companies do 90% of their business on Friday and Saturday nights, yet provision for these peaks seven days a week. This is a huge overhead, particularly for a national networked outfit like Domino’s. Other businesses, particularly retail e-commerce sites, are governed by seasonality. The spikes in online sales triggered by Black Friday and the January sales are well documented. Hotels beef up their network capacity for four intensive months of high occupancy but, under today’s model, must provision equally for the quieter eight.

Virtualization can address this need. A software-defined infrastructure will give customers access to a portal through which they can self-select the services they need and specify when they need them. The programmatic benefits of virtualization enable operators to spin them up automatically. Compare that to the weeks of notice that an operator currently needs to install a new line and the end-user benefits come into stark focus.
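To put rough numbers on the argument (invented figures, purely illustrative): compare a contract priced on peak capacity every day of the week with a flexible, self-provisioned schedule that only pays for what each day actually needs.

```python
# Toy illustration (invented numbers): cost of fixed peak-rate provisioning
# vs. flexible per-day provisioning for a business with weekend peaks.
PRICE_PER_MBPS_DAY = 1.0  # hypothetical price unit

# Bandwidth (Mbps) a pizza chain actually needs each day of the week.
demand = {"Mon": 20, "Tue": 20, "Wed": 20, "Thu": 30, "Fri": 100, "Sat": 100, "Sun": 30}

fixed_cost = max(demand.values()) * PRICE_PER_MBPS_DAY * len(demand)  # peak, every day
flexible_cost = sum(demand.values()) * PRICE_PER_MBPS_DAY             # only what is scheduled

saving = 1 - flexible_cost / fixed_cost
# fixed_cost = 700.0, flexible_cost = 320.0, saving is roughly 54%
```

Even with these toy numbers, flexible provisioning roughly halves the connectivity bill, which is the commercial lever the self-service portal puts in customers' hands.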

The Song of the Sirens: Five ways to spot hidden NFV vendor lock-in

One of the big attractions of NFV is that it gives operators a chance to break free of single vendor contracts and establish greater control over the future development of their networks.

Genuinely ‘open NFV’ gives operators the ability to change tack according to technical and commercial developments. It enables them to shop around for best of breed solutions and blend a mixture of, say, migration-oriented hybrid solutions with white or grey-box CPEs and connect them all to their choice of orchestrator. It also dramatically improves their negotiating power.

Yet, despite appearances, few NFV vendors practice ‘genuinely open NFV’ and instead disguise how they intend to close and lock the front door once their customer has stepped inside.

There are five common traps that vendors set for operators as they entice them toward ‘open NFV’ contracts:

#1 Charging for third-party connections

US Telcos Move Ahead on NFV as Europe Watches: 2017 Predictions

While huge progress has been made this year toward the development of virtualized services, a variety of challenges remain before widespread service roll-out can commence. 2017 will be the year that the industry takes positive steps to overcome these barriers, says Pravin Mirchandani, CMO at OneAccess Networks, who shares his five key predictions for the year ahead.

#1 Europe will stay on the sidelines as USA and Japan push on

We will see virtualized services start to roll out, principally in the USA, and to a lesser extent in Japan through NTT. While everyone remains sure that the way forward is virtualization, the industry is still figuring out how to solve the multiple technical, commercial and organizational challenges posed by migration.

Operators will be closely watching AT&T and its Domain 2.0 program and keeping an eye on Verizon too, in a bid to learn lessons about how to implement NFV. Europe’s hard-pressed operators, in particular, will mostly stay parked in ‘watch and learn’ mode, continuing with RFx, proofs of concept and trials. In fact, we’re unlikely to see any virtualized services roll out across Europe in 2017. Compelling business cases are harder to assemble on the European continent and, until these are squared away, operators will prefer to observe how US and Japanese trail-blazers facilitate service migration and preserve their choices – both major factors driving current and near-term investment decisions.

#2 NFV’s ROI equation will need to be cracked

Pushing through Glass Ceilings at the SDN World Congress 2016

Live from the SDN World Congress Show in The Hague, Pravin Mirchandani, CMO, EKINOPS, reflects on the industry challenges steering the dialogue at this year’s conference.

In Gartner’s hype cycle, there is an inevitable time of disillusionment that follows the initial excitement of a new technology. At the SDN World Congress this feels different: although we have probably passed the peak of inflated expectations, there is less a trough of disillusionment than a set of major impediments that need to be cleared away in order to achieve the nirvana of SDN/NFV. Most actors can see what needs to be done and are steadfastly supporting the initial objectives, but my impression is that breaking through to achieve the goals of network virtualization is like pushing through the famous glass ceiling. Though not created by prejudice, as in the traditional definition of a glass ceiling, the barriers are real and there are many.

Glass Ceiling 1: Complexity

One of the goals of software-defined networking is to reduce dependence on an army of network experts, who are difficult to recruit and retain, expensive to hire and prone to error. What’s clear is that what they do is indeed complex; and converting their expertise and processes into automated software processes and APIs is equally if not more complex, as there is a distinct lack of established practices and field-proven code to draw upon. Many of the speakers at SDN World Congress mentioned the issue of complexity and this was a constant theme in the corridor discussions. Laurent Herr, VP of OSS at Orange Business Services, stated that Orange estimated it would take 20,000 man-days to convert their tens of IT systems to achieve virtualization.

Glass Ceiling 2: Culture

Another common theme was the issue of culture. Telcos have been organised to deliver the ‘procure-design-integrate-deploy’ cycle for new services and have a well-established set of linear processes and organizational silos to achieve it. Introducing virtualized services however requires a DevOps culture based on agility, fast failing (anathema to the internal cultures of Telcos) and rapid assembly of multi-skilled teams (especially collaboration between network and IT experts) to deliver new outcomes, frequently, fast and reliably. Achieving a DevOps culture was one of the most frequently cited challenges by the Telco speakers at the Congress. Another common word they used was transformation.

Glass Ceiling 3: Lack of Expertise

It’s difficult to estimate the number of engineers that really understand the principles and practices of virtualization, but they probably number in the low hundreds across the globe. Given the ability of the vendors to pay better salaries, it’s a safe bet that the majority work for them rather than for the Telcos. Growing this number is difficult as it requires combining IT, programming and network skills. Creating collaborative teams helps, but finding or training people to achieve mastery of the different skills is a challenge for the whole industry. This was more of a corridor conversation than something openly cited by the speakers, but it is a glass ceiling nevertheless.

Thick & Thin: A Taxonomy of CPEs

In presentations at virtualization conferences, and in our discussions with operators and service providers, there remains a lot of confusion surrounding the terms ‘thick’ and ‘thin’ as they relate to customer premises equipment (CPE). This is because the terms are used interchangeably, to describe different market segments, the density of network functions as well as the nature of the CPE itself.

The roots of ‘thick’ and ‘thin’ lie in the term ‘thin client’: a popular reference to a lightweight computer or terminal that depends heavily on a server or server farm to deliver data processing and application support. This contrasts with the PC, which performs these roles independently, and was somewhat disparagingly referred to as a ‘fat client’, or, more neutrally, as a ‘thick client’.

This heritage is important as we look to provide a taxonomy of CPEs, which will hopefully aid our understanding of their respective roles in the delivery of virtualized network services.

Generically, CPE or ‘customer premises equipment’ refers to the equipment provided by a service provider that is then installed at its customers’ premises. Historically, CPE referred mainly to the supply of telephony equipment, but today the term encompasses a whole range of operator-supplied equipment including routers, switches, voice gateways, set-top boxes and home networking adapters.

Thick CPE refers typically to a router or switch that provides network functions at the customer premises. There are now three main types:

The White-Box CPE: Separating Myths from Reality

In the world of customer premises equipment (CPE), the white-box is a new idea. It should then come as no surprise that misconceptions among operators and CSPs about what constitutes the white-box CPE are common. Here are four of the most prevalent.

Myth #1: The white-box CPE is just commodity hardware.

No. The software-hosting platform is the most important part! Google and Apple have succeeded with smartphones because they created killer platforms, carefully designed to provide a secure environment ready for third-party developers to exploit, which enabled their apps base to proliferate. The white-box CPE is no different. Don’t get me wrong, it is still all about using commodity hardware, but this is just the tip of the iceberg. Its enabling potential stems from the software platform that controls the hardware, not from the hardware itself.

The same goes for the network functions running on the white-box CPE. They need to be installed, chained, activated, run, provisioned, monitored and charged, all of course in a secure manner and with maximum efficiency and predictability.
We’re not talking about any old software here, either. This is highly specialist, instance-specific software and, unlike on smartphones, the functions are often dependent on each other. There are lots of open-source components that can carry out each of these tasks individually, but they also need to be integrated, managed with homogeneous, standardized APIs and provided with pre-packaged, documented use cases to facilitate their integration. We often see service providers adopting a DIY approach. But this only lasts until they realize the extent of work and the depth of know-how required to assemble all these pieces together. Building a demonstrator is one thing; guaranteeing the operational lifecycle over the lifetime of a service is another.
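The lifecycle described above (install, chain, activate, run, monitor, and so on) is essentially an ordered state machine, and a hosting platform has to refuse out-of-order transitions. A minimal sketch, with invented state names rather than any real platform’s API:

```python
# Sketch (hypothetical, not a real platform API): a minimal lifecycle state
# machine for a network function on a white-box CPE, enforcing the ordering
# the text describes - a function must be installed before it is chained,
# chained before it is activated, and so on.
ORDER = ["installed", "chained", "activated", "running", "monitored"]

class VNF:
    def __init__(self, name: str):
        self.name = name
        self.state = None  # not yet installed

    def advance(self, target: str) -> None:
        idx = ORDER.index(target)
        expected_prev = None if idx == 0 else ORDER[idx - 1]
        if self.state != expected_prev:
            raise RuntimeError(f"{self.name}: cannot reach {target!r} from {self.state!r}")
        self.state = target

vfw = VNF("virtual-firewall")
for step in ORDER:      # walk the full lifecycle in order
    vfw.advance(step)
```

A real platform adds provisioning, charging and secure sandboxing on top of this skeleton, which is precisely the integration work that makes DIY harder than it first appears.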

Myth #2: White-box CPEs are for everyone.

The whole idea of white-box CPEs is to foster the ability to take VNFs from multiple vendors and have the freedom to mix-and-match them according to the service provider’s desired functions, price and brand.
This is all good ‘on paper’. The reality, however, is different. Just like when specifying additional options on a car, the bill soon adds up. In fact, imagine being able to walk into a VW dealer and demand options from BMW, Mercedes, Honda and Ford. Welcome to the world of the white-box CPE!

Large enterprises can afford it because, right now, they are buying single-use network appliances and stacking them up on their premises. The white-box CPE’s promise of appliance consolidation is so great that the economics allow it to be expensive.

Orchestrating the network beyond the NFVI boundary

A key promise of SDN/NFV is to enable service providers to offer innovative, on-demand enterprise services with unprecedented flexibility. Building a network capable of delivering flexible, dynamic, customer-tailored services is, however, a challenge for service providers. As a matter of fact, end-to-end service orchestration within the virtualized infrastructure and across complex multi-vendor network domains outside the NFVI is anything but a walk in the park.

Standardization bodies, the acceptance of open-source solutions and service-provider-led programs such as AT&T Domain 2.0, Orange SDN for Business and Deutsche Telekom TeraStream have driven interoperability in the NFVI and MANO spheres. One major roadblock to capturing the full potential of network virtualization is the orchestration of network elements beyond the NFVI boundary, in order to provide full service delivery automation across the network. Yet one key question remains: how can orchestrators work with networking elements outside the NFVI? As service providers look to transform their networks, the answer to this question is crucial in order to deliver end-to-end enterprise managed services.

The challenge for the orchestrator is to configure all the nodes in the service delivery chain from the customer premises to the NFVI and steer the traffic to the virtual service platform. This integration work across multi-vendor network domains can be substantial, as many network devices have to be configured and managed, often with specific proprietary interfaces. However, developing vendor equipment adapters in the orchestration platform to cope with proprietary CLIs and varying administration protocols is a cost that industry players want to avoid, and one based on a short-term view of the problem.

Verizon’s white paper on its SDN/NFV strategy confirms that cross-domain and cross-vendor programmability is key to meet the dynamicity promised by SDN/NFV services. In order to replace a myriad of element management systems (EMS) tied to network elements with proprietary protocols and interfaces, Verizon recommends in the near term to use domain-specific SDN controllers to manage vendor-specific network elements.

Looking at the longer-term perspective, there is an opportunity to unify a diverse set of provisioning and configuration chains under a common NETCONF/YANG umbrella to simplify integration, operation and maintenance. NETCONF/YANG provides a perfect programmatic interface to prolong the orchestration domain and configure end-to-end service chains on-demand by spawning VNFs in the NFVI and steering traffic according to service requirements across the network.

The significant traction of NETCONF/YANG in the multi-vendor orchestration space makes it a good fit to streamline the integration of end-to-end automated SDN/NFV services. The support of NETCONF/YANG by a number of commercial orchestration platforms (Ciena Blue Planet, WebNMS, Cisco NSO, ...) and the standardization of YANG modules by the MEF (Metro Ethernet Forum) for the orchestration of Carrier Ethernet 2.0 services add momentum to an already fast-moving trend toward a NETCONF/YANG end-to-end orchestrated network model.
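The integration saving can be sketched in code (hypothetical classes of mine, not any real orchestrator’s API): with proprietary CLIs the orchestrator carries one adapter per vendor dialect, whereas a NETCONF/YANG-compliant estate needs a single driver that renders one model-derived payload for every device.

```python
# Sketch (hypothetical classes): why a common NETCONF/YANG interface shrinks
# orchestrator integration work. With proprietary CLIs, the orchestrator needs
# one adapter per vendor dialect; with NETCONF/YANG it needs one driver plus
# one YANG model per service.
from abc import ABC, abstractmethod

class DeviceDriver(ABC):
    @abstractmethod
    def configure(self, service: dict) -> str: ...

class VendorACli(DeviceDriver):      # per-vendor adapter: bespoke syntax
    def configure(self, service: dict) -> str:
        return f"set vpn {service['name']} peer {service['peer']}"

class VendorBCli(DeviceDriver):      # another adapter, different syntax
    def configure(self, service: dict) -> str:
        return f"vpn-instance {service['name']}; remote {service['peer']}"

class NetconfDriver(DeviceDriver):   # one driver for every compliant device
    def configure(self, service: dict) -> str:
        # In reality this would serialize the service against a YANG model
        # and send <edit-config>; here we just emit a canonical payload.
        return f"<vpn><name>{service['name']}</name><peer>{service['peer']}</peer></vpn>"

service = {"name": "branch-42", "peer": "10.0.0.1"}
payloads = [d.configure(service) for d in (VendorACli(), VendorBCli(), NetconfDriver())]
```

Each new CLI vendor adds another adapter class to maintain; each new NETCONF/YANG vendor adds none, which is the short-term versus long-term trade-off the paragraph above describes.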

Ultra-compact VNFs are the key to cost-effective NFV

OneAccess’ CMO, Pravin Mirchandani, argues that only the most efficient VNFs will pass the scrutiny of the operator community; the risks and costs of accepting anything different are too high.

The arrival of the software-defined networking era has been enthusiastically welcomed by all, as it promises both infrastructure and services that can flex and grow in line with operators’ changing requirements. The question of how we can transition to an SDN/NFV-based infrastructure as quickly as possible is occupying industry minds, service providers and vendors alike. Anyone with lingering doubts need only consider the recent M&A moves by the industry big guns, Juniper and Cisco, who are feverishly trying to reinvent themselves as born-again software businesses.

Virtualized network functions (VNFs), delivered over the NFV infrastructure (NFVI), promise to minimize the operator investments needed to customize future services in line with their operational needs. But, at the moment, the jury is still out on what the killer VNFs are going to be. This question raises new concerns: what VNFs should operators plan for when specifying their white box? How will demand for resources play out over the business cycle? Here carriers are facing some tough decisions, ones that may ultimately determine their ability to compete in a crowded sector. An over-specified white box will waste huge amounts of money, and NFV migration is already proving much more costly than first thought. Far worse, though, is the prospect of under-specification, which would result in a virtualized environment that simply isn’t fit for purpose.

The dilemma for the operators can’t be taken lightly. If they deploy basic bare-metal units, the risk is lost revenue when customers, who cannot upgrade when needed, move to an alternative supplier. Most likely, a middle ground will be reached, and attention will refocus on the familiar question of how to get more for less. Those that thought this question might evaporate as the network goes software-centric should prepare for disappointment. Operators will be exerting great pressure on VNF developers to do just this, by creating ultra-compact and efficient software functions, not least so their choice of white box stands the best chance of coping with as-yet-unknown future demands.
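The white-box sizing decision described above can be reduced to a simple capacity check (all figures invented for illustration): sum the planned VNFs’ resource demands, compare against the candidate box, and look at the headroom left for as-yet-unknown functions.

```python
# Toy sizing check (invented figures): will a candidate white-box CPE host a
# planned set of VNFs? Under-specification fails outright; over-specification
# wastes money, so report the remaining headroom too.
BOX = {"vcpu": 8, "ram_gb": 16}   # hypothetical white-box capacity

vnfs = {                           # per-VNF demands, illustrative only
    "vRouter":   {"vcpu": 2, "ram_gb": 4},
    "vFirewall": {"vcpu": 2, "ram_gb": 4},
    "vWanOpt":   {"vcpu": 3, "ram_gb": 6},
}

def fits(box: dict, demands: dict):
    used = {k: sum(d[k] for d in demands.values()) for k in box}
    headroom = {k: box[k] - used[k] for k in box}
    return all(v >= 0 for v in headroom.values()), headroom

ok, headroom = fits(BOX, vnfs)
# ok is True, but with only 1 vCPU and 2 GB RAM to spare: little room for
# future VNFs, which is why compact functions matter so much.
```

The tighter each VNF’s footprint, the more headroom survives on the same box, which is exactly the commercial argument for ultra-compact functions.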

There are many vendors aiming to position themselves in this space, which, it seems, is where the long-term revenue opportunity exists. But carriers wanting to deploy a full catalog of VNFs, including functions such as WAN optimization, vCPE, VPN and encryption, need to be conscious that many developers hail from an enterprise background, in which their solutions ran on dedicated appliances drawing on uncontested computing power. VNF development is a different ballgame altogether, so it will be interesting to see how these modules perform when they are scaled down to share the resources of a single white box.
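The resource-contention concern above can be made concrete with a back-of-the-envelope check: does the aggregate footprint of a VNF catalog actually fit within a single white box? A minimal sketch, with entirely hypothetical vCPU and memory figures (no real product is implied):

```python
# Hypothetical white-box capacity and per-VNF footprints (illustrative only).
WHITE_BOX = {"vcpus": 8, "ram_gb": 16}

VNF_CATALOG = [
    {"name": "vRouter",        "vcpus": 2, "ram_gb": 4},
    {"name": "vFirewall",      "vcpus": 2, "ram_gb": 4},
    {"name": "vWAN-optimizer", "vcpus": 4, "ram_gb": 8},
]

def fits(box, vnfs):
    """Return True if the combined VNF footprint fits within the box."""
    total_vcpus = sum(v["vcpus"] for v in vnfs)
    total_ram = sum(v["ram_gb"] for v in vnfs)
    return total_vcpus <= box["vcpus"] and total_ram <= box["ram_gb"]

# This catalog (8 vCPUs, 16 GB) just fits; one more function would not.
print(fits(WHITE_BOX, VNF_CATALOG))  # True
```

Even this toy arithmetic shows why enterprise-grade footprints, sized for dedicated appliances, are the first thing to scrutinize when functions must share one box.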


NFV: The Current State of Play


Act One of the NFV show has finished, leaving operators to sift through the hype, piece together what they have learned and knuckle down to the serious business of design, feasibility and business case development. Pravin Mirchandani, CMO and NFV Evangelist at OneAccess, recounts some sentiments and soundbites from the NFV circuit.

1. Whoa! NFV is expensive!

Operators have now moved beyond best-guess, back-of-the-envelope cost estimates. At least some measured CAPEX projections are in, and with them comes a grudging realization of quite how costly NFV is going to be. Why? Because operators must build x86 server farms right across their network in order to host their NFV infrastructure (NFVi), which means a significant investment up front. What’s more, because virtualized traffic management requires the NFVi to be distributed (to place network functions and manage traffic appropriately across the geographic reaches of the network), savings can’t be made by consolidating these farms in a single location. Sitting on top of the compute infrastructure there are, of course, the software infrastructure and the network functions themselves, which also need to be funded. The result has been a marked shift in the justifications cited for NFV: away from CAPEX reductions and towards OPEX savings, service velocity and service agility.

2. We need SLAs, not just I/O

To date, when considering performance, the industry’s focus has been on input/output (I/O) but, given that virtualized network functions (VNFs) are sold as services to paying customers, I/O is only half of the story. To be commercial contenders, VNFs need to be associated with performance guarantees that are enshrined in service level agreements (SLAs). Further, an assessment of compute and memory footprint for each network function is required in order to assess deployment scalability. This is no great challenge where dedicated hardware is concerned, but when the network function is software-based (as with a VNF), located on a shared computing platform, the factors influencing performance are dependent on a range of resource-related variables, making guarantees harder to establish. This area needs serious attention before NFV can move into a fully commercial phase with the major operators.
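To sketch what “SLAs, not just I/O” means in practice: an SLA is typically written against a tail percentile, not an average, so a function whose mean latency looks healthy can still miss its guarantee. The figures and thresholds below are illustrative only, not drawn from any real VNF:

```python
# Check a hypothetical latency SLA: the 95th-percentile latency must stay
# under a target. All numbers are illustrative.
import math

def percentile(samples, pct):
    """Nearest-rank percentile (pct in 0..100) of a list of numbers."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

def sla_met(latency_ms, target_ms, pct=95):
    """True if the pct-th percentile latency is within the SLA target."""
    return percentile(latency_ms, pct) <= target_ms

# Mean latency is ~5 ms, so averaged I/O figures look healthy,
# but a single tail sample breaks a 10 ms 95th-percentile guarantee.
samples = [4.1, 4.3, 3.9, 5.0, 4.2, 12.5, 4.0, 4.4, 4.1, 4.6]
print(sla_met(samples, target_ms=10.0))  # False
```

On a shared platform, that tail sample is exactly the kind of artifact that noisy-neighbor resource contention produces, which is why percentile guarantees are harder to give for a VNF than for a dedicated appliance.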

3. Pricing is all over the map

Many operators won’t open the door to a VNF vendor without full disclosure of their pricing model, especially as a couple of leading vendors have announced pricing levels that are considered by the operators as unreasonable. Pricing models also remain fragmented between vendors, making it difficult for operators to compare like for like. The software element, in particular, is a minefield. Unsurprisingly, some vendors are applying NFV pricing in accordance with the anticipated impact that NFV will have on their future hardware revenues. This is distorting the market at a very early stage, inhibiting assessment by the operator community.

4. VNF trials are defined by what’s available, not by what’s needed

A lamentable result of the current NFV market is that operators’ choices of VNF trials are being defined by availability, not strategic objectives. vCPE and virtual firewall functions have both been around for a while, but are these the only two functions that operators want to deploy? Perhaps it’s too early to say. In any case, the real focus of today’s VNF trials is to successfully build the NFVi and nail down the orchestration and management pieces. In this sense, it doesn’t yet matter what the actual VNF is. Over time, this will change. Operators will begin to assess which VNFs are the most important for their business, and which will save them the most money. Ideally, operators should be bringing this thinking forward; if they settle on VNFs that differ from those they have trialled, they will struggle to understand the commercial, technical and operational implications.


NFV: Time to Get Back to (Virtual) Reality, for the Sake of the Operators


Pravin Mirchandani, CMO, OneAccess, calls for 'plausible NFV' amid a world of ill-judged proof-of-concepts.

NFV has been voraciously hyped and with good reason; there is much to get excited about. The potential benefits to operators and communication service providers (CSPs) of enabling a virtualized and service-oriented network environment are vast: increased network flexibility, additional security, reductions in network OPEX/CAPEX, dynamic capacity adaptation according to network needs and, perhaps most crucial of all, reduced time to market for new, revenue-generating network services that can combat declining ARPUs. NFV really could be the silver bullet that operators and CSPs have been looking for.

But there’s a storm brewing for 2015. So excited has the networking industry become that its NFV gaze has focused almost universally on the end-game: an idealized world in which new services are ‘turned up’ as part of a complete virtualized service chain. Perilously little has been said about how operators will migrate to utopia from the battlegrounds of today.

To date, the central migration message coming from the big five networking vendors has been: ‘Trust us. We’ll get you there.’ Needless to say, operators, whose collective future may be determined by their success with NFV, are far from comforted by such assurances. Many have endured vendor lock-in for decades and, as a result, are rightly viewing this first wave of proprietary NFV proof-of-concepts (POCs) with a healthy dose of scepticism. Given a viable and open alternative, NFV could be their chance to break free.

It’s not only vendor lock-in that operators should fear. In their haste to establish NFV dominance, many vendors have NFV-ized their existing lines of routers and switches by installing x86 cards and are now conducting operator POCs via this generic computing environment. This is sledgehammer NFV in action; it may prove that the theory behind NFV is possible, but it is seriously lacking in plausibility when any kind of scaled migration path is considered. Cash-strapped operators are highly unlikely to stomach the significant price premium required to install x86 cards across their entire CPE infrastructure. Moreover, x86 does not always deliver the optimized performance needed for the volume packet handling and SLA requirements of today’s network services, and in the operators’ last-mile network there are far too many access link combinations for the physical hardware to be done away with any time soon. ADSL, VDSL and SHDSL, among others, plus cellular for radio access (frequently used for backup), together with SFP ports to support different fiber speeds and optical standards, are not readily available in an x86 platform, and could only be made so at a prohibitive cost.


SDN/NFV – Is it the breakthrough CSPs need to help level the OTT playing field?


I think it is fair to say that the business communications services market is going through one of its most challenging periods, as CSPs struggle to come to terms with their customers’ demand for faster, more reliable and feature-rich services whilst also dealing with increased competition and the inevitable pressure this puts on prices. Although this is a typical and predictable scenario in a rapidly maturing tech-based market, it means that service providers cannot afford to assume that their customers will continue to renew their contracts out of a sense of loyalty, no matter what.

With increased choice for core communications requirements such as telephony and business application services readily available from specialist third-party providers in the cloud, the CSPs’ pipe is now in danger of being regarded as just another basic utility, alongside the mains power and water services any organization needs to function. The challenge for CSPs is to find ways to tap into new revenue potential, and compensate for ARPU decline, by offering innovative new services themselves and fighting back against the OTT players who are increasingly eating their lunch. But it is not easy to see how this can be achieved without a radical change to their existing core infrastructure and CPE access technologies.

Given this challenging picture, the arrival of SDN/NFV technology could be the timely and welcome development CSPs have been looking for. By levelling the playing field and enabling the rapid rollout of new services virtually on demand, it could allow service providers to begin competing realistically on price, features and functionality with pure-play hosted service operators.

SDN, and NFV functionality in particular, offers service providers the prospect of adding new features or switching existing services on and off in line with customers’ changing needs, without having to install new network edge devices or change the CPE. Combined with the ability to remotely manage and provision unlimited numbers of individual routers from a central office location, the proposition could be a genuine game-changer for service providers.
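As a toy illustration of that central-provisioning idea (all names and structure are hypothetical, not any vendor’s API): the controller holds the per-CPE service state, so enabling or disabling a feature is a bookkeeping change at the central office rather than a truck roll or a device swap:

```python
# Toy model of central service provisioning: virtualized features are
# switched on/off per CPE from one controller, with no change to the CPE
# hardware itself. Names are hypothetical; this mirrors the concept only.

class ServiceController:
    def __init__(self):
        self._services = {}  # cpe_id -> set of enabled service names

    def enable(self, cpe_id, service):
        self._services.setdefault(cpe_id, set()).add(service)

    def disable(self, cpe_id, service):
        self._services.get(cpe_id, set()).discard(service)

    def enabled(self, cpe_id):
        return sorted(self._services.get(cpe_id, set()))

controller = ServiceController()
controller.enable("branch-42", "vFirewall")
controller.enable("branch-42", "vWAN-opt")
controller.disable("branch-42", "vWAN-opt")
print(controller.enabled("branch-42"))  # ['vFirewall']
```

In a real deployment the controller would push this state to the NFVi via an orchestration layer, but the commercial point stands: the service catalog lives centrally, not in the box at the customer site.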

SDN/NFV, however, represents both opportunity and challenge for CSPs. Its promise of agility, flexibility and reduced costs is highly attractive, but implementing SDN/NFV across their current silo-based organizations and changing well-established business practices will be a non-trivial and lengthy set of people-oriented tasks to navigate. CSPs are also looking to vendors to progress beyond architectural and vision statements and show them real use-cases for this new technology.
