

WELCOME TO

EKINOPS BLOG

Unpacking the technologies behind the Zero-Touch Provisioning of a universal CPE


Explore the combination of technologies that enable remote provisioning and management of the uCPE and the VNFs this powerful new device supports.

A Universal Customer Premises Equipment (uCPE) consists of software and hardware components that together create a small virtualization platform at the customer premises, capable of running multiple Virtual Network Functions (VNFs) in a local service chain. This is similar to running virtualized network functions in the datacentre, but at a smaller scale. It enables Communication Service Providers (CSPs) to disaggregate software and hardware at the CPE level and gives them unprecedented flexibility to run any type of service on the same commoditized hardware platform.

The services delivered by this programmable end-user device are generally controlled by next-generation service orchestrators, which handle the service-configuration aspects of the delivered services. Another level of orchestration concerns the deployment of the uCPE in the field. One of the challenges is to minimize its deployment cost through zero-touch provisioning. Pushing a new configuration to a uCPE is more complicated than to a legacy CPE because, in addition to the uCPE's own configuration, the service-chaining topology and the VNF images with their initial configurations must also be pushed to the device. Using NETCONF/YANG, however, the complete initial configuration can be delivered to the uCPE, including the service-chaining configuration and the VNF images with their initial start-up configurations. The initial communication with the provisioning server can be established using NETCONF Call Home, which allows the CPE to identify itself to the provisioning server and receive the configuration associated with the customer site where the device is installed.
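To make the mechanism concrete, here is a minimal sketch of a provisioning server pushing an initial configuration to a uCPE over NETCONF, using the open-source ncclient Python library. The address, credentials and the ietf-interfaces snippet are illustrative only, not EKINOPS-specific; in a real deployment the session would typically be initiated by the uCPE itself via NETCONF Call Home (RFC 8071), and the payload would follow the device's published YANG models.

```python
# Illustrative sketch: push an initial configuration to a uCPE over NETCONF.
# Host, credentials and the YANG payload are placeholders, not real values.
from ncclient import manager

UCPE_HOST = "192.0.2.10"  # hypothetical management address of the uCPE

# A tiny ietf-interfaces fragment standing in for the full start-up
# configuration (service-chaining and VNF settings would be pushed the
# same way, using the device's own YANG models).
INITIAL_CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>wan0</name>
      <type xmlns:ianaift="urn:ietf:params:xml:ns:yang:iana-if-type">ianaift:ethernetCsmacd</type>
      <enabled>true</enabled>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(host=UCPE_HOST, port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as session:
    # Deliver the whole initial configuration in a single transaction.
    reply = session.edit_config(target="running", config=INITIAL_CONFIG)
    print(reply)
```

With Call Home the TCP connection is reversed (the uCPE dials out to the provisioning server), but the NETCONF exchange that follows is the same.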

With zero-touch provisioning it is possible to install a uCPE and its configuration in a fully automated way. In many cases, however, end-to-end orchestration systems do not yet support zero-touch provisioning, or the provisioning systems are not in place or not mature enough to support this level of automation.

In addition to the OneAccess-branded uCPE hardware (OVP, or Open Virtualized Platform) and software (LIM, or Local Infrastructure Manager), EKINOPS also offers OneManage, a solution for zero-touch provisioning of uCPEs based on a service catalog. OneManage provides a northbound interface to OSS/BSS systems through which it receives the customer-related data associated with a new uCPE deployment. In this way, OneManage acts as an infrastructure orchestrator, or sub-orchestrator, taking care of uCPE provisioning and of managing the installed uCPE base.


Fast Caterpillars: Solving the Complexity Crisis in Carrier Automation


A new focus on simplicity and standardization holds the key, says Pravin Mirchandani, CMO, OneAccess.

At the recent Zero-Touch and Carrier Automation Congress in Madrid, I was reminded of a quote from George Westerman of MIT:

“When digital transformation is done right, it’s like a caterpillar turning into a butterfly, but when done wrong, all you have is a really fast caterpillar.”

One of the most debated topics at the show was how to deal with issues of complexity when enabling automation in operators' networks. The processes currently used to manage these networks were designed to make human intervention and management relatively efficient. In the new world of automation, however, even minimal human intervention is problematic. Despite the industry's best efforts, translating complex human decision-making into a computer-readable, algorithm-friendly process remains a real challenge.

Redesigning around simplicity

Indeed, multiple speakers at the show agreed that trying to automate a complex set of processes was the wrong approach and that a fresh take was needed. Starting again, defining new, simpler processes that focus on enabling zero-touch provisioning and automation, offers a more logical and achievable route forward.


Dissonance in the World of Virtualization


A collection of thoughts has been spinning in my head over the last few weeks, based on various customer visits, presentations and panel debates I have seen or participated in recently, and they have collectively formed into a common notion: that of dissonance. These reflections have come about as a result of a number of unresolved issues that are either being ignored or falling between the cracks as we all grapple with the complexities of introducing virtualization.

I’ll start with MANO. Many of us know that there is a competition between Open Source MANO sponsored principally by Telefonica’s I+D entity (its research and development arm) and ONAP (Open Network Automation Platform), which combines the outputs of AT&T’s homegrown MANO system called ECOMP and Open-O (Open Orchestrator Project) into a common open-source project. Setting aside the architectural differences between these two major MANO initiatives and their respective levels of industry take-up, what’s increasingly obvious is that both are horrendously complex and quite frankly beyond the means of most operators to implement. Open-source only saves the cost of acquisition, not of integration and deployment. That’s why many operators, both large and small, are sitting on the sidelines waiting for a viable solution to present itself. This might quite possibly be in the form of MANO-as-a-service, which is already attracting the attention of business managers at professional services outfits and even venture-capital funding, which reminds me that Sun Tzu said in the Art of War: “In the midst of chaos, there is also opportunity”.

Another dissonance that strikes me is that between the CTO offices in the vanguard of introducing virtualization and their own business managers. It's not just that virtualization is proving horribly expensive to introduce, which makes it difficult to convincingly flesh out an ROI spreadsheet. Rather, the technical people have become so absorbed by the scope and scale of the technical challenges in front of them that they have collectively lost sight of the end goal: how to market virtualized network services and deliver benefits to their end-customers. A recent big theme at conferences has been the challenges involved in on-boarding new VNFs, over 100 in some cases. My questions, though, are: who needs all this choice; how can customers without big IT budgets select which are the right ones for them; what is the benefit for these end-users as opposed to what they are doing today (see my recent blog entitled 'Hello from the Other Side: The Customer Benefits of NFV'); and indeed what's in it for the operators – how are they going to make money when it's not at all clear which VNFs they can effectively sell as managed services to the volume part of the enterprise market, i.e. where they make money?

There is also increasing dissonance on the vendor side, and recently several vendors have openly voiced their frustrations on this point: virtualizing products requires investment, and until NFV starts moving out of the labs into volume service deployments we are all investing money in the hope of generating payback at some uncertain point in the future.

The other huge dissonance is that all the investment in next-generation virtualization efforts has eaten up most of the IT resources available for introducing new services and platforms for the operators' mainstream existing business. The number of service launches delayed at operators due to 'IT budget issues' or 'lack of available personnel' is now an industry-wide phenomenon, holding back cost-reduction and revenue-generating service initiatives in the market. This is ironic, as virtualization is meant to accelerate service delivery, not stall it.


ROI for white-box CPEs: a question of segmentation


Operators commonly expect that moving to SDN and NFV will enable cost reductions, through efficiency gains in network management and device consolidation. As they get closer to deployment, however, commercial reality is telling a different story, depending on the market segment being addressed.

One area where the ROI for white-box CPEs is easy to justify is appliance consolidation. If you can consolidate a number of proprietary appliances into a single white-box CPE then the CAPEX savings are clear. Chaining, for instance, a vRouter, a WAN optimizer and a next-generation firewall into one single x86 appliance (i.e. a white-box CPE) delivers immediately identifiable cost savings: one appliance instead of three is basically the formula and this is a commonly targeted combination. Combine this with the prospect of increased wallet share from large enterprises, which often run their networks themselves in do-it-yourself mode, and the large enterprise segment looks increasingly attractive for operators.
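As a rough illustration of that formula, the sketch below compares the per-site CAPEX of three dedicated appliances against a single white-box CPE hosting the equivalent VNFs. All of the prices are invented placeholders; the point is the shape of the comparison, not the figures.

```python
# Back-of-the-envelope view of "one appliance instead of three".
# Every number here is a hypothetical placeholder, not a vendor price.
dedicated_appliances = {
    "router": 1200,          # hypothetical unit CAPEX, in EUR
    "wan_optimizer": 2500,
    "ng_firewall": 3000,
}
white_box_cpe = 2800         # hypothetical x86 white-box cost
vnf_licences = {"vRouter": 400, "vWANopt": 900, "vNGFW": 1100}

capex_dedicated = sum(dedicated_appliances.values())
capex_white_box = white_box_cpe + sum(vnf_licences.values())

print(f"Three dedicated appliances: {capex_dedicated} EUR")
print(f"White-box CPE + three VNFs: {capex_white_box} EUR")
print(f"Saving per site:            {capex_dedicated - capex_white_box} EUR")
```

Run the same arithmetic for a site that needs only the router and the comparison flips.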

Let's be clear, though: this is just a large-enterprise play. SMBs and multi-site deployments for government or highly distributed organizations have no need for WAN optimization and little need for a next-generation firewall; the on-board firewall that comes with their router, together with a PC-based anti-virus subscription and an email anti-spam service, is usually sufficient. As a result, anyone working on building business cases for white-box CPEs for the volume part of the market will attest that ROI is a tough nut to crack.

The draw for this market segment is the potential to increase ARPU by making it easier and more flexible for customers to take up additional services, delivered automatically through virtualization.

In terms of hardware CAPEX, the cost of white-box CPE deployment outstrips that of traditional CPEs. For the large-enterprise segment, which often deploys multiple appliances, this cost increase is compensated by reducing the number of appliances. For other market segments, where a single CPE is more typically deployed, savings need to come from OPEX reductions or TCO savings. The latter, however, is notoriously difficult to calculate and is usually irrelevant in a context where staff reductions are difficult to achieve, particularly in a period of technology transition.


Fifty Shades of NFV?


In the racy world of CPE architecture, what virtualization-hungry service providers say they want isn’t always what they need, says Pravin Mirchandani, CMO, EKINOPS.

Alright, perhaps ‘racy’ is going a bit far, but as the virtualization industry moves out of ‘does it work’ and into ‘let’s make it happen’, pulses are certainly starting to quicken. Not least because service providers are having to make tough calls about how to architect their management and orchestration (MANO). Many of these decisions revolve around the deployment of virtualized network functions (VNFs), via some form of customer premises equipment (CPE).

Several ‘shades’ are emerging, each with their advantages and drawbacks.

The 'NETCONF-enabled CPE' model emulates what we have today: a fixed number of physical network functions (note: not virtual) are embedded into a traditional L3 multi-service access router. The key difference here is that the router, as its name suggests, supports the NETCONF management protocol and can, as a result, be managed in a virtualized environment. In truth, this is a pretty rudimentary form of virtualization; the router can be managed by a next-generation OSS with NETCONF and its embedded physical functions can be turned on and off remotely, but that's about it. The device is not reprogrammable, nor can its network functions be removed or replaced with alternatives. The market for this deployment model lies in two use cases. Firstly, it serves as a bridging solution, enabling service providers to operate traditional and virtualized network services in parallel and so facilitating migration. Secondly, given that many of today's VNFs are heavy and need considerable amounts of memory and processing resources to operate, the more flexible white-box alternatives are costly in comparison. Specialist vendors like OneAccess have been developing dedicated CPE appliances (with embedded PNFs) for years, where compact and efficient code has always been a design goal in order to keep appliance costs under control. For more conservative operators that are keen to get 'in the game', the proven reliability and comparative cost efficiency of this model can offset its relatively limited flexibility. Rome wasn't built in a day, and some operators will prefer to nail the centralized management and orchestration piece before investing heavily in pure-play virtualization appliances for the network's edge.

A purer approach is to invest in a 'thick branch CPE' or, in other words, an x86-based white-box solution running Linux, onto which VNF packages can be pre-loaded and, in the future, removed, replaced or even selected by customers via, say, a web portal. This approach delivers far greater flexibility and is truer to the original promise of NFV, in which the network's functions and components can be dismantled and recomposed in order to adjust a service offer. The snag, however, is that white-box CPEs come at a cost. More memory and more processing power mean more cash. That's why the race is on to develop compact VNFs that minimize processing requirements and, as a result, enable a limited-spec white-box to do more with less. Again, unsurprisingly, those ahead of the curve are VNF vendors that have the experience of wringing every last drop of performance out of compact and cost-efficient appliances, purpose-designed for operators and service providers.
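For a feel of what 'removed and replaced' means in practice, here is a minimal sketch of a VNF lifecycle on a Linux white-box, assuming a KVM/libvirt-based platform driven through the standard virsh tool. The VNF name and file path are placeholders; a real uCPE would wrap these steps in its local infrastructure manager rather than shell commands.

```python
# Sketch of loading, starting and later replacing a VNF on a Linux white-box
# that uses KVM/libvirt (an assumption for illustration). The domain name
# "firewall-vnf" and the XML path are hypothetical placeholders.
import subprocess

def virsh(*args):
    """Run a virsh sub-command and raise if it fails."""
    subprocess.run(["virsh", *args], check=True)

# 1. Register the VNF from a pre-packaged libvirt domain definition.
virsh("define", "/var/lib/vnf/firewall-vnf.xml")

# 2. Boot it; from here it can be attached to the local service chain.
virsh("start", "firewall-vnf")

# 3. Later, remove it (or swap in an alternative) without touching hardware.
virsh("destroy", "firewall-vnf")    # stop the running instance
virsh("undefine", "firewall-vnf")   # delete its definition from the host
```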


The White-Box CPE: Separating Myths from Reality


In the world of customer premises equipment (CPE), the white-box is a new idea. It should then come as no surprise that misconceptions among operators and CSPs about what constitutes the white-box CPE are common. Here are four of the most prevalent.

Myth #1: The white-box CPE is just commodity hardware.

No. The software-hosting platform is the most important part! Google and Apple have succeeded with smartphones because they created killer platforms, carefully designed to provide a secure environment ready for third-party developers to exploit, which enabled their apps base to proliferate. The white-box CPE is no different. Don't get me wrong, it is still all about using commodity hardware, but this is just the tip of the iceberg. Its enabling potential stems from the software platform that controls the hardware, not from the hardware itself.

The same goes for the network functions running on the white-box CPE. They need to be installed, chained, activated, run, provisioned, monitored and charged, and all of this must happen in a secure manner with maximum efficiency and predictability.

We're not talking about any old software here, either. This is highly specialist, instance-specific software, and, unlike on smartphones, the functions are often dependent on each other. There are lots of open-source components that can carry out each of these tasks individually, but they also need to be integrated, managed with homogeneous, standardized APIs and provided with pre-packaged, documented use cases to facilitate their integration. We often see service providers adopting a DIY approach, but this only lasts until they realize the extent of work and the depth of know-how required to assemble all these pieces together. Building a demonstrator is one thing; guaranteeing the operational lifecycle over the lifetime of a service is another.
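To make that lifecycle point more tangible, here is a toy sketch of the bookkeeping such a platform has to keep for every hosted function. None of the names come from a real product; they simply mirror the stages listed above.

```python
# Toy model of per-VNF lifecycle tracking on a white-box platform.
# States and class names are illustrative, not taken from any product.
from dataclasses import dataclass, field

LIFECYCLE = ["installed", "chained", "provisioned", "running", "monitored"]

@dataclass
class HostedVNF:
    name: str
    image: str
    state: str = "installed"
    neighbours: list = field(default_factory=list)  # links in the service chain

    def advance(self, new_state: str) -> None:
        """Move to the next lifecycle stage, refusing to skip steps."""
        if LIFECYCLE.index(new_state) != LIFECYCLE.index(self.state) + 1:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

# Example: a firewall VNF chained behind a virtual router.
fw = HostedVNF(name="vFW", image="fw-1.2.qcow2", neighbours=["vRouter"])
for stage in ("chained", "provisioned", "running", "monitored"):
    fw.advance(stage)
```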

Myth #2: White-box CPEs are for everyone.

The whole idea of white-box CPEs is to foster the ability to take VNFs from multiple vendors and have the freedom to mix-and-match them according to the service provider's desired functions, price and brand.

This is all good 'on paper'. The reality, however, is different. Just like when specifying additional options on a car, the bill soon adds up. In fact, imagine being able to walk into a VW dealer and demand options from BMW, Mercedes, Honda and Ford. Welcome to the world of the white-box CPE!

Large enterprises can afford it because, right now, they are buying single-use network appliances and stacking them up at the customer premises. The white-box CPE's promise of appliance consolidation is so great that the economics allow it to be expensive.


Ultra-compact VNFs are the key to cost-effective NFV


OneAccess’ CMO, Pravin Mirchandani, argues that only the most efficient VNFs will pass the scrutiny of the operator community; the risks and costs of accepting anything different are too high.

The arrival of the software-defined networking era has been enthusiastically welcomed by all, as it promises both infrastructure and services that can flex and grow in line with operators' changing requirements. The question of how to transition to an SDN/NFV-based infrastructure as quickly as possible is occupying industry minds, service providers and vendors alike. Anyone with lingering doubts need only consider the recent M&A moves by the industry big guns, Juniper and Cisco, who are feverishly trying to reinvent themselves as born-again software businesses.

Virtualized network functions (VNFs), delivered over the NFV infrastructure (NFVI), promise to minimize the operator investments needed to customize future services in line with their operational needs. But, at the moment, the jury is still out on what the killer VNFs are going to be. This raises new concerns: what VNFs should operators plan for when specifying their white box? How will demand for resources play out over the business cycle? Here carriers face some tough decisions; ones that may ultimately determine their ability to compete in a crowded sector. An over-specified white box will waste huge amounts of money, and NFV migration is already proving much more costly than first thought. Far worse, though, is the prospect of under-specification, which would result in a virtualized environment that simply isn't fit for purpose.
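A crude sizing exercise shows why the specification question matters. The resource figures below are invented for illustration only; real VNF footprints vary enormously between vendors, which is precisely the operators' problem.

```python
# Hypothetical white-box sizing check: does the planned VNF chain fit?
# All resource figures are invented placeholders for illustration.
planned_vnfs = {
    # name: (vCPUs, RAM in GB)
    "vRouter":       (2, 2),
    "vFirewall":     (2, 4),
    "vWANoptimizer": (4, 8),
}
white_box = {"vcpus": 8, "ram_gb": 16}
platform_overhead = {"vcpus": 1, "ram_gb": 2}  # hypervisor and management software

need_vcpus = sum(c for c, _ in planned_vnfs.values()) + platform_overhead["vcpus"]
need_ram = sum(r for _, r in planned_vnfs.values()) + platform_overhead["ram_gb"]

print(f"Required : {need_vcpus} vCPUs / {need_ram} GB RAM")
print(f"Available: {white_box['vcpus']} vCPUs / {white_box['ram_gb']} GB RAM")

if need_vcpus > white_box["vcpus"] or need_ram > white_box["ram_gb"]:
    print("Under-specified: this service chain will not fit as planned.")
else:
    print("Fits, with headroom of", white_box["vcpus"] - need_vcpus, "vCPUs")
```

With these invented numbers the chain does not fit; trimming even a single vCPU from one of the functions changes the verdict, which is the whole argument for ultra-compact VNFs.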

The dilemma for operators can't be taken lightly. If they deploy basic bare-metal units, the risk is lost revenue when customers who cannot upgrade when needed move to an alternative supplier. Most likely, a middle ground will be reached, and attention will refocus on the familiar question of how to get more for less. Those who thought this question might evaporate as the network goes software-centric should prepare for disappointment. Operators will be exerting great pressure on VNF developers to do just this by creating ultra-compact and efficient software functions, not least so that their choice of white box stands the best chance of coping with as-yet-unknown future demands.

There are many vendors aiming to position themselves in this space, which, it seems, is where the long-term revenue opportunity exists. But if they want to deploy a full catalog of VNFs, including functions such as WAN optimization, vCPE, VPN and encryption, carriers need to be conscious that many developers hail from an enterprise background, in which their solutions ran on dedicated appliances drawing on uncontested computing power. VNF development is a different ballgame altogether, so it will be interesting to see how these modules perform when they are scaled down to share the resources of a single white box.


Virtualization means changing the approach to proof-of-concept projects for both Carriers and Vendors


OneAccess CTO Antoine Clerget argues that vendors need to radically rethink their approach to PoC projects as carriers begin to plan their transition to software-defined network functions.

Until quite recently, when Telcos wanted to evaluate the different vendors' technologies needed to build out new service platforms, the process was relatively straightforward. Typically, it meant plugging a box into the lab network, running the relevant functional and performance tests and then, assuming the results were acceptable, handing things over to the commercial and legal teams to thrash out the supply contracts. Well, perhaps a bit more complicated than that, but nevertheless a far simpler proposition than the one today's engineers face when demonstrating or assessing NFV products.

Unlike in the past, when the CPE router came as a discrete device with a range of physical components on board, individual network functions are today decomposed into discrete elements and then stitched together in the virtualization infrastructure. In the new NFV environment, resources are to some extent shared with other (potentially third-party) VNFs and the underlying infrastructure, and the VNFs run on hardware unknown to the VNF vendor. As a consequence, moving from independent elements to a complete solution, even if it sits in a single piece of hardware, requires new types of integration skills. A new and different approach is therefore needed, with a key focus on integration, particularly on how management, functional and infrastructure elements work together in an environment where there are still a lot of unknowns.

After a remarkably short period of technical debate between the various actors in the network industry, we are now seeing a definite upswing in interest in testing the claims of SDN and NFV technologies through genuine PoCs. This is especially true among those Telcos that are looking to future-proof their network infrastructures to achieve their major goals of flexibility for market differentiation and programmability for reducing costs.

As an early champion of the move to a more white-box/VNF approach to CPE architecture, we see this as a natural progression, building on our existing multi-functional router platforms, which already include an extensive range of software modules for specific network functions and security. At the same time, however, this has meant a total rethink of what is needed for a PoC project to be successful. With more emphasis on answering questions about the interoperability of the technology in this new and highly dynamic virtualized environment, our engineering teams need to take a much more direct, hands-on role in the process than was previously the case.


Latest News

  • EKINOPS completes the acquisition of OTN technology from Padtec

    EKINOPS (Euronext Paris - FR0011466069 – EKI), a leading supplier of telecommunications solutions for telecom operators, today completes the acquisition of the OTN-Switch (Optical Transport Network) platform developed by Padtec, an optical communications system manufacturer based in Brazil.

     
  • A record 2nd quarter with sequential growth of 17%. H1 2019: revenue of €45 million and expected improvement in EBITDA margin

    EKINOPS (Euronext Paris - FR0011466069 – EKI), a leading supplier of telecommunications solutions for telecom operators, has published its revenue for the second quarter of 2019.

     
  • EKINOPS Launches Channel Partner Program in EMEA and APAC

EKINOPS (Euronext Paris - FR0011466069 – EKI), a leading supplier of optical transport equipment and router solutions, today announces the launch of the EKINOPS Channel Partner Program (ECPP). The program has been designed to help value-added resellers (VARs) and system integrators differentiate in the market by providing them with the opportunity to build, sell and deliver solutions tailored to their customers' needs, while still benefitting from Ekinops' extensive knowledge, resources and expertise.

     

EKINOPS Worldwide

EKINOPS EMEA & APAC
Telephone +33 (0)1 77 71 12 00

EKINOPS AMERICAS
Telephone +1 (571) 385-4103

 
