Virtualization means changing the approach to proof-of-concept projects for both Carriers and Vendors

OneAccess CTO Antoine Clerget argues that vendors need to radically re-think their approach to PoC projects as carriers begin to plan their transition to software-defined network functions.

Until quite recently, when Telcos wanted to evaluate the different vendors’ technologies needed to build out new service platforms, the process was relatively straightforward. Typically, it meant plugging a box into the lab network, running the relevant functional and performance tests and then, assuming that the results were acceptable, handing things over to the commercial and legal teams to thrash out the supply contracts. Well, perhaps a bit more complicated than that, but nevertheless a far simpler proposition than the one today’s engineers face when demonstrating or assessing NFV products.

Where the CPE router once came as a discrete device with a range of physical components on board, today individual network functions are decomposed into discrete elements and then stitched together in the virtualization infrastructure. In the new NFV environment, resources are to some extent shared with other (potentially third-party) VNFs and with the underlying infrastructure, and the VNFs run on hardware unknown to the VNF vendor. As a consequence, moving from independent elements to a complete solution, even when it sits on a single piece of hardware, requires new types of integration skills. A new and different approach is therefore needed, with a key focus on integration, and particularly on how management, functional and infrastructure elements work together in an environment where there are still a lot of unknowns.
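To make the shift concrete, here is a deliberately simplified sketch (in Python; the function names, resource figures and host sizes are invented for illustration and do not describe any particular product) of what a decomposed CPE looks like once its functions become discrete VNFs drawing on a shared, and initially unknown, pool of host resources:

```python
# Illustrative sketch only: a decomposed vCPE expressed as a chain of VNFs
# sharing one host's resources. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Vnf:
    name: str
    vcpus: int   # vCPUs requested from the shared infrastructure
    ram_mb: int  # memory requested, in MB

@dataclass
class Host:
    cores: int
    ram_mb: int

# Functions that once lived inside a single CPE box, now discrete elements
# stitched together into a chain.
chain = [
    Vnf("routing", vcpus=2, ram_mb=2048),
    Vnf("firewall", vcpus=2, ram_mb=1024),
    Vnf("wan-optimisation", vcpus=4, ram_mb=4096),
]

def fits(chain: list[Vnf], host: Host) -> bool:
    """Does the whole chain fit on one host? A question that only arises
    once functions are decomposed and co-located."""
    return (sum(v.vcpus for v in chain) <= host.cores
            and sum(v.ram_mb for v in chain) <= host.ram_mb)

print(fits(chain, Host(cores=8, ram_mb=16384)))  # fits on this host...
print(fits(chain, Host(cores=4, ram_mb=8192)))   # ...but not on a smaller one
```

Even this toy example shows why "does it work" is no longer a question about one function in isolation, but about which other functions share the box and what the infrastructure underneath actually provides.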

After a remarkably short period of technical debate between the various actors in the network industry, we are now seeing a definite upswing in interest in testing the claims of SDN and NFV technologies through genuine PoCs. This is especially true among those Telcos that are looking to future-proof their network infrastructures and achieve their two major goals: flexibility for market differentiation and programmability for cost reduction.

As an early champion of the move to a more white-box/VNF approach to CPE architecture, we see this as a natural progression, building on our existing multi-functional router platforms, which already include an extensive range of software modules for specific network functions and security. At the same time, however, it has meant a total re-think of what is needed for a PoC project to be successful. With more emphasis on answering questions about the interoperability of the technology in this new and highly dynamic virtualized environment, our engineering teams need to take a much more direct, hands-on role in the process than was previously the case.

This is reflected throughout the PoC process, both in terms of having one or more virtualization experts on the ground with their sleeves rolled up, and in terms of remote support from engineering back at base. With many Telcos on a steep learning curve when it comes to adapting their network architecture to this new virtualized world, it is particularly important to be able to demonstrate how the technology can work together with an array of different systems, both virtualized and legacy.

This, combined with the evolving nature of the technology and products, makes it difficult to anticipate what questions and scenarios will arise as the PoC progresses. Telco teams are often assessing multiple technologies and products within a short period of time, which inevitably leads to a lot of spontaneous, what-if scenarios. For instance: ‘oh, you support that feature in the DPDK; can we try it and see if it leads to a performance improvement’, or ‘now that we have seen your vCPE working, can we test whether we can chain it using NETCONF with these two functions from other vendors’, or ‘can we see which combinations of functions can actually be chained on a 4-core Atom as opposed to this Broadwell platform we just received yesterday’?

A key question concerns the integration of third-party virtual network functions (VNFs), which can theoretically be done with almost no effort. It is therefore always tempting in a PoC to explore new use cases. You may end up with very complex scenarios working very quickly, but you may also suddenly be confronted with issues whose resolution requires many different stakeholders and areas of expertise.
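As a rough illustration of what such an exploratory chaining exercise can look like in the lab, the sketch below uses the open-source ncclient Python library to push a chaining configuration to a VNF over NETCONF. The address, credentials and, above all, the XML payload are placeholders invented for this example; in a real PoC the payload would follow whichever YANG model each vendor actually exposes:

```python
# Illustrative sketch of driving a VNF's NETCONF interface during a PoC.
# Address, credentials and payload are placeholders, not a real model.
from ncclient import manager

# Hypothetical payload: insert the vCPE into a chain between two
# third-party functions. The namespace and elements are invented.
CHAIN_CONFIG = """
<config>
  <service-chain xmlns="urn:example:poc:chain">
    <name>poc-chain-1</name>
    <hop><order>1</order><function>vendor-a-firewall</function></hop>
    <hop><order>2</order><function>our-vcpe-router</function></hop>
    <hop><order>3</order><function>vendor-b-wan-opt</function></hop>
  </service-chain>
</config>
"""

with manager.connect(host="192.0.2.10", port=830,
                     username="lab", password="lab",
                     hostkey_verify=False) as m:
    m.edit_config(target="running", config=CHAIN_CONFIG)
```

Getting the three boxes to accept such a change is the easy part; understanding why the chained result then behaves unexpectedly is where the many stakeholders and areas of expertise come in.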

Most hardware equipment today guarantees performance and behavior under certain "typical" working conditions, mostly defined by the nature of the traffic (packet size, for example an IMIX profile). Based on these, one can quite easily predict the overall performance and behavior of a system that integrates various physical elements. When it comes to VNFs, however, the performance and behavior of a network function depend on the resources allocated to it at any given point in time. There is thus a need to broaden the concept of "typical" working conditions, from the nature of the traffic alone to include the allocated resources, the hardware capabilities, and their dynamics. Until this is stabilized and standardized, it will remain difficult to anticipate, test, and certify the "performance" or the "density" of the various components and VNFs that make up the system.
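To illustrate why "typical" working conditions now have to include allocated resources as well as the traffic profile, here is a deliberately crude back-of-the-envelope model (the per-core packet rate and the IMIX weighting are assumed figures, not measured results): the same VNF image delivers very different throughput depending on what the infrastructure happens to give it at that moment:

```python
# Toy model: estimated VNF throughput as a function of allocated cores and
# traffic profile. The per-core packet rate and IMIX mix are assumed,
# illustrative numbers only.

# Simple IMIX-style profile: (packet size in bytes, share of packets)
IMIX = [(64, 0.58), (594, 0.33), (1518, 0.09)]

def estimated_throughput_gbps(allocated_cores: int,
                              pps_per_core: float = 1.5e6) -> float:
    """Rough estimate: forwarding rate scales with allocated cores, and
    throughput in bits per second then depends on average packet size."""
    avg_packet_bytes = sum(size * share for size, share in IMIX)
    pps = allocated_cores * pps_per_core
    return pps * avg_packet_bytes * 8 / 1e9

# The same VNF image, two different allocations at two points in time:
print(estimated_throughput_gbps(allocated_cores=4))  # a roomy allocation
print(estimated_throughput_gbps(allocated_cores=1))  # a squeezed one
```

In reality the relationship is nowhere near this linear, since placement, cache sharing and neighbouring workloads all intrude, which is precisely why the concept of "typical" conditions needs to be broadened before performance and density can be certified.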

What is clear is that, in most cases, the testing experience itself very quickly throws up new options and challenges, and these can shift over time as new vendors and new versions of products come into the labs. To deal with all these curveballs, we have learnt that it pays to have our experts immediately on hand. Indeed, we often learn as much about the state of the technology as our customers do!

While this is obviously a major commitment, our experience to date has proven it to be a worthwhile investment, and one appreciated by the Telcos involved. For them, going through the PoC process is also a major commitment, one that can tie up their key people for several days for each vendor solution they want to evaluate. If they have to wait for answers to arrive by email, perhaps several hours or days later, focus and momentum can be lost in the meantime, leading to lost time and assessment delays.

Fundamentally, the whole industry is in the midst of a massive learning curve, and virtualization demos and PoCs are revealing just how dynamic and untested the various decomposed technology elements are. When dealing with new, market-disruptive technologies, it is all the more important that all the necessary resources are made available, not only to prove that the technology actually works as claimed (and to adapt it if it does not), but also to demonstrate the depth of knowledge and support that will ultimately be needed when the project moves to production.