
WELCOME TO

EKINOPS BLOG

Pravin Mirchandani joined EKINOPS (formerly OneAccess) as Chief Marketing Officer in May 2011. A graduate of both the University of Edinburgh and the London School of Economics, with more than 25 years’ experience in the telecoms industry, Pravin has held key roles in marketing, product development and sales at major telecom equipment manufacturers and software vendors such as Bay Networks, Nortel, Orchestream and Codima Technologies. Most recently, Pravin was CEO at Syphan Technologies UK, an innovative organisation providing security services to Managed Service Providers (MSPs).

Fake Virtualization

Pravin Mirchandani, CMO, OneAccess (an EKINOPS brand), dissects a pointed phrase coined at this week’s SDN NFV World Congress.

On day one of this year’s SDN NFV World Congress in The Hague, I was intrigued by a phrase used in the keynote: ‘fake virtualization’.

The speaker, Jehanne Savi, Executive Leader, All-IP & On-Demand Networks programs at Orange, used it to describe the status of the industry. It was a powerful accusation but, aside from being tuned in to the zeitgeist of our times, what did she really mean?

Did she mean vendors were trumpeting false claims about their solutions? Or was she suggesting that they were failing to disaggregate network functions properly? Perhaps she was pointing to a failure in software abstraction, or a lack of software-defined control or, even worse, a recourse to proprietary APIs or protocols. The longer I thought about it, the more interpretations sprang to mind.

Her true meaning became clear towards the end of her keynote, when she used the words ‘vertical solutions’ as a direct translation for her neologism. Only then did the links between her various arguments appear. This wasn’t a single instance of fake news or testimony, but rather a broader lamentation on the lock-in nature of mainstream virtualization solutions; on the industry’s failure to enable an open, multi-vendor landscape that would give operators the choice and flexibility they sought to build new competitive solutions.

See full post

Fast Caterpillars: Solving the Complexity Crisis in Carrier Automation


A new focus on simplicity and standardization holds the key, says Pravin Mirchandani, CMO, OneAccess.

At the recent Zero-Touch and Carrier Automation Congress in Madrid, I was reminded of a quote from George Westerman of MIT:

“When digital transformation is done right, it’s like a caterpillar turning into a butterfly, but when done wrong, all you have is a really fast caterpillar.”

One of the most debated topics at the show was how to deal with issues of complexity when enabling automation in operators’ networks. The processes currently used to manage these networks were designed to make human intervention and management relatively efficient. In the new world of automation, however, even minimal human intervention is problematic. Despite the industry’s best efforts, translating complex human decision-making into a computer-readable, algorithm-friendly process remains a real challenge.

Redesigning around simplicity

Indeed, multiple speakers at the show agreed that trying to automate a complex set of processes was actually the wrong approach, and that a fresh take was needed. Starting again, to define new, simpler processes focused on enabling zero-touch provisioning and automation, offers a more logical and achievable route forward.

See full post

Agility is the Goal and Lack of it the Handbrake


When looking at the virtualization landscape in its broader sense, I can’t help seeing that the real problem is agility, or rather the lack of it.

Take a look at SDN, whose main premise is to separate the control plane from the data plane and thereby achieve high agility for its users. The same goes for NFV: abstract the software from the hardware, then combine and deploy software functions at will for high agility. So the operators, at least the bigger ones who have the least agility, are investing megabucks to deliver on the twin promises of SDN and NFV. What’s frustrating for them is that they are doing so lamentably slowly and in a decidedly non-agile manner.

In a sense it’s not so surprising. In effect, the operators are trying to re-engineer 15 years of painstakingly built managed services processes in a very short period of time. And they are doing so using a combination of new operational and management techniques, new albeit commonly available hardware and re-packaged software, while co-opting their existing set of waterfall processes and organisational silos. This is like pressing on the accelerator with the handbrake on. Some, e.g. Verizon and belatedly AT&T, are trying to loosen the rigid structure of their organisations to create a more agile decision-making process and a DevOps culture, where network specialists and IT folk combine to break down internal barriers and release their internal disc brakes.

It’s when we take a look at the arrival of SD-WAN that we see the real impact of this lack of agility. SD-WAN vendors position their wares to enterprises with self-select portals, centralised policy control and the ability to take advantage of cheaper Internet links to deal with all the cloud-based Internet services that are swamping their traditional operator-provided WAN connections. This puts control, and thus agility, in the hands of the enterprises, which is precisely what they want.

The response of the operators to SD-WAN, however, is telling. Unlike with SDN and NFV, where they are re-engineering their processes with open APIs to (slowly) deliver on the promise of virtualization, they have taken a very different tack with SD-WAN. For the most part, they have:

See full post

When the Rubber (Finally) Hits the Road: 2018 Predictions


Looking ahead to 2018, Pravin Mirchandani, CMO, OneAccess Networks, anticipates a year of challenge and opportunity for operators.

Europe Starts to Get Serious about Virtualization

After a long period of experimenting with virtualization technology in their labs, European Telcos (or at least some of them) will get serious about introducing virtualized services for their enterprise customers. This is clearly apparent at Deutsche Telekom, in particular, but also at Orange and BT. I’d still hesitate to predict that we will see many actual service launches in Europe, but nonetheless decisions will be taken, budgets will be committed and the new product introduction (NPI) work for virtualized services will begin.

The Cost of White-Box CPEs will be Driven Down but Not by Price Reduction

It’s clear that all the big Telcos are convinced of the benefits of an on-premise white-box strategy. In 2017 they debated how to move away from grey-boxes, principally from Cisco and Juniper; right now they have a different problem: cost, particularly for the appliance they need for the volume part of the enterprise market, commonly known as the ‘small uCPE’.

Yet if half the cost of a white-box derives from a monopoly vendor – Intel – then, in the absence of competition (hint: think ARM), the only way to reduce costs will be to moderate demands on it. This will come from two directions. The business managers at the Telcos will insist on a smaller set of VNF requirements to reduce the number of cores and memory required (the two key drivers of costs for a white-box appliance) and the VNF vendors will gradually reduce their resource footprint in response to the Telcos’ demands.
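As a back-of-the-envelope illustration of that dynamic, the sketch below (Python, with entirely hypothetical prices and VNF footprints) shows how trimming the VNF set directly shrinks the cores and memory, and hence the cost, of a small uCPE:

```python
# Back-of-the-envelope uCPE sizing: cores and memory drive the bill of
# materials. All footprints and prices below are hypothetical.

VNF_FOOTPRINTS = {            # (cores, GB of RAM) per VNF -- assumed values
    "vrouter":   (1, 2),
    "vfirewall": (1, 2),
    "vwanopt":   (2, 4),
    "vprobe":    (1, 1),
}

COST_PER_CORE = 40            # assumed cost per core, USD
COST_PER_GB = 10              # assumed cost per GB of RAM, USD

def ucpe_cost(vnfs):
    """Estimate appliance cost from the resource envelope of the chosen VNFs."""
    cores = sum(VNF_FOOTPRINTS[v][0] for v in vnfs)
    ram_gb = sum(VNF_FOOTPRINTS[v][1] for v in vnfs)
    return cores * COST_PER_CORE + ram_gb * COST_PER_GB

print(ucpe_cost(["vrouter", "vfirewall", "vwanopt", "vprobe"]))  # 290: full set
print(ucpe_cost(["vrouter", "vfirewall"]))                       # 120: trimmed set
```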

The Operators Will Realise that SD-WAN is Actually a Marketing Problem

SD-WAN puts choice in the hands of enterprises and does so at reduced cost with automation removing complexity, a winning combination that is taking business away from many Telcos. So far, the Telcos have looked at this principally as a technology problem: how to build self-select portals to introduce choice for their customers, how to automate their back-end processes and how to co-opt SD-WAN technology without vendor lock-in.

See full post

Why the time to move to NETCONF/YANG is now


With backing from major NFV/SDN projects and industry organizations, NETCONF, the client/server protocol designed to configure network devices more clearly and effectively, will soon become ubiquitous. Operators who migrate to NETCONF can both future-proof their operations for NFV and reap some of the short-term benefits of automation today.

Why NETCONF/YANG?

NETCONF was developed to enable more efficient and effective network management by addressing the shortcomings of existing approaches like the Simple Network Management Protocol (SNMP) and the command-line interface (CLI). SNMP has long been cast off by operators and CSPs as hopelessly complicated and difficult to decipher. CLI may be readable by a network engineer, but it is prone to human error and can lead to vendor lock-in, since proprietary implementations often mean that only one vendor’s element management system can manage its network elements.

NETCONF, on the other hand, is designed specifically with programmability in mind, making it perfect for an automated, software-based environment. It enables a range of functions to be delivered automatically in the network, while maintaining flexibility and vendor independence (by removing the network’s dependence on device-specific CLI scripts). NETCONF-based management is not only human-readable; it also supports operations like transaction-based provisioning, querying, editing and deletion of configuration data.

YANG is the data modelling language that makes NETCONF useful, describing the device configuration and state information that is transported via the NETCONF protocol. The configuration is plain text and human-readable, plus it’s easy to copy, paste and compare between devices and services. Together, NETCONF and YANG can deliver a thus-far elusive mix of predictability and automation.
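By way of illustration, here is a minimal NETCONF session sketch using ncclient, an open-source Python NETCONF client. The hostname, credentials and payload are hypothetical, and the device is assumed to expose the standard ietf-interfaces YANG model and a candidate datastore:

```python
# Minimal NETCONF session using the open-source Python 'ncclient' library.
# Host, credentials and payload are illustrative only.
from ncclient import manager

# A small edit expressed against the standard ietf-interfaces YANG model.
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>eth0</name>
      <description>uplink to PE router</description>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(host="cpe.example.net", port=830,
                     username="admin", password="secret",
                     hostkey_verify=False) as m:
    # Query: the reply is human-readable, YANG-modelled XML.
    print(m.get_config(source="running"))
    # Transaction-based edit: stage on the candidate datastore, then commit,
    # so the change is applied atomically (assumes :candidate support).
    m.edit_config(target="candidate", config=CONFIG)
    m.commit()
```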

NFV needs NETCONF

What’s even more powerful is that NETCONF and YANG combined offer the flexibility needed to manage both virtual and physical devices. This means that operators can get going with NETCONF now, before they start ripping out old devices and replacing them with white boxes. This is a necessary investment in the future; as we look forward to a virtualized networking environment, where network functions are spun up and changed continuously, the high level of automation enabled by NETCONF/YANG is not just preferable, it is essential.

See full post

Dissonance in the World of Virtualization


A collection of thoughts has been spinning in my head over the last few weeks, based on various customer visits, presentations and panel debates I have seen or participated in recently, and they have collectively formed into a common notion: that of dissonance. These reflections have come about as a result of a number of unresolved issues that are either being ignored or are falling between the cracks as we all grapple with the complexities of introducing virtualization.

I’ll start with MANO. Many of us know that there is competition between Open Source MANO, sponsored principally by Telefonica’s I+D entity (its research and development arm), and ONAP (Open Network Automation Platform), which combines the outputs of AT&T’s homegrown MANO system, ECOMP, and Open-O (the Open Orchestrator Project) into a common open-source project. Setting aside the architectural differences between these two major MANO initiatives and their respective levels of industry take-up, what’s increasingly obvious is that both are horrendously complex and, quite frankly, beyond the means of most operators to implement. Open-source only saves the cost of acquisition, not of integration and deployment. That’s why many operators, both large and small, are sitting on the sidelines waiting for a viable solution to present itself. This might well arrive in the form of MANO-as-a-service, which is already attracting the attention of business managers at professional services outfits and even venture-capital funding – which reminds me that Sun Tzu said in The Art of War: “In the midst of chaos, there is also opportunity”.

Another dissonance that strikes me is that between the CTO offices in the vanguard of introducing virtualization and their own business managers. It’s not just that virtualization is proving horribly expensive to introduce, making it difficult to convincingly flesh out an ROI spreadsheet. Rather, the technical people have become so absorbed by the scope and scale of the technical challenges in front of them that they have collectively lost sight of the end goal: how to market virtualized network services and deliver benefits to their end-customers. A recent big theme at conferences has been the challenges involved in on-boarding new VNFs, over 100 in some cases. My questions, though, are: who needs all this choice; how can customers without big IT budgets select the right ones for them; what is the benefit for these end-users compared with what they are doing today (see my recent blog entitled ‘Hello from the Other Side: The Customer Benefits of NFV’); and indeed what’s in it for the operators – how are they going to make money when it’s not at all clear which VNFs they can effectively sell as managed services to the volume part of the enterprise market, i.e. where they make money?

There is also increasing dissonance on the vendor side, and recently several vendors have openly voiced their frustrations on this point. Virtualizing products requires investment, and until NFV starts moving out of the labs into volume service deployments, we are all investing money in the hope of generating payback at some uncertain point in the future.

The other huge dissonance is that all the investment in next-generation virtualization efforts has eaten up most of the IT resources for introducing new services and platforms for the operators’ mainstream existing business. The number of service launches delayed at operators due to ‘IT budget issues’ or ‘lack of available personnel’ is now an industry-wide phenomenon, delaying cost-reduction and business-generating service initiatives in the market. This is ironic, as virtualization is meant to accelerate service delivery, not stall it.

See full post

Hello from the Other Side: The Customer Benefits of NFV


Operators should start their NFV charm offensive now says Pravin Mirchandani, CMO, EKINOPS.

Surprisingly few of today’s conversations about NFV address how a virtualized infrastructure will benefit end-user businesses. If operators and CSPs want to bring their customers across without resistance, however, they must also consider how to position the benefits of NFV, particularly since some may view its new technologies as an unsolicited risk.

Flexible provisioning is the big persuader. Most businesses experience peak traffic at infrequent but predictable times. Despite only needing this level of service occasionally, however, they have had no option but to sign contracts that deliver this maximum capacity all the time. Operators haven’t had the flexibility to deliver anything else. Virtualization can change all of this.

This is big news because firms of all shapes and sizes are caught in this trap. Pizza companies do 90% of their business on Friday and Saturday nights, yet provision for these peaks seven days a week. This is a huge overhead, particularly for a national networked outfit like Domino’s. Other businesses, particularly retail e-commerce sites, are governed by seasonality. The spikes in online sales triggered by Black Friday and the January sales are well documented. Hotels beef up their network capacity for four intensive months of high occupancy but, under today’s model, must provision equally for the quieter eight.

Virtualization can address this need. A software-defined infrastructure will give customers access to a portal through which they can self-select the services they need and specify when they need them. The programmatic benefits of virtualization enable operators to spin them up automatically. Compare that to the weeks of notice that an operator currently needs to install a new line and the end-user benefits come into stark focus.

See full post

The Song of the Sirens: Five ways to spot hidden NFV vendor lock-in


One of the big attractions of NFV is that it gives operators a chance to break free of single vendor contracts and establish greater control over the future development of their networks.

Genuinely ‘open NFV’ gives operators the ability to change tack according to technical and commercial developments. It enables them to shop around for best of breed solutions and blend a mixture of, say, migration-oriented hybrid solutions with white or grey-box CPEs and connect them all to their choice of orchestrator. It also dramatically improves their negotiating power.

Yet, despite appearances, few NFV vendors practice ‘genuinely open NFV’ and instead disguise how they intend to close and lock the front door once their customer has stepped inside.

There are five common traps that vendors set for operators as they entice them toward ‘open NFV’ contracts:

#1 Charging for third-party connections

See full post

Diverse Reality - A View from the MWC 2017 Walkway


MWC is the biggest Telecoms event in the world and most of us were expecting the big theme this year to be 5G. Whilst 5G was present on many stands, it was upstaged by both a faster type of 4G and a phone from the past. The other takeaway from Barcelona’s mega tradefest was the huge diversity of people, exhibitors and themes on show. ‘Mobility’ may have been a connecting theme but only in the broadest sense.

Here are my impressions:

Back to the Past

Samsung executives looked on aghast as Nokia, the European has-been of mobile phones, stole the show with its latest – or should I say throwback – phone: the 3310. With 22 hours of talk-time, up to a month’s standby time and even a made-over Snake game, those of us who carry two or more portable battery chargers to get through the day couldn’t resist a nostalgic smile.

Gigabit LTE versus 5G

5G was on show on numerous devices, but plastered over the billboards was the promise and reality of Gigabit LTE. Based on 4G technology and 4x4 MIMO, Gigabit LTE is here – or at least it is in Australia, we are told, with other operators soon to launch. Expect only about 60Mb/s of real-life performance though.

See full post

US Telcos Move Ahead on NFV as Europe Watches: 2017 Predictions


While huge progress has been made this year toward the development of virtualized services, a variety of challenges remain before widespread service roll-out can commence. 2017 will be the year that the industry takes positive steps to overcome these barriers, says Pravin Mirchandani, CMO at OneAccess Networks, who shares his five key predictions for the year ahead.

#1 Europe will stay on the sidelines as USA and Japan push on

We will see virtualized services start to roll out, principally in the USA, and to a lesser extent in Japan through NTT. While everyone remains sure that the way forward is virtualization, the industry is still figuring out how to solve the multiple technical, commercial and organizational challenges posed by migration.

Operators will be closely watching AT&T and its Domain 2.0 program, and keeping an eye on Verizon too, in a bid to learn lessons about how to implement NFV. Europe’s hard-pressed operators, in particular, will mostly stay parked in ‘watch and learn’ mode, continuing with RFx processes, proofs of concept and trials. In fact, we’re unlikely to see any virtualized services roll out across Europe in 2017. Compelling business cases are harder to assemble on the European continent and, until these are squared away, operators will prefer to observe how US and Japanese trail-blazers facilitate service migration and preserve their choices – both major factors driving current and near-term investment decisions.

#2 NFV’s ROI equation will need to be cracked

See full post

Pushing through Glass Ceilings at the SDN World Congress 2016


Live from the SDN World Congress Show in The Hague, Pravin Mirchandani, CMO, EKINOPS, reflects on the industry challenges steering the dialogue at this year’s conference.

In Gartner’s hype cycle, there is an inevitable period of disillusionment that follows the initial excitement of a new technology. At the SDN World Congress this feels different: although we have probably passed the peak of inflated expectations, there is less a trough of disillusionment than a set of major impediments that need to be cleared away in order to achieve the nirvana of SDN/NFV. Most actors can see what needs to be done and are steadfastly supporting the initial objectives, but my impression is that breaking through to achieve the goals of network virtualization is like pushing through the famous glass ceiling. Though not created by prejudice, as in the traditional definition of a glass ceiling, the barriers are real and there are many.

Glass Ceiling 1: Complexity

One of the goals of software-defined networking is to reduce dependence on an army of network experts, who are difficult to recruit and retain, expensive to hire and prone to error. What’s clear is that what they do is indeed complex; and converting their expertise and processes into automated software processes and APIs is equally if not more complex, as there is a distinct lack of established practices and field-proven code to draw upon. Many of the speakers at SDN World Congress mentioned the issue of complexity and it was a constant theme in the corridor discussions. Laurent Herr, VP of OSS at Orange Business Services, stated that Orange estimated it would take 20,000 man-days to convert its tens of IT systems to achieve virtualization.

Glass Ceiling 2: Culture

Another common theme was the issue of culture. Telcos have been organised to deliver the ‘procure-design-integrate-deploy’ cycle for new services and have a well-established set of linear processes and organizational silos to achieve it. Introducing virtualized services, however, requires a DevOps culture based on agility, fast failing (anathema to the internal cultures of Telcos) and the rapid assembly of multi-skilled teams (especially collaboration between network and IT experts) to deliver new outcomes frequently, fast and reliably. Achieving a DevOps culture was one of the challenges most frequently cited by the Telco speakers at the Congress. Another common word they used was transformation.

Glass Ceiling 3: Lack of Expertise

It’s difficult to estimate the number of engineers who really understand the principles and practices of virtualization, but they probably number in the low hundreds across the globe. Given the ability of the vendors to pay better salaries, it’s a safe bet that the majority work for them rather than for the Telcos. Growing this number is difficult, as it requires combining IT, programming and network skills. Creating collaborative teams helps, but finding or training people to achieve mastery of the different skills is a challenge for the whole industry. This was more of a corridor conversation than something openly cited by the speakers, but it is a glass ceiling nevertheless.

See full post

Fifty Shades of NFV?


In the racy world of CPE architecture, what virtualization-hungry service providers say they want isn’t always what they need, says Pravin Mirchandani, CMO, EKINOPS.

Alright, perhaps ‘racy’ is going a bit far, but as the virtualization industry moves out of ‘does it work’ and into ‘let’s make it happen’, pulses are certainly starting to quicken. Not least because service providers are having to make tough calls about how to architect their management and orchestration (MANO). Many of these decisions revolve around the deployment of virtualized network functions (VNFs), via some form of customer premises equipment (CPE).

Several ‘shades’ are emerging, each with their advantages and drawbacks.

The ‘NETCONF-enabled CPE’ model emulates what we have today: a fixed number of physical network functions (note: not virtual) are embedded into a traditional L3 multi-service access router. The key difference here is that the router, as its name suggests, supports the NETCONF management protocol and can, as a result, be managed in a virtualized environment. In truth, this is a pretty rudimentary form of virtualization; the router can be managed by a next-generation OSS with NETCONF and its embedded physical functions can be turned on and off remotely, but that’s about it. The device is not reprogrammable, nor can its network functions be removed or replaced with alternatives.

The market for this deployment model lies in two use-cases. The first is as a bridging solution, enabling service providers to operate traditional and virtualized network services side by side and so facilitating migration. The second stems from cost: many of today’s VNFs are heavy and need considerable amounts of memory and processing resources in order to operate, making the more flexible white-box alternatives costly in comparison. Specialist vendors like OneAccess have been developing dedicated CPE appliances (with embedded PNFs) for years, where compact and efficient code has always been a design goal in order to keep appliance costs under control. For more conservative operators that are keen to get ‘in the game’, the proven reliability and comparative cost efficiency of this model can offset its relatively limited flexibility. Rome wasn’t built in a day, and some operators will prefer to nail the centralized management and orchestration piece before investing heavily in pure-play virtualization appliances for the network’s edge.

A purer approach is to invest in a ‘thick branch CPE’ or, in other words, an x86-based white-box solution running Linux, onto which VNF packages can be pre-loaded and, in the future, removed and replaced or even selected by customers via, say, a web portal. This approach delivers far greater flexibility and is truer to the original promise of NFV, in which the network’s functions and components can be dismantled and recomposed in order to adjust a service offer. The snag, however, is that white-box CPEs come at a cost. More memory and more processing power mean more cash. That’s why the race is on to develop compact VNFs that minimize processing requirements and, as a result, enable a limited-spec white-box to do more, with less. Again, unsurprisingly, those ahead of the curve are VNF vendors that have the experience of wringing every last drop of performance out of compact and cost-efficient appliances, purpose-designed for operators and service providers.
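As a sketch of the lifecycle this model enables, here is a minimal example using the Python Docker SDK. The image names and resource caps are hypothetical, and production uCPE stacks may run VNFs as virtual machines rather than containers:

```python
# Sketch: lifecycle of a containerized VNF on a Linux white-box CPE.
# Image names and limits are illustrative only.
import docker

client = docker.from_env()

def start_vnf(name, image, cpus=1.0, mem="512m"):
    """Run a VNF container with a capped resource footprint."""
    return client.containers.run(
        image,
        name=name,
        detach=True,
        network_mode="host",        # the VNF works on the host's real NICs
        nano_cpus=int(cpus * 1e9),  # CPU cap: one driver of white-box cost
        mem_limit=mem,              # memory cap: the other key cost driver
    )

def replace_vnf(name, new_image, **limits):
    """Remove a deployed VNF and swap in an alternative or newer version."""
    old = client.containers.get(name)
    old.stop()
    old.remove()
    return start_vnf(name, new_image, **limits)

start_vnf("vfirewall", "example/vfirewall:1.0")    # pre-loaded at staging
replace_vnf("vfirewall", "example/vfirewall:2.0")  # swapped in later, remotely
```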

See full post

Thick & Thin: A Taxonomy of CPEs


In presentations at virtualization conferences, and in our discussions with operators and service providers, there remains a lot of confusion surrounding the terms ‘thick’ and ‘thin’ as they relate to customer premises equipment (CPE). This is because the terms are used interchangeably, to describe different market segments, the density of network functions as well as the nature of the CPE itself.

The roots of ‘thick’ and ‘thin’ lie in the term ‘thin client’: a popular reference to a lightweight computer or terminal that depends heavily on a server or server farm to deliver data processing and application support. This contrasts with the PC, which performs these roles independently and was somewhat disparagingly referred to as a ‘fat client’ or, more neutrally, as a ‘thick client’.

This heritage is important as we look to provide a taxonomy of CPEs, which will hopefully aid our understanding of their respective roles in the delivery of virtualized network services.

Generically, CPE or ‘customer premises equipment’ refers to the equipment provided by a service provider and installed at its customers’ premises. Historically, CPE referred mainly to telephony equipment, but today the term encompasses a whole range of operator-supplied equipment including routers, switches, voice gateways and set-top boxes, as well as home networking adapters.

Thick CPE refers typically to a router or switch that provides network functions at the customer premises. There are now three main types:

See full post

Ultra-compact VNFs are the key to cost-effective NFV


OneAccess’ CMO, Pravin Mirchandani, argues that only the most efficient VNFs will pass the scrutiny of the operator community; the risks and costs of accepting anything different are too high.

The arrival of the software-defined networking era has been enthusiastically welcomed by all, as it promises both infrastructure and services that can flex and grow in line with operators’ changing requirements. The question – how can we transition to an SDN/NFV-based infrastructure as quickly as possible? – is occupying industry minds, service providers and vendors alike. Anyone with lingering doubts need only consider the recent M&A moves by the industry big guns, Juniper and Cisco, who are feverishly trying to reinvent themselves as born-again software businesses.

Virtualized network functions (VNFs), delivered over the NFV infrastructure (NFVI), promise to minimize the operator investments needed to customize future services in line with their operational needs. But, at the moment, the jury is still out on what the killer VNFs are going to be. This question raises new concerns: what VNFs should operators plan for when specifying their white box? How will demand for resources play out over the business cycle? Here carriers are facing some tough decisions; ones that may ultimately determine their ability to compete in a crowded sector. An over-specified white box will waste huge amounts of money, and NFV migration is already proving much more costly than first thought. Far worse, though, is the prospect of under-specification, which would result in a virtualized environment that simply isn’t fit for purpose.

The dilemma for the operators can’t be taken lightly. If they deploy basic bare-metal units, the risk is lost revenue when customers, who cannot upgrade when needed, move to an alternative supplier. Most likely, a middle ground will be reached, and attention will refocus on the familiar question of how to get more for less. Those who thought this question might evaporate as the network goes software-centric should prepare for disappointment. Operators will be exerting great pressure on VNF developers to do just this by creating ultra-compact and efficient software functions, not least so that their choice of white-box stands the best chance of coping with as-yet-unknown future demands.

There are many vendors aiming to position themselves in this space which, it seems, is where the long-term revenue opportunity exists. But if they want to deploy a full catalog of VNFs including functions such as WAN optimization, vCPE, VPN and encryption, carriers need to be conscious that many developers hail from an enterprise background, in which their solutions have operated on dedicated appliances drawing on uncontested computing power. VNF development is a different ballgame altogether – so it will be interesting to see how these modules perform when they are scaled down to share the resources of a single white box.

See full post

MP-TCP link bonding protocol offers declining MPLS a much-needed lifeline


Recent work on a new approach to Hybrid Access emerging from one of the industry standards bodies is likely to be music to the ears of the major carriers, who have seen a steady erosion of their market share by the more agile Internet service providers.

Despite the many benefits that an MPLS-based VPN connection can offer businesses, particularly in terms of security and SLA guarantees, the major carriers have struggled to prevent customers opting to move some or all of their WAN architecture onto a low-cost, high-speed Internet link and VPN as soon as contracts allow.

To some extent this trend is an understandable consequence of the growth in the uptake of Cloud applications as the basis of corporate communications. Most network managers agree that MPLS is not ideally suited to handling large volumes of traffic, which has led to network congestion and performance headaches for IT teams, for which just increasing bandwidth is not necessarily the solution. In addition, there are still lots of businesses that choose a hybrid approach to hosting applications, with some in the corporate data center and others in the Cloud. This means that moving to an all-Internet infrastructure is not going to solve the problem in all cases either.

The obvious answer is to opt for some form of hybrid MPLS/Internet access ecosystem that can provide granular control of all traffic across the WAN and ensure that load can be spread across multiple links based on a range of business priorities and policies. While this approach is great in theory, the reality can mean high capex investment in load balancing for a sub-optimal solution.

Application-aware, policy-based traffic distribution means any given application/session is restricted to a single link’s bandwidth, which leads to the expensive MPLS link being under-utilized without adding any link-failure resilience. With cost-saving also a major factor for businesses looking for an alternative to MPLS, it is easy to see why some companies decide to go for the pure-play Internet-based option, even at the cost of losing the reputed SLA and security benefits offered by MPLS.

However, recent exciting developments emerging from the Broadband Forum promise to enable advanced hybrid access functionality to be embedded in the CPE, which is great news for carriers looking to offer customers the type of services they need while retaining their private corporate VPN at an acceptable price-point.
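To make the per-session limitation concrete, here is a small illustrative Python sketch (link capacities are hypothetical) contrasting policy-based distribution, which pins each session to a single link, with MP-TCP-style bonding, where subflows let one session use every link at once:

```python
# Illustrative contrast: per-session load sharing vs. MP-TCP-style bonding.
# Link capacities (Mb/s) are hypothetical.

LINKS = {"mpls": 10, "internet": 50}

def policy_based_ceiling(five_tuple):
    """Application-aware distribution: a hash pins the whole session to one
    link, so its throughput can never exceed that single link's bandwidth."""
    names = sorted(LINKS)
    chosen = names[hash(five_tuple) % len(names)]
    return chosen, LINKS[chosen]

def bonded_ceiling():
    """MP-TCP bonding: one TCP session runs subflows over every link, so its
    ceiling approaches the sum of all capacities."""
    return sum(LINKS.values())

session = ("10.0.0.5", 51022, "192.0.2.9", 443, "tcp")
print(policy_based_ceiling(session))  # ('internet', 50) or ('mpls', 10)
print(bonded_ceiling())               # 60 -- both links usable at once
```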

See full post

A Gig Ticket: A Chance for Operators to Grow the Market for 1Gbps L3 Services


Recent innovations in customer premises equipment (CPE) mean that operators can now bring 1Gbps Layer 3 connectivity to a much bigger market. Just in time, too, explains Pravin Mirchandani, CMO, OneAccess Networks.

Industry dialogue about ‘the race to 1Gbps’ has, until now, largely focused on the challenge of laying fiber and how operators might backhaul via ‘dark fiber’ laid in the dotcom boom.

Huge strides have been made. In the US, the ultra-fast networking university collective Gig.U. revealed last year that ‘scores of American communities are now deeply engaged in deploying ultra-fast networks’. And it’s no secret that forward-thinking players like Google and AT&T are intent on hooking up America’s major cities to fiber networks. Across Europe, challenged by terrain, borders and a fragmented marketplace, all-fiber connectivity has been harder to achieve but, as in the US, fiber-to-the-premises rollouts are well underway in most major cities.

It’s a good job, too. As the world’s businesses continue to migrate into the Cloud, the global market’s appetite for 1Gbps Layer 3 connectivity is growing, fast. Business adoption of increasingly bandwidth-hungry cloud apps and services is driving up speed requirements and putting pressure on operators to democratize 1Gbps connectivity by offering service contracts to the masses of distributed enterprises and SMBs at price points they can afford.

In this effort, operators have faced an equipment challenge. Cost effective 1Gbps in Carrier Ethernet has been around for some time but, until now, application-oriented ‘Layer 3’ 1Gbps connectivity has remained exclusive to the enterprise HQ. This is largely because the customer premises equipment (CPE) capable of delivering 1Gbps Layer 3 services has been ill-suited to mass deployment by operators. Having been designed for the Enterprise HQ, it is disproportionately expensive, big, cumbersome to deploy and laden with ports and features that operators simply don’t need. As a result, ultra-fast connectivity ‘for the masses’ has been neither economically nor operationally viable.

See full post

Managing the machines: How operators can get ahead in M2M


The deployment of Machine to Machine (M2M) initiatives is generating new revenue opportunities for operators and communication service providers (CSPs). Pravin Mirchandani, CMO, OneAccess Networks, explains how innovative traffic management services, delivered via customer premises-based equipment, or CPE, can help them capitalize on these opportunities.

Vodafone’s third annual M2M Barometer survey has confirmed that businesses are embracing M2M technologies faster than ever before. Over a quarter (27 per cent) of all companies worldwide are now using connected technology to develop and grow their businesses. In particular, the retail, healthcare, utilities and automotive industries are all moving to maximize M2M’s potential. The returns are substantial: 59 per cent of early adopters reported a significant ROI on their M2M investment.

Despite the market buzz, many operators and CSPs are yet to zero in on the most profitable and operationally efficient way to support this new wave of industrialized connectivity. Not least because the range of possible M2M use cases is vast. The diversity of devices being connected, their whereabouts, the conditions in which they operate and the amount of data they produce all impact on the CSP’s choice of supporting network equipment. One key commonality, however, is that all deployments require a connectivity infrastructure capable of aggregating, securing and backhauling M2M data in a cost-effective, fast and reliable manner.

As the number of connected devices skyrockets, the ability to offer a range of traffic management services will be a clincher for operators and CSPs looking to gain a foothold in this market and differentiate their offerings. The good news is that many of these can now be delivered via the CPE, without the need for additional devices. Establishing always-on connectivity is of course vital, but the ability to provide a robust business continuity failover to LTE could also prove attractive to customers for whom any amount of network downtime is harmful, no matter how small. Network monitoring and dynamic traffic routing software managed via the CPE can also be used to support traffic throughput at peak load times.

Before the M2M market can reach true maturity, however, fears relating to data protection and security must be assuaged. Given the limited processing power of M2M’s connecting sensors – which are incapable of performing heavy duty computational functions such as encryption – the opportunity here is in the hands of CSPs and, again, the CPE can help.

See full post

Is integrating SBC functionality into the router the way forward for accelerating the availability of SIP trunking services?


Although many businesses have deployed IP-based PBX systems to handle their corporate telephony needs, so far relatively few have taken the next step to a full SIP trunking service, particularly in Europe.

To some extent this can be explained by the “if it isn’t broken, why fix it” approach, but it actually has more to do with service providers looking to leverage maximum return from their substantial infrastructure investments and disincentivizing customers who may want to transition from their still-lucrative ISDN connections.

However, many companies are now re-assessing the merits of integrating their video, data and voice requirements in a unified communications (UC) package, opening up new opportunities as well as challenges for service providers. With many of these enterprises sensibly opting for a phased transition to an all-IP UC platform to avoid potentially costly business disruption, CSPs are faced with connecting SIP trunks into an array of IP and legacy PBX and mixed PSTN/IP voice environments.

For Telcos and service providers this means facing an array of non-standard SIP trunk implementations, involving a mix of old and new technologies, that can result in increased operational expenditure combined with reduced revenue potential. Given this double-whammy effect, it is understandable why they tend to be less than enthusiastic about actively promoting an end-to-end IP telephony platform. For that reason, TDM trunks have remained the preferred demarcation line for service providers, even though TDM is converted to VoIP within their network.

However, there now seems to be increasing momentum and growing market demand for SIP trunking services from enterprises. A recent Infonetics(1) research report forecasts growth in the business adoption of UC and VoIP services to reach $35bn by 2018. The same report showed a massive 50% increase in SIP trunking in the US in 2013, with similar growth in EMEA expected to follow in 2014 and beyond.

See full post

Thinking ahead: Eradicating downtime needs strategic vision as well as technology


Organizations focused on delivering network connectivity services, from big national telcos to small, regional service providers, are under pressure to reduce network downtime. A recent report from IHS Research(1) highlighted that US organizations are losing as much as $100m per year to the problem. It seems to be a similar story in Europe, too, where network outages are estimated to be costing companies an average of €75.5k per year(2).

Meanwhile, business technologies are evolving at a blistering pace, raising the stakes even further. The steady march into the Cloud, together with the rise of enterprise mobility, is increasing network traffic and deepening the enterprise’s dependency on the network. Looking not so far ahead, the surge of unmanned connections from machine-to-machine (M2M) initiatives is set to compound matters. Against this backdrop, communication service providers (CSPs) are making it their mission to future-proof their networks so they can maintain network stability, speed and security as their customers’ demands intensify.

Organizations that manage multiple branches across dispersed geographic locations, like hotel chains, petrol stations and retailers, have multifaceted dependencies. Here, network performance outages can halt the business in its tracks, severing the link through which customers engage, card payments are verified and the supply chain is managed.

One such organization demonstrating ‘best practice’ in network management is Tokheim, a global managed service provider specializing in the retail oil and gas industry, whose purpose-built international network connects 5000 petrol service station customers worldwide. The success of Tokheim’s business rests on the quality of its network.

In 2013, with the future in mind, Tokheim set about evolving its business-critical network to ensure that it could support the fast-growing, always-on transaction environment required by its network of branches. To meet performance requirements, Tokheim centralized the application infrastructure management functions needed to monitor the variety of networked point-of-sale (POS) devices deployed on its customers’ forecourts. This upgrade delivers a faster, more reliable payment experience to its customers, supporting its efforts to increase market share. Importantly, the organization also delivered PCI-DSS compliant POS connectivity for its real-time transaction processing and, to minimise the risk of network downtime, integrated a number of backup options including 3G and secure VPN remote access to networked locations worldwide.

See full post

Tomorrow’s CPE: the Wimbledon of network virtualization?


Despite the industry’s charge toward network virtualization, the need for customers to connect their routers to non-Ethernet legacy connections is not going away. Couple this with the fact that a bunch of emerging network functions require an on-prem appliance, and the virtualized ‘CPE of the future’ starts to feel, well, really rather physical. So, is the CPE the Wimbledon of the network; ever-present, resistant to change, but perhaps also capable of surprising us all with its innovations?

Take Wimbledon’s white dress code, for example: a deeply entrenched tradition that has become a defining characteristic of the tournament. In recent years, however, the dress discipline has been partially relaxed. Today, the tournament accommodates at least some expressions of color. Similarly, the majority of CPE appliances that today deliver network connectivity and voice gateway functions are specialized devices, and will stoically remain so for the next few years. It’s just too expensive to do otherwise, until fiber, with G.fast as a short-haul copper Ethernet extension, becomes ubiquitous and all voice terminals are IP-based. Out of necessity, therefore, incumbent local exchange carriers (ILECs) will have little option but to support this CPE model. In other words, it looks like the traditionalists, both at the tennis and on the network, can rest easy. For now, at least.

But pressure to change is mounting. Competitive local exchange carriers (CLECs), together with alternative network operators, are more agile and, since they can target Ethernet-only network connections, can move more quickly to a vCPE approach. That said, some network functions will need to remain ‘on premise’, namely link management, service demarcation and service assurance. The network functions that can migrate to the virtualized center will do so over time. In our Wimbledon analogy, this equates to another tournament altogether, played on a far more contemporary surface than Wimbledon’s time-honoured grass. Competition indeed for the ‘historic home of tennis’.

The need for some functions to remain on premise means that the CPE will increasingly comprise hybrid devices – ones that support both traditional network functions and those located in a centralized and virtualized core. Incidentally, this won’t be just a single data center, but rather a set of distributed virtualized centers located within the network infrastructure (most likely at POPs) to mitigate traffic tromboning.

The huge IT challenge of accommodating virtualized delivery of services means that the CPE will also need to become a multi-tongued device able to speak next-generation protocols – NETCONF, OpenFlow – as well as traditional CLI, TR-069 and SNMP. It seems inevitable that, after holding out for as long as they can, traditionalists both at Wimbledon and in the CPE will be forced to accept some variations, but only within ‘proper’ limits of course!

See full post

Latest News

  • EKINOPS Opens New US Office

EKINOPS (Euronext Paris – FR0011466069 – EKI), a leading supplier of optical transport equipment and router solutions for service providers and telecom operators, today announces the opening of its new North American headquarters in Rockville, Maryland, USA.

     
  • EKINOPS and Passman meet demands of “bandwidth hungry” hospitality customers

EKINOPS (Euronext Paris – FR0011466069 – EKI), a leading provider of open and fully interoperable Layer 1, 2 and 3 network solutions, and Passman, the international hospitality digital services specialist, today announced the availability of routers enabling true 1Gbps services, which will further enhance quality of service (QoS) delivery for Wi-Fi guest access services.

     
  • EKINOPS Centralizes Metro Ethernet SLA Monitoring & Service Activation for CSPs with new 10G Access Device

    EKINOPS (Euronext Paris - FR0011466069 – EKI), a leading global supplier of telecommunications solutions for operators, today unveils a OneAccess 10G Ethernet Access Device (EAD) that will enable operators and communication service providers to offer high-speed Ethernet services to Enterprise and wholesale customers.

     
