WELCOME TO

EKINOPS BLOG

Why the time to move to NETCONF/YANG is now

With backing from major NFV/SDN projects and industry organizations, NETCONF, the client/server protocol designed to configure network devices more clearly and effectively, will soon become ubiquitous. Operators who migrate to NETCONF can future-proof their operations for NFV while also reaping some of the short-term benefits of automation today.

Why NETCONF/YANG?

By addressing the shortcomings of existing network configuration protocols like Simple Network Management Protocol (SNMP) and Command Line Interface (CLI), NETCONF was developed to enable more efficient and effective network management. SNMP has long been cast off by operators and CSPs as hopelessly complicated and difficult to decipher. CLI may be readable by a network engineer, but it is prone to human error and can lead to vendor lock-in, since proprietary implementations often mean that only one vendor’s element management system can manage its network elements.

NETCONF, on the other hand, is designed specifically with programmability in mind, making it perfect for an automated, software-based environment. It enables a range of functions to be delivered automatically in the network, while maintaining flexibility and vendor independence (by removing the network’s dependence on device-specific CLI scripts). NETCONF also offers network management that is not only human readable, but also supports operations like transaction-based provisioning, querying, editing and deletion of configuration data.

YANG is the telecom-specific modelling language that makes NETCONF useful, by describing device configuration and state information that can be transported via the NETCONF protocol. The configuration is plain text and human-readable, plus it’s easy to copy and paste and compare between devices and services. Together, NETCONF and YANG can deliver a thus far elusive mix of predictability and automation.
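As an illustration of that mix of readability and programmability, here is a minimal Python sketch (standard library only) of what a NETCONF `<edit-config>` payload might look like for a hypothetical YANG-modeled interface. The `<interface>`, `<name>` and `<mtu>` element names are invented for illustration and are not taken from any standard or vendor YANG module:

```python
# Sketch: build a NETCONF <edit-config> payload targeting the candidate
# datastore for a hypothetical YANG-modeled interface.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_edit_config(if_name: str, mtu: int) -> str:
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}candidate")  # transaction-friendly datastore
    config = ET.SubElement(edit, f"{{{NC}}}config")
    iface = ET.SubElement(config, "interface")   # hypothetical model element
    ET.SubElement(iface, "name").text = if_name
    ET.SubElement(iface, "mtu").text = str(mtu)
    return ET.tostring(rpc, encoding="unicode")

payload = build_edit_config("ge-0/0/1", 1500)
```

A real client would send this over SSH (typically to port 830) and then commit the candidate datastore; the point here is the structured, transaction-oriented shape of the operation, which is what makes it automatable.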

NFV needs NETCONF

What’s even more powerful is that NETCONF and YANG combined offer the flexibility needed to manage both virtual and physical devices. This means that operators can get going with NETCONF now, before they start ripping out old devices and replacing them with white boxes. This is a necessary investment in the future; as we look forward to a virtualized networking environment, where network functions are spun up and changed continuously, the high level of automation enabled by NETCONF/YANG is not just preferable, it is essential.

See full post

Acceleration techniques for white-box CPEs

This blog will provide a quick introduction to, and comparison of, some of the available acceleration technologies common on white-box CPE, sometimes also referred to as “universal CPE” or uCPE.

Classical premises equipment has traditionally relied on specialized network processors to deliver network processing performance. Standard x86 hardware, however, was originally designed for more general-purpose compute tasks; especially when used together with a “plain vanilla” Linux implementation, it will deliver disappointing performance for data communication purposes unless expensive x86 CPUs are used. To address this concern, a number of software and hardware acceleration techniques have been introduced to meet the performance requirements imposed on today’s CPEs.

The processing context for white-box CPE is an environment that provides a small virtualized infrastructure at the customer premises, where multiple VMs (Virtual Machines) that host VNFs (Virtual Network Functions) are created and hosted in a Linux environment. Service chaining is established between the different VMs, resulting in a final customer service which can be configured and adapted through a customer portal. In this setup, VNFs and VMs need to communicate either with the outside world through a NIC (Network Interface Card) or with another VNF for service chaining.
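As a rough sketch of the service-chaining idea, the following Python toy models each VNF as a function applied in order to a packet. Real chains connect VMs over virtual switches, but the ordering and early-termination logic is analogous; the VNF names and the drop-telnet rule are invented for illustration:

```python
# Illustrative sketch: a service chain as an ordered pipeline of "VNFs".
# Each VNF is just a function transforming a packet dict.

def vnf_firewall(pkt):
    pkt["allowed"] = pkt.get("port") != 23  # toy drop-telnet rule
    return pkt

def vnf_router(pkt):
    pkt["routed"] = True
    return pkt

def run_chain(pkt, chain):
    for vnf in chain:
        pkt = vnf(pkt)
        if pkt.get("allowed") is False:
            break  # a VNF can terminate the chain early
    return pkt

result = run_chain({"port": 443}, [vnf_firewall, vnf_router])
```

The customer portal described above would, in effect, reorder or swap the entries of the `chain` list.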

DPDK (Data Plane Development Kit)
In the case of white-box CPEs, DPDK provides a framework and set of techniques to accelerate data packet processing and circumvent the bottlenecks encountered in standard Linux processing. DPDK is implemented in software and basically bypasses the Linux kernel and network stack to establish a high-speed data path for rapid packet processing. Its great advantage is to produce significant performance improvements without hardware modifications. Although DPDK was originally developed for Intel-based processor environments, it is now also available on other processors such as ARM.
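A toy Python sketch of the burst-processing idea behind DPDK’s poll-mode drivers: instead of paying a fixed per-packet cost for every kernel crossing, packets are pulled from the receive queue in bursts and the fixed cost is amortized. This is an analogy only, not DPDK code; the burst size of 32 is an illustrative value:

```python
# Toy sketch of DPDK-style burst processing: a poll-mode loop pulls
# packets in bursts instead of handling one packet per (expensive)
# kernel crossing. Numbers are illustrative only.
from collections import deque

BURST_SIZE = 32

def rx_burst(queue, max_pkts=BURST_SIZE):
    """Pull up to max_pkts packets from the RX queue (polling, no blocking)."""
    burst = []
    while queue and len(burst) < max_pkts:
        burst.append(queue.popleft())
    return burst

def poll_loop(queue):
    processed = 0
    while queue:
        for pkt in rx_burst(queue):
            processed += 1  # per-packet work would go here
    return processed

q = deque(range(100))
n = poll_loop(q)
```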

AES-NI (Advanced Encryption Standard New Instructions)

This is an extension to the x86 instruction set for Intel processors to accelerate the speed of encrypting and decrypting data packets using the AES standard. Without this instruction set, the encryption and decryption process would take a lot more time since it is a very compute-intensive task. The encryption is done at the data plane level and is used to secure data communications over Wide Area Networks.
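As a small, hedged example, AES-NI support can be checked on Linux by looking for the "aes" flag in /proc/cpuinfo; the Python sketch below does just that, and simply reports False on systems where that file does not exist:

```python
# Sketch: detect AES-NI support on Linux via the "aes" CPU flag.
# On non-Linux systems /proc/cpuinfo is absent, so this returns False.

def has_aes_ni() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            return any("aes" in line.split()
                       for line in f if line.startswith("flags"))
    except OSError:
        return False
```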

See full post

US Telcos Move Ahead on NFV as Europe Watches: 2017 Predictions

While huge progress has been made this year toward the development of virtualized services, a variety of challenges remain before widespread service roll-out can commence. 2017 will be the year that the industry takes positive steps to overcome these barriers, says Pravin Mirchandani, CMO at OneAccess Networks, who shares his five key predictions for the year ahead.

#1 Europe will stay on the sidelines as USA and Japan push on

We will see virtualized services start to roll out, principally in the USA, and to a lesser extent in Japan through NTT. While everyone remains sure that the way forward is virtualization, the industry is still figuring out how to solve the multiple technical, commercial and organizational challenges posed by migration.

Operators will be closely watching AT&T and its Domain 2.0 program and keeping an eye on Verizon too, in a bid to learn lessons about how to implement NFV. Europe’s hard-pressed operators, in particular, will mostly stay parked in ‘watch and learn’ mode, continuing with RFx, proofs of concept and trials. In fact, we’re unlikely to see any virtualized services roll out across Europe in 2017. Compelling business cases are harder to assemble on the European continent and, until these are squared away, operators will prefer to observe how US and Japanese trail-blazers facilitate service migration and preserve their choices – both major factors driving current and near-term investment decisions.

#2 NFV’s ROI equation will need to be cracked

See full post

ROI for white-box CPEs: a question of segmentation

Operators commonly expect that moving to SDN and NFV will enable cost reductions, through efficiency gains in network management and device consolidation. As they get closer to deployment, however, commercial reality is telling a different story, depending on the market segment being addressed.

One area where the ROI for white-box CPEs is easy to justify is appliance consolidation. If you can consolidate a number of proprietary appliances into a single white-box CPE then the CAPEX savings are clear. Chaining, for instance, a vRouter, a WAN optimizer and a next-generation firewall into one single x86 appliance (i.e. a white-box CPE) delivers immediately identifiable cost savings: one appliance instead of three is basically the formula and this is a commonly targeted combination. Combine this with the prospect of increased wallet share from large enterprises, which often run their networks themselves in do-it-yourself mode, and the large enterprise segment looks increasingly attractive for operators.
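The "one appliance instead of three" consolidation arithmetic can be sketched in a couple of lines; all figures below are made-up placeholders, not market prices:

```python
# Back-of-envelope sketch of the consolidation CAPEX argument:
# one white-box CPE replacing three dedicated appliances.
# All prices are hypothetical placeholders.
appliances = {"vRouter": 2000, "WAN optimizer": 3500, "NG firewall": 4000}
whitebox_capex = 3000  # hypothetical x86 white-box plus licenses

saving = sum(appliances.values()) - whitebox_capex
```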

Let’s be clear, though: this is just a large enterprise play. SMBs and multi-site deployments for government or highly distributed organizations have no need for WAN optimization and little need for a next-generation firewall; the on-board firewall that comes with their router, together with a PC-based anti-virus subscription and email antispam service, is usually sufficient. As a result, anyone working on building business cases for white-box CPEs for the volume part of the market will attest that ROI is a tough nut to crack.

The draw for this market segment is the potential to increase ARPU by making additional services easier and more flexible to adopt through automated, virtualization-based service delivery.

In terms of hardware CAPEX, the cost of white-box CPE deployment outstrips that of traditional CPEs. For the large enterprise segment which often deploys multiple appliances, this cost increase is compensated by reducing the number of appliances. For other market segments, where a single CPE is more typically deployed, savings need to come from OPEX reductions or TCO savings. The latter, however, is notoriously difficult to calculate and is usually irrelevant in a context where staff reductions are difficult to achieve, particularly in a period of technology transition.

See full post

Pushing through Glass Ceilings at the SDN World Congress 2016

Live from the SDN World Congress Show in The Hague, Pravin Mirchandani, CMO, EKINOPS, reflects on the industry challenges steering the dialogue at this year’s conference.

In Gartner’s hype cycle, there is an inevitable period of disillusionment that follows the initial excitement of a new technology. At the SDN World Congress this feels different: although we have probably passed the peak of inflated expectations, there is not so much a trough of disillusionment as a set of major impediments that need to be cleared away in order to achieve the nirvana of SDN/NFV. Most actors can see what needs to be done and are steadfastly supporting the initial objectives, but my impression is that breaking through to achieve the goals of network virtualization is like pushing through the famous glass ceiling. Though not created by prejudice, as in the traditional definition of the glass ceiling, the barriers are real and there are many.

Glass Ceiling 1: Complexity

One of the goals of software-defined networking is to reduce dependence on an army of network experts, who are difficult to recruit and retain, expensive to hire and prone to error. What’s clear is that what they do is indeed complex; and converting their expertise and processes into automated software processes and APIs is equally if not more complex, as there is a distinct lack of established practices and field-proven code to draw upon. Many of the speakers at SDN World Congress mentioned the issue of complexity and this was a constant theme in the corridor discussions. Laurent Herr, VP of OSS at Orange Business Services, stated that Orange estimated it would take 20,000 man-days to convert their tens of IT systems to achieve virtualization.

Glass Ceiling 2: Culture

Another common theme was the issue of culture. Telcos have been organised to deliver the ‘procure-design-integrate-deploy’ cycle for new services and have a well-established set of linear processes and organizational silos to achieve it. Introducing virtualized services however requires a DevOps culture based on agility, fast failing (anathema to the internal cultures of Telcos) and rapid assembly of multi-skilled teams (especially collaboration between network and IT experts) to deliver new outcomes, frequently, fast and reliably. Achieving a DevOps culture was one of the most frequently cited challenges by the Telco speakers at the Congress. Another common word they used was transformation.

Glass Ceiling 3: Lack of Expertise

It’s difficult to estimate the number of engineers that really understand the principles and practices of virtualization but they probably number in the low hundreds across the globe. Given the ability of the vendors to pay better salaries, it’s a safe bet that the majority work for them rather than for the Telcos. Growing this number is difficult as it requires combining IT, programming and network skills. Creating collaborative teams helps but finding or training people to achieve mastery of the different skills is a challenge for the whole industry. This was more of a corridor conversation rather than openly cited by the speakers but it is a glass ceiling nevertheless.

See full post

Thick & Thin: A Taxonomy of CPEs

In presentations at virtualization conferences, and in our discussions with operators and service providers, there remains a lot of confusion surrounding the terms ‘thick’ and ‘thin’ as they relate to customer premises equipment (CPE). This is because the terms are used interchangeably, to describe different market segments, the density of network functions as well as the nature of the CPE itself.

The roots of ‘thick’ and ‘thin’ come from the term ‘thin client’: a popular reference to a lightweight computer or terminal that depends heavily on a server or server farm to deliver data processing and application support. This contrasts with the PC, which performs these roles independently, and was somewhat disparagingly referred to as a ‘fat client’, or, more neutrally, as a ‘thick client’.

This heritage is important as we look to provide a taxonomy of CPEs, which will hopefully aid our understanding of their respective roles in the delivery of virtualized network services.

Generically, CPE or ‘customer premises equipment’ refers to the equipment provided by a service provider and installed at its customers’ premises. Historically, CPE referred mainly to the supply of telephony equipment, but today the term encompasses a whole range of operator-supplied equipment including routers, switches, voice gateways, set-top boxes and home networking adapters.

Thick CPE refers typically to a router or switch that provides network functions at the customer premises. There are now three main types:

See full post

The White-Box CPE: Separating Myths from Reality

In the world of customer premises equipment (CPE), the white-box is a new idea. It should then come as no surprise that misconceptions among operators and CSPs about what constitutes the white-box CPE are common. Here are four of the most prevalent.

Myth #1: The white-box CPE is just commodity hardware.

No. The software-hosting platform is the most important part! Google and Apple have succeeded with smart phones because they created killer platforms, carefully designed to provide a secure environment ready for third-party developers to exploit, which enabled their apps base to proliferate. The white-box CPE is no different. Don’t get me wrong, it is still all about using commodity hardware, but this is just the tip of the iceberg. Its enabling potential stems from the software platform that controls the hardware, not from the hardware itself.

The same goes for the network functions running on the white-box CPE. They need to be installed, chained, activated, run, provisioned, monitored, charged and, of course, done so in a secure manner with maximum efficiency and predictability.
We’re not talking about any old software here, either. This is highly specialist, instance-specific software and, unlike on smartphones, the functions are often dependent on each other. There are lots of open-source components that can carry out each of these tasks individually, but they also need to be integrated, managed with homogeneous, standardized APIs and provided with pre-packaged, documented use cases to facilitate their integration. We often see service providers adopting a DIY approach. But this only lasts until they realize the extent of work and the depth of know-how required to assemble all these pieces together. Building a demonstrator is one thing; warrantying the operational lifecycle over the lifetime of a service is another.
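The lifecycle steps listed above (install, chain, activate, run, provision, monitor) can be sketched as a simple state machine that rejects illegal transitions; the state names and transitions below are illustrative, not taken from any standard’s lifecycle model:

```python
# Sketch: a VNF lifecycle as a state machine with allowed transitions.
# States and transitions are illustrative placeholders.
TRANSITIONS = {
    "onboarded":  {"installed"},
    "installed":  {"configured"},
    "configured": {"active"},
    "active":     {"stopped"},
    "stopped":    {"active", "removed"},
}

def advance(state, target):
    """Move to target state, or raise if the transition is not allowed."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = "onboarded"
for step in ("installed", "configured", "active"):
    s = advance(s, step)
```

Encoding the legal transitions explicitly is one small part of the "warrantying the operational lifecycle" problem described above: the platform must refuse, not merely discourage, out-of-order operations.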

Myth #2: White-box CPEs are for everyone.

The whole idea of white-box CPEs is to foster the ability to take VNFs from multiple vendors and have the freedom to mix-and-match them according to the service provider’s desired functions, price and brand.
This is all good ‘on paper’. The reality, however, is different. Just like when specifying additional options on a car, the bill soon adds up. In fact, imagine being able to walk into a VW dealer and demand options from BMW, Mercedes, Honda and Ford. Welcome to the world of the white-box CPE!

Large enterprises can afford it because, right now, they are buying single-use network appliances and stacking them up at the customer’s premises. The white-box CPE’s promise of appliance consolidation is so great that the economics allow it to be expensive.

See full post

Virtualization means changing the approach to proof-of-concept projects for both Carriers and Vendors

OneAccess CTO Antoine Clerget argues that vendors need to radically re-think their approach to PoC projects as carriers begin to plan their transition to software-defined network functions.

Until quite recently, when Telcos wanted to evaluate the different vendors’ technologies needed to build out new service platforms, the process was relatively straightforward. Typically, it meant plugging a box into the lab network, running the relevant functional and performance tests and then, assuming that the results were acceptable, handing things over to the commercial and legal teams to thrash out the supply contracts. Well, perhaps a bit more complicated than that, but nevertheless a far simpler proposition than the one today’s engineers face when demonstrating or assessing NFV products.

Unlike when the CPE router came as a discrete device with a range of physical components on board, today individual network functions are de-composed into discrete elements and then stitched together in the virtualization infrastructure. In the new NFV environment, resources are to some extent shared with other (potentially third-party) VNFs and the underlying infrastructure, and VNFs run on hardware unknown to the VNF vendor. As a consequence, moving from independent elements to a complete solution, even if it sits on a single piece of hardware, requires new types of integration skills. This means that a new and different approach is needed, with a key focus on integration, particularly on how management, functional and infrastructure elements work together in an environment where there are still a lot of unknowns.

After a remarkably short period of technical debate between the various actors in the network industry, we are now seeing a definite upswing in interest in testing the claims of SDN and NFV technologies in terms of genuine PoCs. This is especially true among those Telcos that are looking to future-proof their network infrastructures to achieve their major goals of flexibility for market differentiation and programmability for reducing costs.

As an early champion of the move to a white-box/VNF approach to CPE architecture, we see this as a natural progression, building on our existing multi-functional router platforms, which already include an extensive range of software modules for specific network functions and security. At the same time, however, this has meant a total re-think on what is needed for a PoC project to be successful. With more emphasis on the need to answer questions about the interoperability of the technology in this new and highly dynamic virtualized environment, our engineering teams need to take a much more direct, hands-on involvement in the process than was previously the case.

See full post

MP-TCP link bonding protocol offers declining MPLS a much needed life-line

Recent work on a new approach to Hybrid Access emerging from one of the industry standards bodies is likely to be music to the ears of the major carriers, who have seen a steady erosion of their market share by more agile Internet service providers.

Despite the many benefits that an MPLS based VPN connection can offer businesses, particularly in terms of security and SLA guarantees, the major carriers have struggled to prevent customers opting to move some or all of their WAN architecture onto a low-cost, high-speed Internet link and VPN as soon as contracts allow.

To some extent this trend is an understandable consequence of the growth in the uptake of Cloud applications as the basis of corporate communications. Most network managers agree that MPLS is not ideally suited to handling large volumes of traffic, which has led to network congestion and performance headaches for IT teams, for which just increasing bandwidth is not necessarily the solution. In addition, there are still lots of businesses that choose a hybrid approach to hosting applications, with some in the corporate data center and others in the Cloud. This means that moving to an all-Internet infrastructure is not going to solve the problem in all cases either.

The obvious answer is to opt for some form of hybrid MPLS/Internet access ecosystem that can provide granular control of all traffic across the WAN and ensure that load can be spread across multiple links based on a range of business priorities and policies. While this approach is great in theory, the reality can mean high capex investment in load balancing for a sub-optimal solution.

Application-aware policy-based traffic distribution means any given application/session is restricted to a single link’s bandwidth, which leads to the expensive MPLS being under-utilized without adding any link failure reliability. With cost-saving also a major factor for businesses looking for an alternative to MPLS it is easy to see why some companies decide to go for the pure-play Internet-based option, even at the cost of losing the reputed SLA and security benefits offered by MPLS.
However, recent exciting developments emerging from the Broadband Forum are promising to enable advanced hybrid access functionality to be embedded in the CPE, which is great news for carriers looking to be able to offer customers the type of services they need while retaining their private corporate VPN at an acceptable price-point.
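The bandwidth difference between per-session policy routing and MPTCP-style bonding can be sketched with toy numbers; the link capacities below are illustrative, not real service tiers:

```python
# Sketch of why per-session policy routing under-uses hybrid access:
# a single flow is pinned to one link, while MPTCP-style bonding can
# stripe one connection across subflows on both links.
LINKS = {"mpls": 10, "internet": 50}  # Mbps, made-up values

def per_session_throughput(links):
    # Classic policy routing: the whole session rides one link,
    # at best the fastest one.
    return max(links.values())

def bonded_throughput(links):
    # MPTCP-style: one TCP connection split across subflows on all links.
    return sum(links.values())
```

In this toy example, bonding lifts a single session from 50 Mbps to 60 Mbps while also putting the otherwise idle MPLS link to work.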

See full post

A Gig Ticket: A Chance for Operators to Grow the Market for 1Gbps L3 Services

Recent innovations in customer premises equipment (CPE) mean that operators can now bring 1Gbps Layer 3 connectivity to a much bigger market. Just in time, too, explains Pravin Mirchandani, CMO, OneAccess Networks.

Industry dialogue about ‘the race to 1Gbps’ has, until now, largely focused on the challenge of laying fiber and how operators might backhaul via ‘dark fiber’ laid in the dotcom boom.

Huge strides have been made. In the US, ultra-fast networking university collective, Gig.U., revealed last year that ‘scores of American communities are now deeply engaged in deploying ultra-fast networks’. And it’s no secret that forward thinking players like Google and AT&T are intent on hooking up America’s major cities to fiber networks. Across Europe, challenged by terrain, borders and a fragmented marketplace, all-fiber connectivity has been harder to achieve but, like the US, fiber to the premises rollouts are well underway in most major cities.

It’s a good job, too. As the world’s businesses continue to migrate into the Cloud, the global market’s appetite for 1Gbps Layer 3 connectivity is growing, fast. Business adoption of increasingly bandwidth-hungry cloud apps and services is driving up speed requirements and putting pressure on operators to democratize 1Gbps connectivity by offering service contracts to the masses of distributed enterprises and SMBs at price points they can afford.

In this effort, operators have faced an equipment challenge. Cost effective 1Gbps in Carrier Ethernet has been around for some time but, until now, application-oriented ‘Layer 3’ 1Gbps connectivity has remained exclusive to the enterprise HQ. This is largely because the customer premises equipment (CPE) capable of delivering 1Gbps Layer 3 services has been ill-suited to mass deployment by operators. Having been designed for the Enterprise HQ, it is disproportionately expensive, big, cumbersome to deploy and laden with ports and features that operators simply don’t need. As a result, ultra-fast connectivity ‘for the masses’ has been neither economically nor operationally viable.

See full post

Managing the machines: How operators can get ahead in M2M

The deployment of Machine to Machine (M2M) initiatives is generating new revenue opportunities for operators and communication service providers (CSPs). Pravin Mirchandani, CMO, OneAccess Networks, explains how innovative traffic management services, delivered via customer premises-based equipment, or CPE, can help them capitalize on these opportunities.

Vodafone’s third annual M2M Barometer survey has confirmed that businesses are embracing M2M technologies faster than ever before. Over a quarter (27 per cent) of all companies worldwide are now using connected technology to develop and grow their businesses. In particular, the retail sector, together with the healthcare, utilities and automotive industries, is moving to maximize M2M’s potential. The returns are substantial: 59 per cent of early adopters reported a significant ROI on their M2M investment.

Despite the market buzz, many operators and CSPs are yet to zero in on the most profitable and operationally efficient way to support this new wave of industrialized connectivity. Not least because the range of possible M2M use cases is vast. The diversity of devices being connected, their whereabouts, the conditions in which they operate and the amount of data they produce all impact on the CSP’s choice of supporting network equipment. One key commonality, however, is that all deployments require a connectivity infrastructure capable of aggregating, securing and backhauling M2M data in a cost-effective, fast and reliable manner.

As the number of connected devices skyrockets, the ability to offer a range of traffic management services will be a clincher for operators and CSPs looking to gain a foothold in this market and differentiate their offerings. The good news is that many of these can now be delivered via the CPE, without the need for additional devices. Establishing always-on connectivity is of course vital, but the ability to provide a robust business continuity failover to LTE could also prove attractive to customers for whom any amount of network downtime is harmful, no matter how small. Network monitoring and dynamic traffic routing software managed via the CPE can also be used to support traffic throughput at peak load times.
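A business-continuity failover policy of the kind described can be sketched as a simple preference-ordered link selector; the link names are illustrative:

```python
# Minimal sketch of a CPE failover policy: prefer the fixed line,
# fall back to LTE when the primary is down. Link names are placeholders.

def select_link(link_up: dict) -> str:
    for link in ("fixed", "lte"):  # preference order
        if link_up.get(link):
            return link
    return "none"
```

A real CPE would drive this from link-health probes and would also steer traffic back when the fixed line recovers; the sketch only shows the preference-order decision.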

Before the M2M market can reach true maturity, however, fears relating to data protection and security must be assuaged. Given the limited processing power of M2M’s connecting sensors – which are incapable of performing heavy duty computational functions such as encryption – the opportunity here is in the hands of CSPs and, again, the CPE can help.

See full post

Is integrating SBC functionality into the router the way forward for accelerating the availability of SIP trunking services?

Although many businesses have deployed IP-based PBX systems to handle their corporate telephony needs, so far relatively few have taken the next step to a full SIP trunking service, particularly in Europe.

To some extent this can be explained by the “if it is not broken, why fix it” approach, but it is actually more to do with service providers looking to leverage maximum return from their substantial infrastructure investments and disincentivizing customers who may want to transition from their still-lucrative ISDN connections.

However, many companies are now re-assessing the merits of integrating their video, data and voice requirements in a unified communications (UC) package, opening up new opportunities as well as challenges for service providers. With many of these enterprises sensibly opting for a phased transition to an all-IP UC platform to avoid potentially costly business disruption, CSPs are faced with connecting SIP trunks into an array of IP and legacy PBX and mixed PSTN/IP voice environments.

For Telcos and service providers this means facing an array of non-standard SIP trunk implementations, involving a mix of old and new technologies, that can result in increased operational expenditure combined with reduced revenue potential. Given this double-whammy effect, it is understandable why they tend to be less than enthusiastic in actively promoting an end-to-end IP telephony platform. For that reason, TDM trunks have remained the preferred demarcation line for service providers, even though TDM is converted to VoIP within their network.

However, there now seems to be an increasing momentum and growing market demand for SIP trunking services from enterprises. A recent Infonetics research report forecasts growth in the business adoption of UC and VoIP services to reach $35bn by 2018. Part of the report showed a massive 50% increase in SIP trunking in the US in 2013, with similar growth in EMEA expected to follow in 2014 and beyond.

See full post

Tomorrow’s CPE: the Wimbledon of network virtualization?

Despite the industry’s charge toward network virtualization, the need for customers to connect their routers to non-Ethernet legacy connections is not going away. Couple this with the fact that a bunch of emerging network functions require an on-prem appliance, and the virtualized ‘CPE of the future’ starts to feel, well, really rather physical. So, is the CPE the Wimbledon of the network; ever-present, resistant to change, but perhaps also capable of surprising us all with its innovations?

Take Wimbledon’s white dress code, for example; a deeply entrenched tradition that has become a defining characteristic of the tournament. In recent years, however, the dress discipline has been partially relaxed. Today, the tournament accommodates at least some expressions of color. Similarly, the majority of CPE appliances that today deliver network connectivity and voice gateway functions are specialized devices, and will stoically remain so for the next few years. It’s just too expensive to do otherwise, until fiber, with G.fast as a short-haul copper Ethernet extension, becomes ubiquitous and all voice terminals are IP-based. Out of necessity, therefore, incumbent local exchange carriers (ILECs) will have little option but to support this CPE model. In other words, it looks like the traditionalists, both at the tennis and on the network, can rest easy. For now, at least.

But pressure to change is mounting. Competitive local exchange carriers (CLECs), together with alternative network operators, are more agile and, since they can target Ethernet-only network connections, can move more quickly to a vCPE approach. That said, some network functions will need to remain ‘on premise’, namely link management, service demarcation and service assurance. The network functions that can migrate to the virtualized center will do so over time. In our Wimbledon analogy, this equates to another tournament altogether, played on a far more contemporary surface than Wimbledon’s time-honoured grass. Competition indeed for the ‘historic home of tennis’.

The need for some functions to remain on premise means that the CPE will increasingly comprise hybrid devices – ones that support both traditional network functions and those located in a centralized and virtualized core. Incidentally, this won’t be just a single data center, but rather a set of distributed virtualized centers located within the network infrastructure (most likely at POPs) to mitigate traffic tromboning.

The huge IT challenge of accommodating virtualized delivery of services means that the CPE will also need to become a multi-tongued device able to speak next-generation protocols such as NETCONF and OpenFlow, as well as traditional CLI, TR-069 and SNMP. It seems inevitable that, after holding out for as long as they can, traditionalists both at Wimbledon and in the CPE world will be forced to accept some variations, but only within ‘proper’ limits of course!
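To give a flavor of what "speaking NETCONF" means in practice, here is a minimal sketch, in Python, of building a NETCONF edit-config payload that sets an interface description via the standard IETF interfaces YANG model. The interface name and description are illustrative only, and a real client (e.g. one built on a NETCONF library) would wrap this in an RPC and send it over SSH to the device.

```python
# Minimal sketch: building a NETCONF <edit-config> body in Python.
# The interface name and description below are illustrative only.
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
IF_NS = "urn:ietf:params:xml:ns:yang:ietf-interfaces"  # IETF interfaces model

def build_edit_config(if_name: str, description: str) -> str:
    """Return an <edit-config> config body that sets an interface description."""
    config = ET.Element(f"{{{NC_NS}}}config")
    interfaces = ET.SubElement(config, f"{{{IF_NS}}}interfaces")
    interface = ET.SubElement(interfaces, f"{{{IF_NS}}}interface")
    ET.SubElement(interface, f"{{{IF_NS}}}name").text = if_name
    ET.SubElement(interface, f"{{{IF_NS}}}description").text = description
    return ET.tostring(config, encoding="unicode")

payload = build_edit_config("ge-0/0/1", "uplink to POP")
print(payload)
```

Because the payload is structured XML driven by a YANG model rather than a vendor CLI script, the same logic works against any device that implements the model, which is precisely the vendor independence the post describes.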


Two killer forces shaping the future of the CPE


Powerful forces are steering the development of the CPE, explains Pravin Mirchandani, CMO at service-enabling network access specialist, OneAccess.

As the telecoms industry continues to hack a path toward network virtualization, the terms used to describe future customer premises equipment (CPE) are under almost continuous review. ‘White box’, ‘virtual CPE’ (vCPE) and ‘physical CPE’ (pCPE) each represent their own specific and shifting vision of how the network functions present in today’s CPE will be virtualized. But beneath the jargon, two powerful forces are steering the technology’s development.

1. The need to support non-Ethernet legacy connections

Ethernet is the assumed and, by and large, the only connectivity option for a low-cost white box approach, yet it is far from ubiquitously available as a WAN connectivity option at the customer premises. What’s more, the cost of increasing Ethernet coverage for connecting customer premises (typically by fiber) is growing as the lower cost, high-density deployment options become exhausted. Consequently, one of the key issues that virtualization faces is the need to support legacy connections, bridging TDM-based PBX, alarm and other serial connections to various types of DSL-based WAN access technologies. This means that the bridging technology - the purpose-designed CPE - will be around for some time, especially for network connectivity devices and voice gateways.

2. To work, some functions need to be on the network’s edge

As the guy responsible for products at an access platform CPE vendor, what strikes me about our current work plan and roadmap is the huge amount of additional functionality that our CSP and MSP customers are asking us to deliver in our current-generation CPE. These include link management schemes for failover, bonding and offload; as well as shaping and event-based schemes, to ensure that business-critical Cloud-based applications flow regardless of the state of the network. Additional measurement capability is also being demanded, to remotely diagnose issues and ensure that SLAs are met. Security-hardening is also a request. The list goes on. By their nature, these types of intelligent functions have to reside in the CPE; you can’t failover, offload or measure local service levels remotely from the Cloud.
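The point that failover logic has to live on the CPE itself can be made concrete with a small sketch. The link names, priorities and health states below are hypothetical; a real implementation would drive the health flags from local probes (e.g. DSL sync state or ICMP reachability), which is exactly the local knowledge a Cloud-hosted function lacks.

```python
# Illustrative sketch of CPE-side link failover: prefer the
# highest-priority healthy WAN link, fall back when it goes down.
# Link names and health states are hypothetical.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    healthy: bool   # would be driven by local probes on a real CPE
    priority: int   # lower value = preferred

def select_active_link(links):
    """Pick the highest-priority healthy link, or None if all are down."""
    candidates = [l for l in links if l.healthy]
    return min(candidates, key=lambda l: l.priority) if candidates else None

links = [Link("vdsl0", healthy=False, priority=0),  # primary DSL link is down
         Link("lte0", healthy=True, priority=1)]    # LTE backup is up
active = select_active_link(links)
```

The decision itself is trivial; what matters is that the inputs (link health, local service levels) are only observable at the premises, which is why this class of function resists centralization.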

Given that you can’t economically ‘white-box’ legacy connectivity requirements, nor can you centralize network functions that rightly belong on the customer premises, only part of the CPE is ripe for virtualization. With this in mind, don’t expect today’s CPE appliances to disappear from the network’s edge any time soon.


Packet-based traffic management is the optimum combination for hybrid access protocols


As has been highlighted many times, a major focus of attention for vendors like OneAccess continues to be innovations that ensure businesses have reliable high-speed access to their Cloud-based applications.

There can be little doubt that the Cloud is rapidly changing the fundamental nature of computing for businesses large and small. Analysts such as IDC are even going as far as predicting that terms like public and private clouds will eventually disappear from our vocabulary, with the Cloud simply becoming the de facto standard for business IT provisioning, possibly as soon as 2020.

If IDC is right, the communications industry and its supply chain will, over the next five years, need to agree on the standards framework that will ultimately drive the innovation needed to ensure reliable high-speed Cloud access for all businesses and individual users alike. At the moment, for some, having a connection they can depend on is still a lottery, determined ultimately by their physical location and local link options.

Application performance and availability are the major factors determining the rate of Cloud adoption across the board, with restricted bandwidth and traffic congestion often cited among the primary reasons for delayed migration. If users cannot be assured of protection from frequent disruptions and poor quality of experience (QoE), they are unlikely to fully embrace the Cloud in the time-frame that IDC suggests.

In cases where fiber has not yet reached the cabinet (nor is likely to any time soon), the only realistic and viable solution to overcome these objections is to find ways of efficiently aggregating multiple connections to boost the capacity of the available links. There are several multi-path protocols, such as IFOM, that help boost performance in Wi-Fi/3GPP mobile networks, but an industry standard approach is yet to fully emerge for the aggregation of Wi-Fi, LTE, xDSL and even broadband satellite links between the CPE and a central hybrid aggregation gateway.
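The core idea behind packet-based aggregation can be sketched simply: distribute packets across the available links in proportion to their capacity. The link names and the 3:1 weighting below are illustrative only and do not reflect any particular standard or product; a real gateway would also handle reordering and per-link latency differences.

```python
# Sketch of per-packet weighted distribution across hybrid access links
# (e.g. xDSL + LTE). Link names and weights are illustrative only.
import itertools

def weighted_scheduler(links):
    """Yield link names in proportion to their configured weight -
    a simple stand-in for capacity-aware packet scheduling."""
    expanded = [name for name, weight in links for _ in range(weight)]
    return itertools.cycle(expanded)

links = [("xdsl0", 3), ("lte0", 1)]  # send 3 packets on DSL per 1 on LTE
sched = weighted_scheduler(links)
first_eight = [next(sched) for _ in range(8)]
```

Even this toy scheduler shows why a common standard matters: both ends of the aggregated path must agree on how packets are split and recombined, which is exactly what the CPE and the central hybrid aggregation gateway need to negotiate.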


