Virtualised network management in the age of encryption


Traffic encryption is helping us to keep our data private and that’s a good thing.

In the post-Snowden era, safeguarding vital information like credit card numbers, passwords, and other identifiable data from anyone who gains unauthorised access to network traffic has become more important than ever before.

According to Google’s most recent Transparency Report, anywhere from 50 to 90% of traffic on the average broadband network is now encrypted.

This figure will only continue to rise as application vendors and content providers implement new techniques to protect consumer privacy.

From a technology perspective, the main reason for this growth is two-fold: encrypting traffic is now cheaper than in the past (certificates are available for free, and processors are more efficient), and performance at the transport level has improved (HTTPS using TLS 1.3 can carry roughly the same overhead as cleartext HTTP).

>See also: 5 ways to copy virtual application traffic for monitoring

However, while encryption is essential for our collective privacy, it also presents a major challenge for network operators when it comes to traffic visibility and identification for the purposes of customer experience management (CEM) and quality of experience (QoE) management.

Encryption has already had a tremendous impact on the tools traditionally used for this task, and the increased adoption of network virtualisation, where lightweight virtual machines are required, has only amplified the issue. But now there’s light at the end of the tunnel.

Addressing the encryption issue

When you consider all the threats to data security and online privacy that exist, encrypting web browsing traffic is logical.

This type of data doesn’t stress the network and isn’t a significant problem for network management.

The challenge for operators begins when data-hungry services, such as Netflix, start to encrypt their content too.

Without being able to effectively label this type of encrypted data, an operator may only be able to correctly identify this traffic as SSL.

However, this category is largely unhelpful for network management as widespread encryption means this classification will often group several data sources together, including anything from VoIP to video streaming.

Tackling this problem depends on a number of different factors. First and foremost, operators need end-to-end visibility into all new services and content platforms likely to cause a surge in demand on their networks, as it’s widely accepted that poor network performance is linked to an increase in subscriber churn.

It’s also essential for operators to know when and where a network surge is likely to occur, which comes back to the pressing need to be able to identify specific applications behind encrypted network traffic.

>See also: The virtualised network: the backbone for future businesses

Network intelligence and traffic management tools such as Auvik, supported by Deep Packet Inspection (DPI), have typically been seen as vital for operators to address this issue, reduce churn, and, in turn, better target different user sets.

By using DPI, operators have been able to recognise what data is flowing across their networks in real time, prioritising certain traffic where appropriate and identifying where the network is congested so they can take steps to address it.
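As a rough illustration of the idea (a toy sketch, not any vendor’s engine), a DPI pass boils down to matching known byte patterns against the first payload bytes of each flow and aggregating volume per protocol, which is the raw input for prioritisation and congestion decisions:

```python
from collections import Counter

def classify(payload: bytes) -> str:
    """Toy protocol detection from the first payload bytes of a flow."""
    if payload.startswith((b"GET ", b"POST ", b"PUT ", b"HTTP/")):
        return "HTTP"
    if len(payload) >= 2 and payload[0] == 0x16 and payload[1] == 0x03:
        return "TLS"      # encrypted; further work needed to label the app
    return "unknown"

def traffic_mix(flows):
    """Aggregate bytes per detected protocol across observed flows."""
    mix = Counter()
    for payload in flows:
        mix[classify(payload)] += len(payload)
    return mix

flows = [b"GET /index.html HTTP/1.1\r\n",
         b"\x16\x03\x01\x02\x00" + b"\x00" * 512]
print(traffic_mix(flows))
```

The “TLS” bucket here is exactly the problem the article describes: without deeper signatures, everything encrypted collapses into one label.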

Virtualised networks, real problems

Although the challenge posed by encryption is currently being addressed without compromising the privacy protections it provides, new problems are on the horizon.

Increasing reliance on virtualised networks and NFV has renewed the challenge that service providers face.

NFV deployments introduce further application identification concerns, as virtualised networks are often highly distributed and can dynamically change their behaviour and traffic routing protocols based on current network conditions.

Virtualised networks are also typically deployed on commercial off-the-shelf (COTS) hardware, which poses an additional set of issues for packet processing.

Both of these trends have fundamentally changed how DPI engines must operate in order to successfully deliver their core function of application identification.

To maintain the same level of end-to-end network visibility, DPI tools must themselves be virtualised, matching the speed and efficiency of the virtualised networks they are deployed on.

>See also: Cloud computing for IT business leaders in the enterprise

Once network visibility is restored, operators then need to ensure it remains so by partnering with a DPI vendor that constantly updates its signature database to ensure traffic can be accurately identified.

Applications evolve quickly in the internet economy, and a change made by a major application (Netflix, YouTube, Skype, etc.) could cause huge inaccuracies in analytics, or deal a major blow to QoE management, if network traffic is no longer appropriately classified.
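To make that risk concrete, here is a hedged sketch of suffix-based labelling and why a stale signature table silently degrades analytics when a provider adds a new delivery domain. The domain-to-app tables are illustrative only; a real signature database is far richer and continuously maintained.

```python
# Illustrative SNI-suffix signature tables (v1 = stale, v2 = after an update).
SIGS_V1 = {"netflix.com": "Netflix", "googlevideo.com": "YouTube"}
SIGS_V2 = {**SIGS_V1, "nflxvideo.net": "Netflix"}   # update adds a new CDN domain

def label(sni: str, sigs: dict) -> str:
    """Map a server name to an application by exact or suffix match."""
    for suffix, app in sigs.items():
        if sni == suffix or sni.endswith("." + suffix):
            return app
    return "unclassified"

# Before the update, traffic to the new CDN domain vanishes from analytics:
print(label("cdn1.nflxvideo.net", SIGS_V1))   # unclassified
print(label("cdn1.nflxvideo.net", SIGS_V2))   # Netflix
```

A signature set that lags the application by even one release can shift a large share of traffic from a named application into the “unclassified” bucket overnight.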

Moving beyond encryption

With the increasing adoption of NFV, end-to-end visibility of the traffic trends across a network allows operators to prioritise their investments into new technologies and expansion, achieving maximum ROI by identifying what changes will have the biggest positive impact on the overall subscriber experience.

The challenges posed to DPI by encryption and NFV are not insurmountable.

By reconsidering the role of DPI, and working with a vendor who can deliver virtualised solutions that keep pace with demand, networks need not remain in the dark.


Sourced by Beatriz Rojo Martin, product manager, signatures and heuristics, at Procera Networks
