The five steps to optimise your firewall configurations

As security threats become increasingly sophisticated, managing your firewall correctly has never been more important. IT professionals spend much of their time worrying about flaws and vulnerabilities, but according to Gartner research, 95% of all firewall breaches are caused by misconfiguration, not flaws.

Firewalls are an essential part of your network security, and a misconfigured firewall can damage your security irreparably and give easy access to an attacker. Businesses make many mistakes in their firewall configurations, most of which are easily avoidable.

> See also: Six network security checks to mitigate the risk of data security breaches

Here are five simple steps that can help enterprises optimise their firewall configurations and avoid making the mistakes that most often leave networks and their organisations vulnerable.

Set specific policy configurations with minimum privilege

Firewalls are often installed with broad filtering policies, allowing traffic from any source to any destination. This is often because network operations teams don't know exactly what traffic they need to permit, so they start with a broad rule and plan to tighten it later.

However, the reality is that, due to time pressures or simply not regarding it as a priority, they never get round to defining the firewall policies, leaving the network in a perpetually exposed state.

Organisations should follow the principle of least privilege – that is, giving the minimum level of privilege that the user or service needs to function normally, thereby limiting the potential damage caused by a breach.

Enterprises should also document properly – ideally mapping out the flows that your applications actually require before granting access. It’s also a good idea to regularly revisit your firewall policies to look at application usage trends and identify new applications being used on the network and what connectivity they actually require.
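As a minimal sketch of what such a periodic review could look like in practice, the snippet below flags "any/any" rules in a rule list. The rule format and field names here are illustrative assumptions, not any particular firewall's configuration syntax:

```python
# Illustrative sketch: flag overly permissive "any/any" firewall rules.
# The rule structure below is an assumption for this example.
rules = [
    {"name": "allow-web", "src": "10.0.0.0/8", "dst": "192.168.1.10", "port": "443"},
    {"name": "temp-rule", "src": "any", "dst": "any", "port": "any"},
]

def find_overly_broad(rules):
    """Return the names of rules allowing traffic from any source to any destination."""
    return [r["name"] for r in rules
            if r["src"] == "any" and r["dst"] == "any"]

print(find_overly_broad(rules))  # -> ['temp-rule']
```

A real audit would of course read rules from the device or a management platform, but the principle is the same: broad rules should be found and justified, not forgotten.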

Only run required services

All too often, companies run services on their firewalls that don't need to be there or are no longer used. Common examples include dynamic routing, which as best practice should not be enabled on security devices, and 'rogue' DHCP servers distributing IP addresses on the network, which can cause availability issues through IP conflicts.

It’s also surprising to see the number of devices that are still managed using unencrypted protocols like Telnet, despite the protocol being over thirty years old.

The solution is to harden devices and ensure that configurations are compliant before devices are promoted into production environments. This is something that a lot of organisations struggle with.

By configuring your devices for the function you actually want them to fulfil, and applying least-privileged access before deployment, you will improve security and reduce the chances of accidentally leaving a risky service running on your firewall.
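A pre-deployment compliance check like this can be partly automated. The sketch below scans a device configuration for directives that should not appear on a security device; the directive strings and the sample config are hypothetical stand-ins for whatever your hardening standard specifies:

```python
# Hypothetical pre-deployment check: scan a device config for services
# that a hardening standard says should not run on a security device.
# The directive strings below are assumptions for illustration.
RISKY_DIRECTIVES = {
    "service dhcp": "DHCP should not be served from the firewall",
    "router rip": "dynamic routing is discouraged on security devices",
    "ip telnet server enable": "Telnet management is unencrypted",
}

def audit_config(config_text):
    """Return (directive, reason) pairs for risky lines found in the config."""
    findings = []
    for line in config_text.splitlines():
        line = line.strip().lower()
        if line in RISKY_DIRECTIVES:
            findings.append((line, RISKY_DIRECTIVES[line]))
    return findings

sample = """hostname fw-edge-01
service dhcp
router rip
"""
for directive, reason in audit_config(sample):
    print(f"{directive}: {reason}")
```

Gating promotion to production on a clean audit result is one way to make "harden before deployment" enforceable rather than aspirational.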

Standardise authentication mechanisms

During my work, I often find organisations using routers that don't follow the enterprise standard for authentication. One example I encountered was a large bank that had all the devices in its primary data centres controlled by a central authentication mechanism, but did not use the same mechanism at its remote office.

Because corporate authentication standards were not enforced, staff in the remote branch could access local accounts with weak passwords, and were subject to a different limit on login failures before account lockout.

This scenario reduces security and creates more opportunities for attackers, as it’s easier for them to access the corporate network via the remote office. Enterprises should therefore ensure that any remote offices they have follow the same central authentication mechanism as the rest of the company.
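Checking that every device, including those in remote offices, conforms to the central standard is straightforward to automate against a device inventory. The inventory structure, server name, and lockout threshold below are illustrative assumptions:

```python
# Sketch: verify every device in an inventory points at the central
# authentication server and shares the same lockout threshold.
# The standard's values and the inventory format are assumptions.
CENTRAL_AUTH = {"auth_server": "tacacs.corp.example", "lockout_after": 3}

devices = [
    {"name": "dc1-fw", "auth_server": "tacacs.corp.example", "lockout_after": 3},
    {"name": "branch-rtr", "auth_server": "local", "lockout_after": 10},
]

def nonconforming(devices, standard):
    """Return names of devices whose auth settings deviate from the standard."""
    return [d["name"] for d in devices
            if any(d.get(k) != v for k, v in standard.items())]

print(nonconforming(devices, CENTRAL_AUTH))  # -> ['branch-rtr']
```

Running a check like this on a schedule surfaces exactly the kind of drift described above before an attacker finds it.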

Use the right security controls for test data

Organisations tend to have good governance stating that test systems should not connect to production systems or collect production data, but this is often not enforced, because testers see production data as the most accurate way to test.

However, when you allow test systems to collect data from production, you’re likely to be bringing that data down into an environment with a lower level of security. That data could be highly sensitive, and it could also be subject to regulatory compliance.

> See also: The firewall isn't dead – it's just growing up, and policy has to grow with it

So if you do use production data in a test environment, make sure that you use the correct security controls required by the classification the data falls into.
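One common control is to pseudonymise sensitive fields before production records reach the test environment. The sketch below shows the idea; the field names and record format are assumptions for illustration:

```python
# Illustrative sketch: pseudonymise sensitive fields before production
# records are copied into a lower-security test environment.
# The field names below are assumptions for this example.
import hashlib

SENSITIVE_FIELDS = {"name", "email", "account_number"}

def mask_record(record):
    """Replace sensitive values with deterministic, irreversible tokens."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            # Deterministic hash so joins across records still work in test
            masked[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            masked[key] = value
    return masked

prod = {"name": "Alice Smith", "email": "alice@example.com", "balance": 120.50}
print(mask_record(prod))
```

Deterministic masking preserves referential integrity for testing while keeping the original identities out of the lower-security environment; whether masking alone satisfies your regulatory obligations depends on the data's classification.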

Always log security outputs

While logging properly can be expensive, the costs of being breached, or of being unable to trace an attack, are far higher. Failing to store the log output from your security devices, or not doing so with enough granularity, is one of the worst things you can do in terms of network security: not only will you not be alerted when you're under attack, but you'll have little or no traceability when carrying out a post-breach investigation.

By ensuring that all outputs from security devices are logged correctly, organisations will not only save time and money further down the line, but will also enhance security by being able to properly monitor what is happening on their networks.
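The kind of visibility that good logging buys can be illustrated with a few lines of analysis. The snippet below counts denied connections per source address from firewall log lines; the syslog-style format shown is a simplified assumption, not any vendor's actual log layout:

```python
# Sketch: count denied connections per source IP from firewall logs.
# The log line format here is a simplified, assumed syslog-style layout.
from collections import Counter

logs = [
    "2024-01-10T09:12:01 fw-edge-01 DENY tcp 203.0.113.7 -> 10.0.0.5:22",
    "2024-01-10T09:12:02 fw-edge-01 DENY tcp 203.0.113.7 -> 10.0.0.5:23",
    "2024-01-10T09:12:05 fw-edge-01 ALLOW tcp 10.0.0.9 -> 10.0.0.5:443",
]

def denied_sources(lines):
    """Tally the source address of every DENY event."""
    counts = Counter()
    for line in lines:
        parts = line.split()
        if "DENY" in parts:
            counts[parts[parts.index("DENY") + 2]] += 1
    return counts

print(denied_sources(logs))  # -> Counter({'203.0.113.7': 2})
```

Without the logs in the first place, neither this simple tally nor a proper post-breach investigation is possible.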

Enterprises need to continuously monitor the state of their firewall security, but by following these simple steps businesses can avoid some of the core misconfigurations and improve their overall security posture.

Sourced from Kyle Wickert, lead solution architect, Product and Deployment, AlgoSec
