Security by design is a sound principle: it is intended to ensure that networks, systems, technologies and products are all designed and built securely in the first place. The idea is that once they are deployed and in use, we don’t have to worry, particularly when data is being stored, moved or used by an application.
- Establish the context before designing a system: Before you can create a secure system design, you need a good understanding of the fundamentals and must take action to address any identified shortcomings.
- Make compromise difficult: Designing with security in mind means applying concepts and using techniques which make it harder for attackers to compromise your data or systems.
- Make disruption difficult: When high-value or critical services rely on technology for delivery, it becomes essential that the technology is always available. In these cases, the acceptable percentage of ‘down time’ can be effectively zero.
- Make compromise detection easier: Even if you take all available precautions, there’s still a chance your system will be compromised by a new or unknown attack. To give yourself the best chance of spotting these attacks, you should be well positioned to detect compromise.
- Reduce the impact of compromise: Design to naturally minimise the severity of any compromise.
It’s difficult to argue with these principles and the goal of security by design – but there is a weakness. Security by design works well if you’re building a brand new, stand-alone infrastructure from scratch, or if you’re developing your own software application. But virtually every system, network or application is connected to something else, either via an API or the internet. So, while your system may be secure, there is no guarantee that the connected systems have been designed with the same rigour and attention to detail. Secure data silos are fine while the data stays inside them, but in real life data gets exported, and it is then no longer protected.
Furthermore, the appetite and budgets for a rip-and-replace approach to IT are not what they used to be, which means there will be existing infrastructure components that will not be removed, and software that cannot be retrofitted with additional security.
In competitive markets, where time to market can be the difference between success and failure, spending time on designing and testing robust cyber security is often seen as an unnecessary barrier, even if the expertise is available. The drive to innovate and bring new systems online quickly can easily lead to security vulnerabilities. We only have to look at the number of early IoT products that have been successfully hacked by security researchers and cyber criminals to realise the risks.
So, while we should definitely encourage and support the principle of security by design, we certainly cannot become complacent and rely on it alone.
Time to focus on the data
A fundamental assumption on which the traditional approach to security is based is that you want to keep the attackers out. But this is simply impossible, otherwise we would not see the daily headlines about successful cyber attacks. So, if we cannot keep the cyber criminals from gaining access to our networks and systems, there needs to be another way of protecting data. IT security must rethink its traditional ‘castle and moat’ methods of protection and prioritise a ‘data centric’ approach, where security is built into data itself – data security by design, you could say.
And this means protecting data wherever it exists: in transit, in use and at rest. Data in transit is digitised information traversing a network, such as when sending an email, accessing data from remote servers, uploading or downloading files to and from the cloud, or communicating via SMS or chat. Data in use is information actively being accessed, processed or loaded into dynamic memory, such as active databases, or files being read, edited or discarded. Data at rest is stored in a digital form on a physical device, like a hard disk or USB drive.
Securing data wherever it exists ensures that if it is stolen at any point, it remains protected and therefore useless to the thief – even if extracted by an ‘inside’ member of staff. With transparent, 100% file encryption, all data remains protected no matter where it gets copied, because security is part of the file rather than a feature of its storage location. And by applying this principle universally, IT security experts no longer need to spend hours tweaking data classification rules so that ‘important’ data gets more strongly protected.
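To make the idea concrete, here is a minimal Python sketch of file-level protection in which the nonce, ciphertext and integrity tag together *are* the file, so the protection travels with every copy. The scheme shown (a SHA-256-derived keystream with an HMAC tag) is an illustration only, not production cryptography; a real deployment would use a vetted library and an authenticated cipher such as AES-GCM.

```python
import hashlib
import hmac
import os


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream block by block from SHA-256
    # (CTR-style). Illustrative only - not a vetted cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt_file_bytes(key: bytes, plaintext: bytes) -> bytes:
    # The protected file is nonce || ciphertext || MAC, so the security
    # is part of the file itself, wherever it gets copied.
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag


def decrypt_file_bytes(key: bytes, blob: bytes) -> bytes:
    # Verify integrity before decrypting: a stolen or tampered copy is
    # useless without the key.
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong key, or file tampered with")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))
```

Because the ciphertext and tag make up the file's entire content, copying it to a USB stick, a cloud share or an attacker's server changes nothing: without the key, the bytes stay opaque.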
In the recent 2020 Ponemon report, 67% of respondents said that discovering where sensitive data resides in the organisation is the number one challenge in planning and executing a data encryption strategy, while 31% cited classifying which data to encrypt as difficult. Because what is ‘sensitive’ data, anyway? If information classification is used to drive data encryption policy, human error will inevitably leave a significant amount of information misclassified.
Keeping it simple
Historically, there has been a trade-off between security and ease of use. For example, full disk encryption is easy to deploy, but security is compromised because a running system seamlessly decrypts any data for any process – legitimate or not. But with the technology and processing power to deliver both, full data protection that is transparent to the end user is within our reach. The ability to slide encryption technology in ‘behind’ other software automatically secures data without having to change any applications. And by actively choosing to encrypt all data, all files, no matter where they are stored, we are finally designing security into the only thing which has value – the data itself.
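One way to picture encryption sliding in ‘behind’ other software is a file-like wrapper: the application reads and writes plaintext exactly as before, while every byte that actually reaches storage is ciphertext. The class below is a hypothetical sketch of that pattern, not a real product API; the encrypt/decrypt hooks are pluggable, and a real deployment would slot in an authenticated cipher rather than the toy transform used in the demo.

```python
import io


class TransparentEncryptedFile:
    """File-like wrapper: the application sees plaintext, storage sees
    only ciphertext. encrypt/decrypt are pluggable callables taking and
    returning bytes (illustrative placeholders for a real cipher)."""

    def __init__(self, backing, encrypt, decrypt):
        self._backing = backing      # underlying storage (any file-like object)
        self._encrypt = encrypt
        self._decrypt = decrypt

    def write(self, plaintext: bytes) -> None:
        # Encrypt on the way down: only ciphertext ever hits storage.
        self._backing.seek(0)
        self._backing.truncate()
        self._backing.write(self._encrypt(plaintext))

    def read(self) -> bytes:
        # Decrypt on the way up: the application never sees ciphertext.
        self._backing.seek(0)
        return self._decrypt(self._backing.read())


# Demo with a toy XOR transform standing in for a real cipher.
_toy = lambda data: bytes(b ^ 0x5A for b in data)
store = io.BytesIO()
f = TransparentEncryptedFile(store, encrypt=_toy, decrypt=_toy)
f.write(b"payroll data")
```

The application calling `write` and `read` is unchanged; only the wrapper knows that `store` holds ciphertext, which is the essence of making security transparent to the end user.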