As the world becomes increasingly reliant on technology, it has never been more connected and distributed. People have become accustomed to using cloud-based services and mobile devices daily in their working lives, largely with the aim of improving business efficiency.
However, these technologies, along with the arrival of the ‘app store economy’, are having a resounding impact on the proportion of business logic that is executed on inherently insecure devices.
This has created a challenge for software developers: ensuring that their software can run in environments over which they have little control.
In the face of this challenge, Facebook announced that from October 2015 application developers will be required to move to a more secure type of hashing algorithm, SHA-2, in support of digital signatures for their apps.
This is a great development and, as application developers move in this direction, it is also important that they recognise the importance of proper protection of signing keys when developing and releasing code.
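To illustrate the difference the SHA-2 requirement makes, the following minimal Python sketch compares a legacy SHA-1 digest with a SHA-2 (SHA-256) digest of the same artefact. The byte string standing in for an application binary is invented for illustration.

```python
import hashlib

# Example payload standing in for a compiled app binary (illustrative only)
app_binary = b"\x7fELF...example application bytes..."

# Legacy SHA-1: a 160-bit digest, now considered too weak to underpin signatures
sha1_digest = hashlib.sha1(app_binary).hexdigest()

# SHA-2 (SHA-256): a 256-bit digest from the family the new policy requires
sha256_digest = hashlib.sha256(app_binary).hexdigest()

print(len(sha1_digest) * 4)    # digest size in bits: 160
print(len(sha256_digest) * 4)  # digest size in bits: 256
```

The signature itself is computed over this digest, so a stronger hash directly strengthens the resulting signature.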
Signing key security is the pillar of code-signing technology. It is essential for proving the source of software and the identity of its publisher, as well as ensuring that it has not been tampered with since publication.
Digital signatures go further than simple electronic signatures by applying cryptographic techniques to increase security and transparency, which is essential for establishing trust and legal validity. But requiring code to be signed is not, by itself, sufficient to ensure security.
An integral element in securing the code-signing process is the strong protection of the private signing key. If a code-signing key is lost, it may become impossible to publish further upgrades, causing business disruption and user dissatisfaction.
If a key is used with a weak algorithm or is stolen, an attacker may be able to sign a malicious upgrade that steals sensitive data or renders millions of devices inoperable.
If a private key becomes known to anyone besides the authorised individual, that person can create digital signatures that will be accepted as ‘valid’ when verified with the associated public key – signatures that will even appear to come from the organisation identified in the associated digital certificate.
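The sign-then-verify relationship described above can be sketched with a deliberately tiny, textbook RSA example in Python. The parameters are toy-sized and insecure, chosen purely to make the mathematics visible; real code signing uses vetted cryptographic libraries and far larger keys.

```python
import hashlib

# Toy RSA parameters (far too small for real use - illustration only)
p, q = 999983, 1000003          # two small primes
n = p * q                        # public modulus
e = 65537                        # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)              # private exponent (Python 3.8+ modular inverse)

def sign(code: bytes) -> int:
    """'Sign' by transforming the code's SHA-256 digest with the private key."""
    digest = int.from_bytes(hashlib.sha256(code).digest(), "big") % n
    return pow(digest, d, n)

def verify(code: bytes, signature: int) -> bool:
    """Verify by recovering the digest with the public key and comparing."""
    digest = int.from_bytes(hashlib.sha256(code).digest(), "big") % n
    return pow(signature, e, n) == digest

release = b"app-v2.0 binary"
sig = sign(release)
assert verify(release, sig)                            # untampered code verifies
assert not verify(b"app-v2.0 binary (tampered)", sig)  # any change breaks it
```

Note that whoever holds `d` can produce signatures that verify against the public key – which is exactly why the private signing key, not the algorithm, is the asset that must be protected.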
A changing landscape
Today’s threat landscape continues to evolve, with the rise in malware being the most significant change. Business applications running on host servers are increasingly vulnerable to advanced persistent threats (APTs), which are introduced through malware, as well as threats such as hacking and insider attacks.
In the case of APTs, attackers have the opportunity to change application code or device firmware without it being noticed. This is often linked to corporate data theft, but it can progress into a far bigger issue.
Malware can impact anything from smartphones to critical infrastructure such as traffic lights and computers in planes, making the potential impact of a lost code-signing key catastrophic.
Such threats have put more pressure on developers and the security professionals responsible for their release procedures to improve their security practices, and to expand the scope of software being signed to include tools such as scripts and plug-ins.
This is especially true when you consider the fact that application code is like gold dust to attackers – it can provide targeted run-time access to high-value data that is not otherwise protected.
So why are signing keys so difficult to protect? Firstly, these keys are often held on developer workstations, where the focus of the environment is development productivity as opposed to system security – attackers know this and are taking advantage.
Code-signing approval processes can be particularly challenging for medium-to-large software organisations, where the volume and distribution of software-build stations warrants a more centralised approach that leverages shared services and resources – which in turn introduces additional key-management complexities.
Although hardware-based security may sound incongruous as a solution to software and cloud-based vulnerabilities, it remains a tried-and-tested anchor of trust in the vast sea of untrusted processes. It is important to remember that all virtualised workloads are deployed on a hardware platform, in a physical location, at one point in time.
Digital or electronic time-stamping technology can further augment the security of a code-signing system by providing an additional means to validate precisely when code was signed via an embedded trusted time stamp – creating an auditable pathway to a trusted source of time.
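The time-stamping idea can be sketched as a simplified token: a trusted time-stamp authority (TSA) binds a hash of the code signature to the signing time, so a verifier can later confirm when signing happened. This is a loose, illustrative model only – a real TSA signs its tokens with its own private key under RFC 3161; an HMAC with a hypothetical shared secret stands in for that here.

```python
import hashlib
import hmac
import json
import time

# Hypothetical secret held by the TSA; a real TSA uses a private signing key.
TSA_KEY = b"example-tsa-secret"

def timestamp(signature: bytes) -> dict:
    """TSA binds the hash of a code signature to the current time."""
    token = {
        "sig_hash": hashlib.sha256(signature).hexdigest(),
        "signed_at": int(time.time()),
    }
    token["tsa_mac"] = hmac.new(
        TSA_KEY, json.dumps(token, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return token

def check_timestamp(signature: bytes, token: dict) -> bool:
    """Verifier re-derives the MAC and confirms it covers this signature."""
    body = {k: token[k] for k in ("sig_hash", "signed_at")}
    expected = hmac.new(
        TSA_KEY, json.dumps(body, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return (hmac.compare_digest(expected, token["tsa_mac"])
            and body["sig_hash"] == hashlib.sha256(signature).hexdigest())

code_signature = b"...code signature bytes..."
tok = timestamp(code_signature)
assert check_timestamp(code_signature, tok)          # token binds time to signature
assert not check_timestamp(b"other signature", tok)  # and fails for anything else
```

Because the token covers both the signature and the time, neither can later be altered without invalidating it – the auditable pathway the article describes.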
Needless to say, as confidence in and reliance on automation grow, it has never been more important to trust the infrastructure that enables it. In a digital world, code-signing processes, private code-signing keys and digital certificates are of critical importance.
Sourced from John Grimm, Thales e-Security