When lightning struck a Google data centre, data was lost forever – but should this ever happen?

Losing data is never a good thing, but for a business, losing data forever is unthinkable. That’s why I had to respond immediately when I read the article that ran this morning in Cloud Pro about a Google data centre being struck by lightning four times, resulting in some people losing their data forever.

Apparently a number of the disks in the Belgian data centre were completely wiped, meaning some people have permanently lost files. ‘Some people have lost their data forever’ is not a statement that anyone ever wants to see.

> See also: Eight secrets most backup vendors don’t want you to know

It’s also not a message that any data storage provider ever wants to give to its customers. But it is the message that Google has had to deliver, or is still delivering, to some of its customers today.

Google has issued a statement saying that only 0.000001% of disk space was permanently affected. That will, I am afraid, be of little comfort to the individual customers whose data has been wiped: a vanishingly small fraction of a fleet that size can still be someone’s entire dataset.

Google went on to say that in almost all cases the data was successfully committed to stable storage, although manual intervention was required to restore the systems to their normal serving state. In a very few cases, however, recent writes were unrecoverable, leading to permanent data loss on the affected Persistent Disk volumes.
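The distinction between data that reached stable storage and ‘recent writes’ that did not is worth unpacking. Writes typically sit in volatile OS and drive caches before being flushed to the physical medium, and a sudden power loss discards anything still in those caches. Here is a minimal sketch of what explicitly committing a write to stable storage looks like (the function name and usage are illustrative, not Google’s code):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and force it onto stable storage before returning.

    Without the fsync, a power failure can discard a 'successful'
    write that is still sitting in volatile OS or drive caches.
    """
    with open(path, "wb") as f:
        f.write(data)         # may land in the OS page cache only
        f.flush()             # push Python's own buffer to the OS
        os.fsync(f.fileno())  # block until the OS flushes to the device
```

Persistent Disk replication operates at a different layer, of course, but the principle is the same: until a write has been acknowledged by stable storage, a power event can erase it.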

In my view, this is all symptomatic of the limitations of always-on, disk-based storage: storage that is susceptible to mains electricity surges and spikes (and to the even more terrifying, but much rarer, electromagnetic pulse).

It is also a wake-up call for people who entrust their data (and, it would seem, the only copy of their data) to storage providers like Google that use traditional disk-based storage.

This highlights that disk is not only short-term, expensive and environmentally unfriendly, but also fragile.

> See also: Why backup and recovery needs to be strategic, not siloed

Google’s Compute Engine has suffered a number of problems recently, with outages caused by software updates and various other issues that are under investigation, so this is definitely not good news for the company.

Data storage and archive services should be designed to deliver data integrity over very long periods of time (decades), irrespective of the data volumes. 
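In archival practice, long-term integrity is typically demonstrated with periodic fixity checks: checksums are recorded when data is ingested and recomputed on a schedule, so silent corruption or loss is detected rather than discovered decades later. A minimal sketch, assuming a hypothetical JSON manifest mapping relative file paths to SHA-256 digests:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Digest a file in 1 MiB chunks so large archives fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def fixity_check(manifest_path: Path) -> list[str]:
    """Recompute checksums for every file listed in the manifest.

    The manifest is a hypothetical JSON map of {relative_path: sha256}
    recorded at ingest; any missing file or mismatched digest signals
    silent corruption or loss.
    """
    manifest = json.loads(manifest_path.read_text())
    root = manifest_path.parent
    failures = []
    for rel_path, expected in manifest.items():
        target = root / rel_path
        if not target.is_file() or sha256_of(target) != expected:
            failures.append(rel_path)
    return failures
```

In a real archive this would run against every replica, held on independent media, so that a failed check on one copy can be repaired from another.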

Google did go on to explain that auxiliary systems restored power quickly, but that the storage devices which had previously suffered degradation from extended or repeated battery drain were the ones that lost some recent writes. Well, that’s okay then – as long as you are not one of the companies that has unfortunately lost its data forever.

Sourced from Nik Stanbridge, VP Marketing, Arkivum

