The Guardian attempts to commercialise openness

Having dismissed it for at least a decade, many of the world’s largest media organisations are warming up to the idea of charging people to access their websites. “I think when they’ve got nowhere else to go, they’ll start paying,” said News International head honcho Rupert Murdoch in a recent interview.

The UK’s Guardian News & Media (GNM), meanwhile, has very publicly rejected this approach. “If you universally make people pay for your content it follows that you are no longer open to the rest of the world,” Guardian editor Alan Rusbridger said in a speech in January 2010. “That might be the right direction in business terms … but it removes you from the way people the world over now connect with each other.”

This does not mean the media group is foregoing commercial opportunities, however. “Let’s be very clear about this,” GNM’s head of technology strategy Stephen Dunn told Information Age today. “’Open versus closed’ is orthogonal to ‘free versus paid’.”

This week, GNM unveiled the commercial model for what it calls its ‘open platform’. This consists of a trio of application programming interfaces (APIs), which software developers can use to access the Guardian’s content and build it into their own sites and applications.

GNM’s approach to monetising these APIs is to offer varying degrees of access in exchange for varying degrees of financial investment. It will be free to access metadata, such as the headline or date stamp of stories. At the other end of the spectrum, sponsors will be able to develop entirely new applications around Guardian content.
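In practice, the free metadata tier amounts to a simple HTTP query against the content API. The sketch below is illustrative only: the endpoint, parameter names and fields are assumptions, not the Guardian’s documented API.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names -- illustrative assumptions,
# not the Guardian's documented API.
BASE_URL = "http://content.example.com/search"

def build_metadata_query(keyword, api_key, fields=("headline", "publication-date")):
    """Build a query URL asking only for free-tier metadata fields."""
    params = {
        "q": keyword,                      # search term
        "show-fields": ",".join(fields),   # restrict the response to metadata
        "api-key": api_key,                # identifies the developer's access tier
    }
    return BASE_URL + "?" + urlencode(params)

url = build_metadata_query("travel", "demo-key")
print(url)
```

A developer on a paid tier would presumably request richer fields (full body text, images) with the same call, with the API key determining what the response includes.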

An example of the latter is tourist website Enjoy England, which has integrated Guardian reviews and lifestyle features into a map of the country.

Underneath the platform

Interestingly, the APIs do not call content directly from the Oracle database that supports the Guardian’s website, which is hosted in GNM’s own data centres. Instead, all content is replicated into what Dunn describes as a ‘data repository’, hosted on Amazon Web Services’ EC2 cloud service. That repository is built on Solr, an open source, scalable search platform that allows third parties to search the content programmatically.

Solr was the search technology that met GNM’s stringent requirements, explains Dunn. “We’re putting the open platform on a commercial footing, so it has to have zero downtime, even if we’re migrating data off, and it has to be incredibly reliable and incredibly scalable. And we needed to run this in the cloud, because we didn’t know how many partners were going to get on board. Solr ticked all the boxes, and was just so easy to use.”
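Programmatic search against a Solr index goes through its standard `/select` request handler over HTTP. The host, core name and field name below are assumptions for illustration; the `q`, `rows` and `wt` parameters are standard Solr query parameters.

```python
from urllib.parse import urlencode

# Host, core name ('content') and field name ('body') are assumptions
# for illustration; /select and the q/rows/wt parameters are standard Solr.
SOLR_BASE = "http://localhost:8983/solr/content/select"

def build_solr_query(text, rows=10):
    """Build a Solr select URL that full-text searches the content index."""
    params = {
        "q": "body:%s" % text,  # search the (assumed) 'body' field
        "rows": rows,           # number of documents to return
        "wt": "json",           # response format
    }
    return SOLR_BASE + "?" + urlencode(params)

query_url = build_solr_query("open platform")
print(query_url)
```

Because the repository is a replica rather than the live editorial database, heavy third-party query traffic like this never touches the Oracle system behind the website.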

Dunn expects that the cloud-based repository will become the data source for not only third party applications, but GNM’s own development projects too. “Our next iPhone app will be powered by this repository,” he says.

The Solr repository was deployed with the help of Lucid Imagination, a support organisation that employs some of the original authors of the Solr codebase.

“One of their consultants gave us an MOT of what we were doing,” recalls Dunn. “They managed to take our indexing time down from about 20 hours to one hour.”

Pete Swabey

Pete was Editor of Information Age and head of technology research for Vitesse Media plc from 2005 to 2013, before moving on to be Senior Editor and then Editorial Director at The Economist Intelligence...
