Seatwave rehouses web infrastructure to meet spiky traffic

Seatwave is an online secondary marketplace where users can buy and sell tickets to gigs, concerts and sporting events. Founded in 2006, the site now attracts millions of page impressions every day, and well over a million tickets have been sold through the marketplace.

When ecommerce operations manager Perry Dyball joined the company in 2008, the infrastructure hosting its ticketing platform was “tiny”, he says, and was struggling to keep up with growing demand.

“It was a classic start-up situation,” he explains. “The business was having problems with reliability and uptime because there were huge levels of growth in the early days. The infrastructure needed to be scaled out.”

Dyball was brought in to help Seatwave ‘scale out’ the supporting infrastructure, and in 2009 the company moved to what he calls a “very traditional” hosted web infrastructure solution.

“We had a typical multi-tier architecture, with a web server, application server and database server,” he says. “There was a disaster recovery site in a different data centre. It was a big step away from what Seatwave had at the time.”

However, two years later the market had moved on. The maturity of virtualisation and the advent of cloud computing prompted Dyball to seek new alternatives.

Cloud computing in particular promised to help Seatwave deal with peaks in demand, which typically followed TV advertisements for the site. “We see sharp spikes in traffic when we’ve done TV advertising,” he says. “Traffic would hit us within 15 seconds of the advert starting to play.”

However, Seatwave’s traffic spikes are so steep that an automated cloud hosting system that scales according to demand could not keep pace. “No autoscaling solution in the world can deal with them,” he says. This eliminated Amazon Web Services from Dyball’s selection process.
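The arithmetic behind that judgement is straightforward: a reactive autoscaler only adds capacity after a monitoring metric breaches its threshold, and the new instances then need minutes to boot and warm up, while a televised advert fills the site within seconds. The back-of-the-envelope sketch below uses purely illustrative figures, not Seatwave’s own numbers, to show the mismatch.

    # Rough illustration of why reactive autoscaling lags a TV-driven spike.
    # All figures are illustrative assumptions, not Seatwave's real numbers.
    spike_ramp_seconds = 15        # traffic arrives within ~15s of the advert airing
    metric_interval_seconds = 60   # monitoring period before the breach is even seen
    scale_decision_seconds = 60    # time for the autoscaler to act on the breach
    instance_boot_seconds = 180    # boot plus application warm-up for a new server

    reaction_time = metric_interval_seconds + scale_decision_seconds + instance_boot_seconds

    print(f"Traffic peaks after ~{spike_ramp_seconds}s")
    print(f"New capacity arrives after ~{reaction_time}s")
    print(f"Gap of ~{reaction_time - spike_ramp_seconds}s in which the peak is served "
          "by whatever capacity was already running")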

Instead, he says, Seatwave needed to be able to “pre-provision” servers in advance of an anticipated spike. This prompted the company to select PEER1 Hosting, which offers what Dyball describes as “agile data centre” services.

“Previously, when I wanted to deploy some new servers there would be a 12-week procurement process, and then a four-week build,” he explains. “Because PEER1 uses stock-level hardware, they’ve got tonnes of it just sitting there and will add it to your environment almost on demand. It’s not instant scalability, but if I’ve got a big project coming up, they can bring new hardware into the mix very quickly.”
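In practice, pre-provisioning starts with a capacity estimate: forecast the expected peak ahead of a campaign, then ask the host to stand up enough servers in advance to absorb it with some headroom to spare. A minimal capacity-planning sketch, using a made-up helper and hypothetical figures rather than anything from Seatwave’s playbook, might look like this.

    import math

    def servers_needed(peak_rps: float, rps_per_server: float, headroom: float = 0.3) -> int:
        """Estimate how many web servers to pre-provision for an anticipated spike.

        peak_rps       -- forecast peak requests per second (e.g. from a past TV campaign)
        rps_per_server -- measured sustainable requests per second for one server
        headroom       -- margin kept free so response times stay acceptable at peak
        """
        required = peak_rps / (rps_per_server * (1 - headroom))
        return math.ceil(required)

    # Hypothetical campaign forecast: 6,000 req/s peak, 400 req/s per server
    print(servers_needed(peak_rps=6000, rps_per_server=400))  # -> 22 servers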

While pre-provisioned infrastructure lets Seatwave cope with very rapid traffic spikes, it does mean there is a fixed upper limit to the amount of traffic the site can handle.

Dyball has defined a “stop limit” on the PEER1 infrastructure, which he describes as “the point where customers are still getting a good experience”. “There’s no point in having a page that takes 20 seconds to load. The rate at which people transact over the site nosedives at that point,” he says.

Once demand exceeds that pre-agreed capacity, customers are directed to a cloud-hosted queueing system to wait until more capacity becomes available.
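A minimal sketch of that gating logic, assuming a hypothetical stop limit, a placeholder queue URL and a simple in-process session counter rather than anything from Seatwave’s actual implementation, might look like this.

    STOP_LIMIT = 5000   # hypothetical: concurrent sessions beyond which pages get too slow
    QUEUE_URL = "https://queue.example.com/wait"   # placeholder for the cloud-hosted queue

    active_sessions = 0   # in production this figure would come from the load balancers

    def admit_or_queue(path: str) -> str:
        """Admit the visitor while the site is under its stop limit; otherwise
        hand back a redirect to the external queueing system."""
        global active_sessions
        if active_sessions >= STOP_LIMIT:
            # Past the stop limit, page load times balloon and conversion nosedives,
            # so it is better to park visitors in a queue than to degrade for everyone.
            return f"302 Redirect -> {QUEUE_URL}?return={path}"
        active_sessions += 1   # a real gate would also release this slot when the session ends
        return f"200 OK -> serving {path}"

    if __name__ == "__main__":
        print(admit_or_queue("/tickets/some-gig"))
        active_sessions = STOP_LIMIT      # simulate the site hitting its stop limit
        print(admit_or_queue("/tickets/some-gig"))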

Seatwave’s infrastructure is delivered from two separate PEER1 data centres simultaneously, ensuring continuous service in the case of an outage. Traffic to the infrastructure is routed through two load balancing systems – one from F5 that distributes traffic between servers within a data centre, and another from Dyn that directs traffic to the appropriate site based on current workloads.
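The effect of the two tiers can be pictured as two routing decisions: a global one that steers visitors to whichever data centre has spare capacity (the role Dyn’s DNS service plays) and a local one that spreads requests across the servers inside that data centre (the role of the F5 appliances). The sketch below is a simplified illustration with an invented topology, not Seatwave’s actual configuration.

    import itertools

    # Hypothetical topology: two data centres, each with its own pool of web servers.
    DATA_CENTRES = {
        "dc-east": {"load": 0.45,
                    "servers": itertools.cycle(["east-web-1", "east-web-2", "east-web-3"])},
        "dc-west": {"load": 0.80,
                    "servers": itertools.cycle(["west-web-1", "west-web-2"])},
    }

    def pick_data_centre() -> str:
        """Global tier (DNS-style): send traffic to the site with the most spare capacity."""
        return min(DATA_CENTRES, key=lambda name: DATA_CENTRES[name]["load"])

    def pick_server(dc: str) -> str:
        """Local tier (load-balancer-style): round-robin across servers in one data centre."""
        return next(DATA_CENTRES[dc]["servers"])

    for _ in range(4):
        dc = pick_data_centre()
        print(dc, "->", pick_server(dc))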

The database that supports the website is replicated between the two data centres too, although only one copy is live, with the other “just taking shadow copies of the data all the time,” Dyball explains.
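In broad terms that is an active/passive arrangement: one database accepts writes while the standby continuously applies copies of those changes, ready to be promoted if the live site fails. The toy sketch below shows the general idea only; the replication mechanics are generic and not a description of Seatwave’s database product.

    class Database:
        def __init__(self, name: str):
            self.name = name
            self.rows: dict[str, str] = {}
            self.live = False

        def write(self, key: str, value: str) -> dict:
            assert self.live, f"{self.name} is a shadow copy and does not accept writes"
            self.rows[key] = value
            return {"key": key, "value": value}   # change record shipped to the standby

        def apply(self, change: dict) -> None:
            """Standby continuously applies shipped changes -- the 'shadow copies' of the data."""
            self.rows[change["key"]] = change["value"]

    primary, standby = Database("dc-east-db"), Database("dc-west-db")
    primary.live = True

    change = primary.write("order:123", "2 tickets")
    standby.apply(change)                          # the standby stays in step with the live site

    primary.live, standby.live = False, True       # fail over: promote the shadow copy
    print(standby.rows)                            # the promoted copy already holds the data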

This resilience guarantees continued service, says Dyball. “Even if I lost a whole data centre right in the middle of a major sale, I would still have capacity. I might have to start sending customers to the queue sooner, but I’ve got that capability to keep our site up, running and transacting.”

Dyball’s aim now is to enhance the cloud-hosted queueing system so that customers are engaged with the site, not simply waiting to be served. “I want to make it a lot richer so that it can become a true browsing experience rather than just a queuing experience,” he says.
