by Thor Olavsrud

How AWS Helped Scale App for World Cup Social Media Hub

Aug 05, 2014

If you're going to operate the official social media hub for the FIFA World Cup, you have to be able to overcome the problem of scale. Spredfast did it with flying colors with the help of Amazon Web Services.


Running a social media hub for any large event poses a challenge, but perhaps none quite so daunting as running the FIFA Social Hub, the official social platform of the 2014 FIFA World Cup.

In terms of audience, not even the Olympic Games surpass the FIFA World Cup. The U.S.-Portugal match on June 22 drew 24.7 million viewers in the U.S. alone, and nearly 43 million people tuned in to Brazilian network TV Globo for Brazil’s opening game against Croatia. By contrast, the last and most-viewed game of the 2013 World Series between the St. Louis Cardinals and Boston Red Sox drew 19.2 million viewers.

Hordes of people watching the matches on television and in person were engaging with social media throughout the experience. The FIFA Social Hub was built to draw on the torrential stream of data from Twitter and Facebook, displaying social activity from around the world on an interactive map. It also featured a trending section that could be sorted by team or player, and a mosaic that could be filtered by team or to show what fans were saying about a particular match.

How Do You Scale for That?

Hosted by FIFA and its partner Adidas, the social hub itself was created and run by social media management company Spredfast, which drew on Amazon Web Services (AWS) to meet the enormous scaling challenges involved.

“We served it in over 30 different languages, including Arabic,” says Eric Falcao, CTO of Spredfast. “All of it was constantly updating in real time. When events happen, social media explodes. On the back end, our software needs to look at every single piece of content as fast as possible and make a decision about what customer it’s relevant to and then execute their business rules. In a normal day, you might see 5,000 items per second, but in the middle of a game after a goal is scored, it will burst to 50,000 items per second.”
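A heavily simplified sketch of the per-item decision Falcao describes might look like the following. The customer names and keyword rules here are invented for illustration; Spredfast's actual rule engine is not public.

```python
# Hypothetical sketch: inspect each incoming social item once and fan it
# out to every customer whose keyword rules it matches. At burst rates of
# 50,000 items per second, this per-item check is the hot path.

def route(item, rules):
    """Return the customers whose keyword rules match this item's text."""
    text = item["text"].lower()
    return [customer for customer, keywords in rules.items()
            if any(keyword in text for keyword in keywords)]

# Invented example rules for two customers.
rules = {
    "adidas": ["worldcup", "brazuca"],
    "fifa": ["worldcup", "goal"],
}

print(route({"text": "GOAL! #WorldCup"}, rules))  # ['adidas', 'fifa']
```

Once an item is matched to a customer, that customer's business rules (moderation, display placement, and so on) would run downstream, which is why keeping this first pass fast matters at burst rates.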

And during the Germany-Brazil semifinal, in which Germany shocked the world by routing the host nation 7-1, Falcao says those 50,000-items-per-second spikes were sustained for 10 to 15 minutes at a stretch.

But despite the massive load, there were no hiccups and no service interruptions, Falcao says.

“Our hiccups came early in our life as a business,” he says. “Years ago we learned how to deal with this stuff. This was very smooth.”

Been There, Scaled That

Spredfast is no stranger to managing social media for large-scale events. It was Spredfast’s technology that powered the White House’s Twitter Town Hall event in July 2011, when the president spent 21 minutes answering questions via Twitter.

“Our technology filtered the tweets, found the best questions and allowed for workflow to bring those questions to the president,” Falcao says.

Still, even an event like that doesn’t compare with the magnitude of the FIFA Social Hub. Falcao says it was only possible because Spredfast was born in the cloud and was able to leverage the power of AWS.

“Cloud thinking was in our DNA as an engineering team,” Falcao says. “As we grew, we’ve had many events like this — elections, domestic and international sporting events. We’ve always had to deal with that scale, even when we were very young as a company.”

Turning to Amazon for Ramping

The hub used Amazon CloudFront as its content delivery network and Amazon EC2’s Auto Scaling capability to ramp servers up or down based on need. Falcao says that at times front-end traffic grew as much as 15x.
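As a rough illustration, the kind of CloudWatch-alarm-driven scaling policy EC2 Auto Scaling supported at the time can be described with a small set of parameters. The group and policy names below are hypothetical; in a real deployment the resulting dict would be passed to boto3's `autoscaling.put_scaling_policy(**params)` and attached to a CloudWatch alarm that fires when traffic climbs.

```python
# Sketch of a simple scaling policy: when the associated CloudWatch alarm
# fires, add a fixed number of instances to the Auto Scaling group.
# All names and numbers are illustrative assumptions.

def scale_up_policy(group, add_instances=4, cooldown=300):
    """Build parameters for a policy that adds `add_instances` servers."""
    return {
        "AutoScalingGroupName": group,
        "PolicyName": f"{group}-scale-up",
        "AdjustmentType": "ChangeInCapacity",  # add a fixed instance count
        "ScalingAdjustment": add_instances,
        "Cooldown": cooldown,  # seconds to wait before scaling again
    }

params = scale_up_policy("social-hub-web")
print(params["PolicyName"])  # social-hub-web-scale-up
```

A mirror-image policy with a negative `ScalingAdjustment` handles the ramp back down after traffic subsides.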

Spredfast also relied on Amazon Elastic Load Balancing, which automatically distributes incoming application traffic across EC2 instances. That allowed it to scale along with its API requests, even when those requests peaked at more than 300 million per hour, upward of 80,000 per second.

“One thing that we did do differently, because the games were on a schedule, [was] set up some triggers,” Falcao says. “But instead of waiting for some metrics and triggers to go off, we overprovisioned 15 minutes prior to game time. It would ramp up the amount of compute resources we needed. We had that so dialed in that we had a script that did the scale up and scale down jobs. We didn’t have to be there to push a button.”
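The scheduled overprovisioning Falcao describes can be sketched as a scheduled scaling action: 15 minutes before kickoff, force the group up to a generous desired capacity rather than waiting for reactive triggers. The group name and capacity below are hypothetical, and the dict mirrors the parameters of boto3's `autoscaling.put_scheduled_update_group_action()`.

```python
from datetime import datetime, timedelta

PREGAME_LEAD = timedelta(minutes=15)

def pregame_scale_action(group, kickoff_utc, desired_capacity):
    """Build a scheduled action that scales up 15 minutes before kickoff.

    Names and capacities are illustrative; in a real deployment the dict
    would be passed to autoscaling.put_scheduled_update_group_action().
    """
    return {
        "AutoScalingGroupName": group,
        "ScheduledActionName": f"{group}-pregame-{kickoff_utc:%Y%m%d%H%M}",
        "StartTime": kickoff_utc - PREGAME_LEAD,  # fire ahead of the burst
        "DesiredCapacity": desired_capacity,
    }

# e.g. a match kicking off at 20:00 UTC scales up at 19:45 UTC.
action = pregame_scale_action("social-hub-web", datetime(2014, 7, 8, 20, 0), 40)
print(action["StartTime"])  # 2014-07-08 19:45:00
```

The design rationale matches the quote: a goal-driven burst arrives faster than metric-driven triggers can react, so when the load is on a known schedule, provisioning ahead of it beats scaling in response to it. A companion action after the final whistle scales the group back down.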
