by Shane O'Neill

Stress Test: Bloomberg CIO Deals with Stock Market Data Surge

Aug 11, 2011 · 5 mins

During the stock market's most volatile week in years, the data throughput of Bloomberg LP's trading and financial news network has been put to the test. Here's how the company prepared for the sudden data explosion.

Ever since Standard & Poor’s downgraded the U.S. government’s credit rating last week, the global financial markets have been on a wild ride.

In the midst of a week in which the Dow Jones Industrial Average was down 630 points on Monday, up 420 points on Tuesday and then down 520 points on Wednesday, financial media and technology company Bloomberg LP has been in the eye of the storm.

An unprecedented amount of stock trade data, stock quotes and other financial news and information has flowed through the Bloomberg Professional service, a platform for stock trading tools and financial and investment information used by more than 300,000 financial professionals globally.

Bloomberg processed 41 billion ticks — a tick is a change in a security’s trade, bid or offer price — on Friday, Aug. 5, a 33 percent increase over the last major market peak, in March 2011 (the Japanese earthquake and tsunami). This compares with 20 billion ticks during the financial crisis in 2008, 27.5 billion ticks during the flash crash of May 2010 and 30 billion ticks after the Japan tsunami. (See chart below)

Bloomberg processed twice as many ticks this week as it did during the financial crisis of 2008.

All of this added up to one nerve-racking week for Bloomberg’s CIO Vipul Nagrath, as his months of testing and preparation for extreme data throughput were put to the ultimate test.

Nagrath talked to senior editor Shane O’Neill about how his IT staff has been handling one of the most volatile trading weeks in stock market history.

Is this the most trade data volume you’ve ever seen? Even more than the stock market meltdown of 2008?

Since last Thursday, tick volumes have been unprecedented. Both the data rates and daily aggregate data volumes are the highest we’ve ever seen. They are up dramatically compared to any recent event, including the 2008 crash, the tsunami in Japan, and the flash crash of May 2010. The Japan tsunami pushed tick volume up 8 percent from where it was after the flash crash. But this week we’re up 33 percent from where we were after the tsunami. It’s been dramatic.

What kind of adjustments did you have to make to Bloomberg’s data systems to accommodate the recent volatility?

We’ve been anticipating volatility, and we did a lot of testing to prepare for this kind of trading volume. But that doesn’t mean we slept easy. We didn’t know how high volumes would climb, so we were on high alert to make sure nothing went wrong.

There’s a sustained throughput the entire day, and the concern was that the rate of trading data volume could spike so quickly that we couldn’t handle the throughput. But we were able to handle both the massive incoming per-second rate of data as well as the aggregate data volume for the day.

On the fly, we had to make sure we were covered people-wise, that someone was watching our systems. As far as actually coding or tuning the system, we didn’t have to do anything because we’ve spent the last two months preparing for just this type of event. And that preparation paid off.

What did that preparation entail?

We needed to ensure we had enough bandwidth so our hardware could handle both the data rate and aggregate data volume. In addition, we had been proactively stress testing our systems to see where bottlenecks would show up. We stress tested beyond any data throughput we’ve ever seen and we played that back into our system at two times, three times, four times the original speed. We then studied where the bottlenecks showed up, and made some software updates to handle them. We have been doing these tests for the past couple months.
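The replay technique Nagrath describes — feeding recorded market data back into the system at two, three or four times its original speed to see where bottlenecks appear — can be sketched roughly as follows. This is an illustrative Python sketch only; the function and variable names are hypothetical and do not reflect Bloomberg’s actual tooling:

```python
import time

def replay_ticks(ticks, handler, speedup=1.0):
    """Replay recorded (timestamp, payload) ticks into `handler`,
    compressing the original inter-arrival gaps by `speedup`.
    Returns the worst lag (seconds) the handler fell behind schedule,
    a rough indicator of a throughput bottleneck."""
    if not ticks:
        return 0.0
    start_wall = time.monotonic()
    t0 = ticks[0][0]  # timestamp of the first recorded tick
    max_lag = 0.0
    for ts, payload in ticks:
        due = (ts - t0) / speedup  # when this tick should fire, sped up
        lag = (time.monotonic() - start_wall) - due
        if lag < 0:
            time.sleep(-lag)  # ahead of schedule: wait until the tick is due
        else:
            max_lag = max(max_lag, lag)  # falling behind: record the slippage
        handler(payload)
    return max_lag
```

In a test like this, a `max_lag` that grows with the speedup multiplier points to a stage of the pipeline that cannot keep up with the accelerated feed — the kind of bottleneck the weekend QA runs were designed to surface.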

When do you find time to do QA testing?

Tests are hard to schedule because the market is busy all day long, so we obviously can’t run them mid-day or any time during the week. We only get a couple of hours on the weekend. So on Saturdays we’ve been running our system through our QA environment before it reaches production: we see how hard we can push it, find the bottlenecks, fix any problems, and then move it up to production.

We were anxious all of last week but the other shoe never dropped. On Friday, however, when I read about the credit rating downgrade I was full of nervous energy and anxious. We spent a lot of time over the weekend preparing for Monday.

You have many big financial clients. Have you heard anything from them about the performance of Bloomberg’s services this week — good, bad or otherwise?

No. Clients have not had any major issues so far. Our goal is to not bother clients with how much processing we’re doing in the background. As long as they are getting the data they need on their screens, that’s all that matters. Nobody called to compliment us, but better yet nobody has called to complain. And in our world, no news is good news.

Shane O’Neill covers Microsoft, Windows, Operating Systems, Productivity Apps and Online Services. Follow Shane on Twitter @smoneill, and follow everything on Twitter @CIOonline and on Facebook.