How to Secure Your Future With Robust Risk Assessment

Richard Gardner, CEO of Modulus Financial Engineering, offers his advice on how to avoid, prevent and mitigate the risks associated with high-volume automated software -- and the employees who wrote the code.

Last month, Knight Capital Group lost $440 million in half an hour due to a bad automated financial trade. The loss disrupted the market, hurt consumer confidence and, according to Business Insider, led the Securities and Exchange Commission (SEC) to consider new regulations for software that controls financial transactions.

Meanwhile, most of the folks in the know aren't talking.

Enter Richard Gardner, CEO of Modulus Financial Engineering. Based in Scottsdale, Ariz., Modulus has offered financial products and consulting to the industry since 1997. Today the company has 55 employees and a customer list that includes Barclays, Bank of America, Chase and E*Trade.

Richard Gardner, CEO, Modulus Financial Engineering

Company founder Gardner began trading in financial systems at 15, using his family's account. At 23 he wrote his first software system to assist with trading; it analyzed commodities prices based on crop and weather data.

In 13 years inside the financial services industry, Gardner has seen ups, downs, evolution and a crash or two. Let's hear what he has to say about the risks associated with high-volume software, how to conduct a security risk assessment and how to protect your firm from both internal and external cybercriminals.

Be on the Lookout for Accidental, Intentional Bugs

With the recent Knight Capital story and related software "glitches" in the equities market, do you see high-frequency trading as a risk? Is it a possible security threat?

Richard Gardner: Yes. With the apparent lack of software testing and QA demonstrated by certain financial institutions recently, there are obviously associated security threats and potential cyberwar targets within the financial industry. Electronic stock exchanges and financial institutions are near the top of the list of targets, according to Kaspersky Lab.

Talk for a minute about the general risk of automated trades. Are other companies exposed to this problem, even ones not on Wall Street—say, those that provide automated EDI transactions? What should they be doing to reduce this risk?

Gardner: One day I will tell stories to my grandchildren about how I used to trade the stock market by calling a human broker on a landline telephone. I would give my broker the order to buy a stock at a certain price and simultaneously place a "stop loss" order below the price where I purchased, so that if the market turned against me, I would be automatically "stopped out" and would suffer only a small loss on my total account. Even in that scenario, a form of automated trading was occurring. It was just that my broker was carrying out my instructions and not a computer.
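The stop-loss mechanics Gardner describes can be sketched in a few lines. This is an illustrative model only; the function name and the string actions are assumptions, not any real broker's API.

```python
# A minimal sketch of a stop-loss order: a standing instruction to sell
# once the market trades at or below a preset stop price.
def check_stop_loss(stop_price: float, market_price: float) -> str:
    """Return the action for one price tick of an automated stop-loss order."""
    if market_price <= stop_price:
        return "SELL"  # "stopped out": the loss is capped near entry minus stop
    return "HOLD"

# Walk a falling market through the check until the stop triggers.
ticks = [99.0, 97.5, 95.2, 94.8]
actions = [check_stop_loss(95.0, p) for p in ticks]
```

Whether a human broker or a computer carries out the instruction, the logic is the same; only the execution speed differs.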

Of course, a computer can perform instructions much faster than a human, but a computer doesn't have the common sense to identify problems unless explicitly programmed to do so. If the computer is not programmed to detect errors, it may accept erroneous orders and carry out the instructions—millions of times per second.

That's how a flash crash usually happens. This can happen in financial markets, with automated EDI systems or anything else for that matter. Robust error handling and risk management are two fundamental requirements for automated systems.
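The error handling and risk management Gardner calls for often take the form of pre-trade sanity checks. The sketch below, with illustrative class names and thresholds, shows two such guards: a price-deviation ("fat finger") check and an order rate limit of the kind that could have throttled a runaway order loop.

```python
from collections import deque
import time

class OrderGuard:
    """Pre-trade sanity checks for an automated order flow (illustrative)."""

    def __init__(self, max_deviation=0.10, max_orders_per_sec=100):
        self.max_deviation = max_deviation          # max fraction away from last trade
        self.max_orders_per_sec = max_orders_per_sec
        self.recent = deque()                       # timestamps of recent orders

    def allow(self, order_price, last_trade_price, now=None):
        now = time.monotonic() if now is None else now
        # Reject prices that stray too far from the last trade ("fat finger" check).
        if abs(order_price - last_trade_price) / last_trade_price > self.max_deviation:
            return False
        # Throttle runaway order loops: drop timestamps older than one second,
        # then refuse the order if the one-second window is already full.
        while self.recent and now - self.recent[0] > 1.0:
            self.recent.popleft()
        if len(self.recent) >= self.max_orders_per_sec:
            return False
        self.recent.append(now)
        return True
```

A real trading system would layer many more controls (position limits, kill switches, exchange-side checks); the point is that every order passes a gate before it reaches the market.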

We know that, in many cases, these high-frequency trading applications are developed under time pressure, with little oversight or compliance. The resulting accidental bugs are how we get into this mess. Little oversight and compliance get me thinking about purposeful bugs. Is there a risk that someone uses these algorithms to defraud a company? If so, what are some things you expect companies to do to mitigate this threat? How many do you see doing it?

Gardner: Yes, there is a risk that someone may purposefully introduce bugs to defraud a company. It's just like the movie Office Space, in which employees who hate their jobs infect the accounting system with a computer virus designed to divert fractions of pennies into a bank account they control. We recommend routine and random auditing by an outside group to review everything from the top down, including logs, network security and source code. Employees should know about these random audits, expect them and be ready for them.
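One concrete check an outside auditor might run against that Office Space-style "salami slicing" scheme is a reconciliation of individual transactions against the posted total. This is a hedged sketch with illustrative names, not any real audit tool.

```python
from decimal import Decimal

def reconcile(transactions, ledger_total):
    """Compare the sum of individual transaction amounts (as decimal strings)
    against the posted ledger total. A scheme that diverts fractions of
    pennies shows up as a persistent sub-cent discrepancy."""
    computed = sum((Decimal(t) for t in transactions), Decimal("0"))
    return Decimal(ledger_total) - computed  # zero means the books balance
```

Using `Decimal` rather than binary floats matters here: the discrepancies being hunted are smaller than ordinary floating-point rounding noise.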

"Checking Into Work…Should Be Like Going Through Airport Screening"

In your experience securing companies, what do you do first?

Gardner: The first step is to create a list of intellectual property that needs to be protected. The list needs to be thorough and should include intangible assets such as email, sales lists, employee data, company forecasts, source code, VoIP communications and trade secrets, such as trading system algorithms or hardware schematics.
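That inventory step can be sketched as a simple risk register. The structure below is an assumption for illustration—a plain list of assets, each carrying Gardner's binary secure/insecure classification and its known threats.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    secure: bool        # classified secure or insecure, nothing in between
    threats: tuple = () # e.g. ("backdoors", "weak passwords")

def insecure_assets(register):
    """Assets the assessment should prioritize: anything not affirmatively secure."""
    return [a.name for a in register if not a.secure]

register = [
    Asset("source code", secure=False, threats=("backdoors", "repo leak")),
    Asset("email", secure=True),
    Asset("sales lists", secure=False, threats=("insider theft",)),
]
```

Even a spreadsheet-level register like this forces the binary classification Gardner recommends, instead of letting assets sit in an unexamined "probably fine" middle ground.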

It's important to list everything as either secure or insecure, never in-between. That's because security holes are often found wherever the risk of inaction is weighed against the cost of action differently down the pipeline. Remember, hackers look for back doors first, not front doors. Then we create a list of potential threats, such as weak network security, insecure or easy-to-steal passwords, malware, spyware, Internet logging, accidental or intentional backdoors in software, gaps in physical security and haphazard employee behavior.

I realize that all companies are different, but, to follow up, can you tell me some stories of what you found in those assessments, and what corrective action you recommended?


Gardner: What makes it easy for thieves to target software and networks is the virtually guaranteed existence of one or more security holes. But even when no vulnerability can be found, thieves may resort to wiretaps, bugs or even dumpster diving. We have found everything from cell phone bugs, key loggers and spyware to software backdoors.

Haphazard employees can be a problem, too. For example, software developers often turn to search engines and programming communities for help, pasting entire segments of company program code on public forums. Or an employee may use the same password at work and with other accounts at home, on a less secure connection. Usually these are unintentional security mishaps, but what we've found is that humans are less predictable than computers, and companies may benefit by requiring security training for their employees.

What does a security breach look like?

Gardner: The most serious security breaches are usually carried out by insiders who know the system. The FBI's Insider Threat page outlines what that type of breach looks like. It lists several motives and real-life examples, such as 42-year-old Sergey Aleynikov, a computer programmer who worked on Wall Street from May 2007 until June 2009 and stole millions of dollars' worth of proprietary source code. The company discovered irregularities through its routine network monitoring system and informed the authorities.

One of our clients on Wall Street contacted us after facing a similar problem—two computer programmers had gained access to invaluable trading system source code by hacking into a server. After the fact, we learned that the client's server shared the same password as the one that was used on the programmers' source code repository.

Once you found those breaches, what did you do to address the problems?

Gardner: The client changed its server password, but it was too late. It's a painful reminder that intellectual property and trade secrets cannot be retracted once they are stolen. Intellectual property is not a tangible thing that can be tracked down and returned to its rightful owner.

How are security and privacy related in your mind? What should companies be doing with regard to privacy—and who should be deciding?

Gardner: There's been an IEEE Symposium on Security and Privacy since 1980. It's not exactly the theory of relativity, but security and privacy may be one and the same, or two separate things, depending on place and time.

Security and privacy are one and the same while at home or off work. But while at work or during work hours, security and privacy are two separate concerns—as long as someone is gainfully employed, security takes precedence over privacy to a certain degree. In principle, checking into work should be a lot like going through airport screening.

To Prepare for Cyberwar, "Leave No Doors Open"

Earlier you mentioned Kaspersky Lab. In the current issue of Wired, Eugene Kaspersky claimed to have found the first known cyberweapon: a Trojan designed to attack Iran's computing centers. Working backwards, this implies that the only active cyber warfare program is in the United States. Do you think that's accurate? Do you think the threat of cyber warfare is overblown—and, if not, to what kinds of risks are the world's financial systems exposed?


Gardner: The threat of cyber warfare is real. The U.S. military is aggressively expanding preparations for cyber warfare, even as other defense programs are being trimmed. Taking cyber espionage and sabotage into account, the United States probably isn't the only country with an active cyber warfare program. For example, in 2011 MI6 reportedly infiltrated an Al Qaeda server and replaced the recipe for making a pipe bomb with a recipe for baking cupcakes.

In most of these examples, the attacker is a country and/or its intelligence agency, and the victim a sophisticated organization. What, and how, should a typical IT shop respond? Say the shop has credit card information, Social Security numbers, an Internet presence and some government contracts. How do you assess the risk to the organization? How do you compare internal risks to external risks?

Gardner: Risk is dynamic and subject to constant, often unquantifiable change. While a company may be targeted by a hostile nation for working on a small government contract, Joe's Barbershop may get hit with a relatively severe attack from hacktivists over a socially motivated concern.

Although there are risk assessment methodologies such as OCTAVE, ISO 27005 and the National Institute of Standards and Technology's NIST SP 800-30, it's best to leave no doors open for adversaries to exploit. Credit card info should not be saved. Confidential information should be encrypted. DDoS mitigation plans and options should be in place. Networks and code should have no security holes. Random and routine audits should be performed.
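Gardner's first recommendation—"credit card info should not be saved"—is usually implemented through tokenization: the system keeps a non-reversible token derived from the card number rather than the number itself. The sketch below uses Python's standard `hmac` module; the key handling is illustrative only, and a real deployment would use a PCI-compliant tokenization service backed by a key-management system.

```python
import hashlib
import hmac
import os

# Illustrative only: in production the key comes from a key-management
# system, never from process memory generated at startup.
SECRET_KEY = os.urandom(32)

def tokenize_pan(pan: str) -> str:
    """Derive a stable, non-reversible token for a card number (PAN)."""
    return hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()

def matches(pan: str, token: str) -> bool:
    """Constant-time comparison of a presented PAN against a stored token."""
    return hmac.compare_digest(tokenize_pan(pan), token)
```

With this approach a database breach exposes only tokens; without the secret key, the original card numbers cannot be recovered from them.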


It's important to understand what is at risk. A company's data, trade secrets, operations and reputation are at varying degrees of risk at all times.

Is there something different a software leader should be doing than a security leader? Can't we just leave it up to the PCI compliance guy? If not, what should we be doing? My experience with a lot of this is too much fake security that slows down software development without adding value. How can we separate the wheat from the chaff?

Gardner: Security-minded programming needs to be an integral part of software development, starting from the conceptual design phase. Software projects should also have security experts on the team, so developers can turn to them for guidance every step of the way.

Where can readers learn more about threat and risk assessment and mitigation?

Gardner: We maintain a collection of links to white papers hosted by NIST.

Matthew Heusser is a consultant and writer based in West Michigan. You can follow Matt on Twitter @mheusser, contact him by email or visit the website of his company, Excelon Development.
