We’ve all met that person in business who feels that IT isn’t that important. A couple of weeks ago, we saw how a failure in IT could cost millions of dollars and bring an entire company to its knees.
In the wake of the near-collapse of Knight Capital Group, there has been much discussion of the merits and dangers of high-frequency trading. HFT, a major pillar of computational finance, relies increasingly on high-performance computing resources to process financial transactions at blistering speeds. And although many believe that the increased trading volume and market liquidity benefit the market and the economy as a whole, this latest incident has once again cast serious doubt on the practice by exposing its disturbing side effects.
High-frequency trading involves processing massive numbers of trades in short spans of time – sometimes executing millions of trades in a single second. The technology behind this incredible processing power? High-speed HPC clusters, low-latency network connections, and sophisticated analytics software. Although the technology itself has been around for roughly a decade, Wall Street's adoption has only grown in earnest in the last four or five years.
At the time of the above article’s release, the exact cause of the problem was not yet known. Now, it has emerged that a software update was to blame.
The tremendous power of software was illustrated last week at Knight Capital. A software upgrade to the NYSE-listed company's financial trading system reactivated an algorithm from an old program, producing erratic trading behaviour and causing US$440 million in losses in just 45 minutes.
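The reported failure mode – old, supposedly retired code brought back to life during a partial upgrade – can be sketched in a few lines. The flag name, routing functions, and order counts below are hypothetical illustrations, not a description of Knight's actual system:

```python
# Hypothetical sketch: a repurposed deployment flag silently reactivates
# dead code on servers that missed the upgrade. All names are invented
# for illustration.

def new_router(order):
    # New code path: executes the parent order once.
    return [f"EXECUTE {order}"]

def legacy_router(order):
    # Obsolete code path that was never deleted: floods the market with
    # child orders because it ignores fill confirmations.
    return [f"EXECUTE {order}" for _ in range(1000)]

def route(order, flags, server_has_new_code):
    # The old flag was repurposed to enable the new router. On a server
    # where the upgrade was never deployed, the same flag now triggers
    # the forgotten legacy path instead.
    if flags.get("REUSED_FLAG"):
        if server_has_new_code:
            return new_router(order)
        return legacy_router(order)  # dead code, resurrected
    return []

# One stale server is enough: the same parent order fans out 1000x.
print(len(route("AAPL BUY 100", {"REUSED_FLAG": True}, server_has_new_code=False)))
```

The point of the sketch is that nothing "breaks" in the conventional sense: every server runs exactly the code it has, and the erratic behaviour emerges only from the mismatch between configuration and deployment.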
That amounted to nearly US$10 million lost every minute and brought the company to its knees. Just imagine: in the time most of us take for lunch, a respected company with 1,400 employees was plunged into crisis.
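The per-minute figure is simple arithmetic worth checking:

```python
loss = 440_000_000   # reported loss in US dollars
minutes = 45         # duration of the runaway trading

per_minute = loss / minutes
print(f"${per_minute:,.0f} per minute")  # → $9,777,778 per minute
```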
Although Knight asked the SEC to cancel the trades, the SEC took the stance that the trades would stand.
Tech lawyers read cases about the tremendous financial losses poor software can cause (one of the key cases involved billing software created for a municipal water supplier), and we are reminded that software contracts need to be drafted carefully and that software testing can never be compromised.
IT hardware is the foundation of computational finance, and software is the second foundational layer that HFT rests on. Together they form a complete IT-computational ecosystem: both layers need to function properly for the applications, and the company, to stand. A vulnerability in either one (in this case the software) can bring everything down.
We’ve been working on developing an exciting line of hardware solutions for computational finance. The initial production systems have already been deployed. Our systems have been thoroughly tested, benchmarked, and fault-protected.
But no matter how good the hardware is, the software that runs on it needs to be just as flawless for the entire ecosystem to support the business.
Have you had experiences with a failure in your financial IT ecosystem? How did you fix it?
Moreover, with all of these questions about the nature of high-frequency trading, do you believe it's a good thing? Or is the instability that can result too great a danger, even with the most solid technology base?