The CFTC is currently considering Regulation AT (for Automated Trading). It is the Commission’s attempt to get a handle on HFT and algorithmic trading.
By far the most controversial aspect of the proposed regulation is the CFTC’s demand that algo traders provide the Commission with their source code. Given the sensitivity of this information, algo/HFT firms are understandably freaking out over this demand.
Those concerns are certainly legitimate. But what I want to ask is: what’s the point? What can the Commission actually accomplish?
The Commission argues that by reviewing source code, it can identify possible coding errors that could lead to “disruptive events” like the 2012 Knight Capital fiasco. Color me skeptical, for at least two reasons.
First, I seriously doubt that the CFTC can attract people with the coding skill necessary to track down errors in trading algorithms, or can devote the time necessary. Reviewing the code of others is a difficult task, usually harder than writing the code in the first place; the code involved here is very complex and changes frequently; and the CFTC is unlikely to be able to devote the resources necessary for a truly effective review. Further, who has the stronger incentive? A firm that can be destroyed by a coding error, or some GS-something? (The prospect of numerous individuals perusing code creates the potential for misappropriation of intellectual property, which is what really has the industry exercised.) Not to mention that if you really have the chops to code trading algos, you’ll work for a prop shop or Citadel or Goldman or whomever and make much more than a government salary.
Second, and more substantively, reviewing individual trading algorithms in isolation is of limited value in determining their potentially disruptive effects. These individual algorithms are part of a complex system, in the technical/scientific meaning of the term. These individual pieces interact with one another, and create feedback mechanisms. Algo A takes inputs from market data that is produced in part by Algos B, C, D, E, etc. Based on these inputs, Algo A takes actions (e.g., enters or cancels orders), and Algos B, C, D, E, etc., react. Algo A reacts to those reactions, and on and on.
These feedbacks can be non-linear. Furthermore, the dimensionality of this problem is immense. Basically, an algo says if the state of the market is X, do Y. Evaluating algos in toto, the state of the market can include the current order books of every product, as well as past order books (both explicitly, as a condition in some algorithms, and implicitly, through the empirical analysis that developers use to find profitable trading rules based on historical market information), as well as market news. This state changes continuously.
Given this dimensionality and feedback-driven complexity, evaluating trading algorithms in isolation is a fool’s errand. Stability depends on how the algorithms interact. You cannot determine the stability of an emergent order, or its vulnerability to disruption, by looking at the individual components.
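The feedback point can be made with a toy sketch. The Python snippet below is entirely hypothetical (the momentum rule, the parameters, and the linear price-impact assumption are all made up for illustration, and resemble no real trading system): it simulates a simple “if the market did X, do Y” rule that is perfectly stable when reviewed in isolation, but destabilizing when several copies of it interact.

```python
# Purely illustrative toy model: each "algo" is a hypothetical momentum
# rule that buys when the last price move was up, sells when it was down.

def momentum_algo(scale):
    """A rule of the form 'if the state of the market is X, do Y'."""
    def algo(prices):
        if len(prices) < 2:
            return 0.0
        # Chase the most recent price move.
        return scale * (prices[-1] - prices[-2])
    return algo

def simulate(algos, shock=-1.0, ticks=30, impact=0.01):
    """Each tick, net order flow from all algos moves the price
    (assumed linear impact); every algo then reacts to that move."""
    prices = [100.0, 100.0 + shock]  # a small one-time shock
    for _ in range(ticks):
        net_flow = sum(algo(prices) for algo in algos)
        prices.append(prices[-1] + impact * net_flow)
    return prices

# The rule reviewed in isolation looks benign: the shock decays away.
alone = simulate([momentum_algo(20.0)])
# Six identical copies interacting: the same shock now feeds on itself
# and the price spirals -- instability is a property of the interaction.
crowd = simulate([momentum_algo(20.0)] * 6)
print(round(alone[-1], 2))  # ends near the pre-shock price
print(round(crowd[-1], 2))  # ends far below it
```

Nothing about the individual rule changes between the two runs; only the interaction does. That is exactly why inspecting each algorithm’s source code in isolation tells a regulator little about systemic stability.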
And since humans are still part of the trading ecosystem, how software interacts with meatware matters too. Fat-finger problems are one example, but even normal human reactions to market developments can be destabilizing. This is true when all of the actors are human: it’s also true when some are human and some are algorithmic.
Look at the Flash Crash. Even in retrospect it has proven impossible to establish definitively the chain of events that precipitated it and caused it to unfold the way that it did. How is it possible to evaluate prospectively the stability of a system under a vastly larger set of possible states than those that existed on the day of the Flash Crash?
These considerations mean that the CFTC–or any regulator–has little ability to improve system stability even if given access to the complete details of important parts of that system. But it’s potentially worse than that. Ill-advised changes to pieces of the system can make it less stable.
This is because in complex systems, attempts to improve the safety of individual components of the system can actually increase the probability of system failure. A risk control that causes each individual algorithm to pull its quotes at the first sign of trouble, for example, can synchronize liquidity withdrawal across the market, turning a local shock into a system-wide one.
In sum, markets are complex systems/emergent orders. The effects of changes to parts of these systems are highly unpredictable. Furthermore, it is difficult, and arguably impossible, to predict how changes to individual pieces of the system will affect the behavior of the system as a whole under all possible contingencies, especially given the vastness of the set of contingencies.
Based on this reality, we should be very chary about letting any regulator attempt to micromanage pieces of this complex system. Indeed, any regulator should be reluctant to undertake this task. But regulators frequently overestimate their competence, and financial regulators have proven time and again that they really don’t understand that they are dealing with a complex system/emergent order that does not respond to their interventions in the way that they intend. But fools rush in where angels fear to tread, and if the Commission persists in its efforts to become the Commissar of Code, it will be playing the fool–and it will not just be algo traders that pay the price.