Streetwise Professor

June 15, 2016

Where’s the CFTC’s Head AT?: Fools Rush in Where Angels Fear to Tread

Filed under: Commodities,Derivatives,Economics,Exchanges,Financial crisis,HFT,Regulation — The Professor @ 1:07 pm

The CFTC is currently considering Regulation AT (for Automated Trading). It is the Commission’s attempt to get a handle on HFT and algorithmic trading.

By far the most controversial aspect of the proposed regulation is the CFTC’s demand that algo traders provide the Commission with their source code. Given the sensitivity of this information, algo/HFT firms are understandably freaking out over this demand.

Those concerns are certainly legitimate. But what I want to ask is: what’s the point? What can the Commission actually accomplish?

The Commission argues that by reviewing source code, it can identify possible coding errors that could lead to “disruptive events” like the 2012 Knight Capital fiasco. Color me skeptical, for at least two reasons.

First, I seriously doubt that the CFTC can attract people with the coding skill necessary to track down errors in trading algorithms, or can devote the time necessary. Reviewing the code of others is a difficult task, usually harder than writing the code in the first place; the code involved here is very complex and changes frequently; and the CFTC is unlikely to be able to devote the resources necessary for a truly effective review. Further, who has the stronger incentive? A firm that can be destroyed by a coding error, or some GS-something? (The prospect of numerous individuals perusing code creates the potential for a misappropriation of intellectual property, which is what really has the industry exercised.) Not to mention that if you really have the chops to code trading algos, you’ll work for a prop shop or Citadel or Goldman or whomever and make much more than a government salary.

Second, and more substantively, reviewing individual trading algorithms in isolation is of limited value in determining their potentially disruptive effects. These individual algorithms are part of a complex system, in the technical/scientific meaning of the term. These individual pieces interact with one another, and create feedback mechanisms. Algo A takes inputs from market data that is produced in part by Algos B, C, D, E, etc. Based on these inputs, Algo A takes actions (e.g., enters or cancels orders), and Algos B, C, D, E, etc., react. Algo A reacts to those reactions, and on and on.

These feedbacks can be non-linear. Furthermore, the dimensionality of this problem is immense. Basically, an algo says if the state of the market is X, do Y. Evaluating algos in toto, the state of the market can include the current order books of every product, as well as past order books (both explicitly, as a condition in some algorithms, and implicitly, through the empirical analysis that the developers use to find profitable trading rules based on historical market information), as well as market news. This state changes continuously.
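To make the feedback point concrete, here is a toy sketch of my own (it is not drawn from the regulation or from any actual trading system, and the numbers are purely illustrative): two reactive strategies, each of which damps a shock when run alone, jointly turn the same shock into a runaway move because each amplifies the other’s price impact.

def feedback_returns(gains, steps=50, impact=1.0, shock=0.01):
    # Each "algo" submits order flow equal to its gain times the last price move;
    # net flow moves the price by impact * flow. Any single algo with
    # impact * gain < 1 damps the shock, but the loop is unstable whenever
    # impact * sum(gains) > 1.
    ret, path = shock, []
    for _ in range(steps):
        flow = sum(g * ret for g in gains)   # every algo reacts to the same last move
        ret = impact * flow                  # their combined reaction becomes the next move
        path.append(ret)
    return path

print(max(abs(r) for r in feedback_returns([0.6])))        # ~0.006: one algo, the shock dies out
print(max(abs(r) for r in feedback_returns([0.6, 0.6])))   # ~91: two such algos, the shock explodes

Each component passes any test you could run on it in isolation; the instability lives entirely in the interaction.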

Given this dimensionality and feedback-driven complexity, evaluating trading algorithms in isolation is a fool’s errand. Stability depends on how the algorithms interact. You cannot determine the stability of an emergent order, or its vulnerability to disruption, by looking at the individual components.

And since humans are still part of the trading ecosystem, how software interacts with meatware matters too. Fat finger problems are one example, but just normal human reactions to market developments can be destabilizing. This is true when all of the actors are human: it’s also true when some are human and some are algorithmic.

Look at the Flash Crash. Even in retrospect it has proven impossible to establish definitively the chain of events that precipitated it and caused it to unfold the way that it did. How is it possible to evaluate prospectively the stability of a system under a vastly larger set of possible states than those that existed on the day of the Flash Crash?

These considerations mean that the CFTC–or any regulator–has little ability to improve system stability even if given access to the complete details of important parts of that system. But it’s potentially worse than that. Ill-advised changes to pieces of the system can make it less stable.

This is because in complex systems, attempts to improve the safety of individual components of the system can actually increase the probability of system failure.

In sum, markets are complex systems/emergent orders. The effects of changes to parts of these systems are highly unpredictable. Furthermore, it is difficult, and arguably impossible, to predict how changes to individual pieces of the system will affect the behavior of the system as a whole under all possible contingencies, especially given the vastness of the set of contingencies.

Based on this reality, we should be very chary about letting any regulator attempt to micromanage pieces of this complex system. Indeed, any regulator should be reluctant to undertake this task. But regulators frequently overestimate their competence, and financial regulators have proven time and again that they really don’t understand that they are dealing with a complex system/emergent order that does not respond to their interventions in the way that they intend. But fools rush in where angels fear to tread, and if the Commission persists in its efforts to become the Commissar of Code, it will be playing the fool–and it will not just be algo traders that pay the price.


11 Comments »

  1. I would love to be forced to hand in my APL, or k/kdb/q, source and see exactly what the CFTC thinks I was doing…. if they even got close I’d hire ’em away

    It once took me 4 hours to explain to a guy how I was (using what is nowadays called a “fold” but back in the day was called an “APL defined function reduce”) populating and evaluating a CRR tree for multiple strikes in parallel… to one of the guys whose initials are in CRR, who was himself an APL programmer. (A sketch of the idea follows below.)

    Comment by TonyC — June 15, 2016 @ 4:39 pm
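    For readers unfamiliar with the technique the comment describes, here is a minimal sketch of the same idea in Python rather than APL/q (the function name and parameters are mine, purely illustrative): pricing European calls for several strikes at once on a single CRR tree, with the backward induction expressed as a fold.

    import numpy as np
    from functools import reduce

    def crr_calls(S0, strikes, r, sigma, T, n):
        # CRR lattice parameters
        dt = T / n
        u = np.exp(sigma * np.sqrt(dt))
        d = 1.0 / u
        p = (np.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
        disc = np.exp(-r * dt)
        # Terminal payoffs for all strikes at once: shape (n+1 leaves, number of strikes)
        j = np.arange(n + 1)
        S_T = S0 * u ** j * d ** (n - j)
        V = np.maximum(S_T[:, None] - np.asarray(strikes, float)[None, :], 0.0)
        # One backward-induction step: discounted expectation over the up/down children,
        # vectorized across strikes
        step = lambda values, _: disc * (p * values[1:] + (1 - p) * values[:-1])
        # The fold: collapse the tree one layer at a time; the root row is the price vector
        return reduce(step, range(n), V)[0]

    print(crr_calls(100.0, [90, 100, 110], r=0.05, sigma=0.2, T=1.0, n=500))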

  2. Idiots.

    And, of course, we are constantly tweaking our algorithms, so the code is always changing.

    The average regulator can scarcely explain how a plain vanilla equity option works.

    Comment by Methinks — June 15, 2016 @ 4:56 pm

  3. There’s another potentially severe problem with this imbecilic plan.

    Government servers are notoriously easy to hack. Imagine the potential danger of centralizing code on government servers where hackers (who have a much better understanding of code than the CFTC) can easily steal it and use it to destabilize markets in ways previously unimagined.

    Comment by Methinks — June 15, 2016 @ 5:14 pm

  4. Huge overreach, emboldened by Dodd-Frank. I don’t care what anyone thinks about electronic trading, this is wrong. It seems bureaucrats always solve problems backwards. Inconsistencies and disruptions in today’s marketplace are the result of legacy regulation and entrenched legacy market structure, not HFT.

    Comment by @pointsnfigures — June 17, 2016 @ 8:19 am

  5. This article is shamefully shortsighted. You are missing the whole point. If the code is written to enact trading strategies that violate CFTC rules, it wouldn’t take 10 years and millions of dollars to try cases on predatory trading. You’ve lost sight of the forest for the trees. Did you use any sources for this piece of propaganda other than Don Wilson and Ray Cahnman?

    Comment by KIR — June 17, 2016 @ 9:10 am

  6. Where should I start? With the absurd premise that a regulatory agency that lacks funding and capability should no longer attempt to exercise its authority?
    By this ridiculous logic, police forces would retreat from any situation in which they are outgunned.
    Regulatory agencies are woefully inadequate because they cannot piece together these events after they happen. They need to prevent them from happening. This is only possible by identifying and removing programs designed to break existing rules.
    The CFTC needs to get out of the punishment game and into the prevention game. It’s the only option when you have limited resources and capabilities. The article is shameful, irresponsible, and intellectually dishonest. A thousand degrees in economics do not change these facts.
    But what do I know? I pay out of my own pocket to travel to DC and speak to the CFTC. These guys get paid to do it.

    Comment by KIR — June 17, 2016 @ 10:17 am

  7. Hi Craig,
    Great write-up, I couldn’t agree more.

    I linked to you and provided some more thoughts and elaboration here:
    https://blog.rethinkrisk.net/2016/06/17/reg-at-dont-go-there/

    “Knight’s problem wasn’t even a coding error. Knight’s code worked—it was just deployed incorrectly. If that sounds like splitting hairs, that’s precisely the point. These systems are so complicated that code divorced from configuration files and deployment procedures is essentially meaningless.”

    What drives me a little bit crazy is that there’s already an industry (and a regulator) that does a pretty good job managing these sorts of risks – aviation. https://blog.rethinkrisk.net/2015/01/06/preventing-crashes/

    Thanks,
    Chris

    Comment by Chris Clearfield — June 17, 2016 @ 1:35 pm

  8. @KIR. What? Are you Cartman? Saying I need to respect the Commission’s authori-TAH?

    1. Not gonna happen.
    2. You clearly don’t get it. First, as proposed, Reg AT is not focused on manipulative code (i.e., “programs designed to break existing rules”). It is intended (inter alia) to prevent market disruptions arising from non-manipulative, but somehow defective, computer code–the algorithmic equivalent of a fat finger error.

    Second, and more importantly, the whole point of my post is that prevention is impossible. Set aside for a moment whether it is efficient to spend the large amounts of money that could be required to hire the large number of highly skilled programmers to identify coding errors. As I show in the post, in a complex system/emergent order like a financial market, looking at individual programs is not sufficient to identify potentially disruptive code, because it is the interaction between myriad pieces of software and meatware that determines the stability of the system.

    And as I note in the post, the conceit that it is possible for regulators to impose “safety features” on software that is part of a complex system can itself create risks. Please get at least a superficial understanding of complex systems before venting.

    You truly miss the boat: if regulators can’t even piece together things after the fact, how can they possibly anticipate problems in a complex system where the number of relevant states is beyond immense?

    To reiterate the theme of the post: prevention is a fool’s errand, and a place where angels fear to tread.

    You are seriously ignorant about these issues, but that doesn’t stop you from bloviating. Stop it before you embarrass yourself further.

    Comment by The Professor — June 17, 2016 @ 5:57 pm

  9. @KIR-Again, you misstate the regulation. Reg AT is not aimed only at manipulative code: it is aimed at all code. It is not intended to prevent only manipulative trading software: it is intended to prevent all sorts of software-caused market disruptions. Have you even read the regulation?

    Insofar as prevention vs. deterrence of manipulation is concerned, I wrote extensively about this 20 years ago. Ex post deterrence is actually far more efficient in cases like manipulation, for a variety of reasons that I analyzed in my manipulation book. Read it and we’ll talk.

    And as for the source of this piece: I am perfectly capable of developing these arguments without anyone’s assistance, and indeed I did. And as for insinuating otherwise, and for asserting that I am spewing propaganda rather than advancing a serious intellectual argument, please take this in the spirit in which it is intended: fuck off.

    Comment by The Professor — June 17, 2016 @ 6:10 pm

  10. @Chris Clearfield-Thanks much. I had you in mind when I wrote the post, actually. Glad you like it.

    A lot of things drive me crazy. Of course, it’s a short trip.

    Comment by The Professor — June 17, 2016 @ 6:17 pm

  11. Just to make myself extremely unpopular here: just ban HFT outright. As regards the ‘real’ economy, it’s unproductive. I agree with the complexity argument, so removing all of it creates a level playing field, which is what capitalism is supposed to be about, isn’t it? Incidentally, the same goes for exotic financial derivatives and structured products, which create huge systemic risk, mainly (if last time is any indication) to the public, not the players.

    Comment by Hugh Barnard — June 22, 2016 @ 9:01 am
