Streetwise Professor

October 23, 2015

Massad’s Recent Speech: Flashy, But Misleading, and Beside the Point

Filed under: Derivatives,Economics,HFT,Regulation — The Professor @ 8:53 pm

The other day CFTC Chairman Timothy Massad gave a speech about “flash events” in futures markets that has attracted a lot of attention. Most of the attention was given to Massad’s claim that there had been 35 flash events in WTI futures this year, and between 9 and 25 events per year combined in corn, crude, e-minis, 30-year Treasuries, gold, and the Euro from 2010-2014. Flashy results indeed. But the method for identifying them is misleading, and makes big flash moves seem more likely than they really are.

These results, and specifically the WTI finding for 2015, are an artifact of the definition of a flash event (which Massad acknowledged is somewhat arbitrary):

[E]pisodes in which the price of a contract moved at least 200 basis points within a trading hour— but returned to within 75 basis points of the original or starting price within that same hour.

The problem is that the number of flash events will depend on volatility.  Two percent moves are more likely in high volatility environments, or for high volatility contracts.

This is clearly what’s going on in oil. As this chart of the oil volatility index (OVX) shows, oil volatility was extremely low through most of 2014, but increased sharply in late-2014 through mid-2015, and then has picked up again in recent months:

[Chart: CBOE Crude Oil Volatility Index (OVX), 2014-2015]

With volatility in the 60-70 percent annualized range, you will have a much greater likelihood of a 200 basis point move (and a subsequent 125 bp or so reversal) than with 15 percent vols. The flashy 2015 crude oil results are a reflection of this year’s high underlying volatility, which has been fundamentals driven, rather than the microstructure of modern electronic markets.

The 200/75 basis point standard was chosen because that’s what happened in the Treasury market on 15 October, 2014. But a 200 basis point move in something like Treasuries, which have a volatility of around 10 percent, is a move of far more standard deviations than a 200 basis point move in crude, especially with a volatility of 70 percent. So the more appropriate cutoff would have been standard deviations (sigmas) rather than percent. But if Massad had done that, he would have identified a lot fewer events, and his speech would have been met with yawns, rather than the attention it has received.
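To make the comparison concrete, here is a back-of-envelope sketch (my own, assuming roughly 252 trading days of about 20 trading hours each, and treating the intra-hour move as an hourly return) that converts an annualized volatility into an hourly sigma:

```python
import math

def hourly_sigma(annual_vol, days=252, hours_per_day=20):
    # Scale an annualized volatility down to a per-trading-hour standard deviation.
    return annual_vol / math.sqrt(days * hours_per_day)

move = 0.02  # the 200 basis point threshold
for vol in (0.70, 0.15, 0.10):
    s = hourly_sigma(vol)
    print(f"annual vol {vol:.0%}: hourly sigma ~ {s * 1e4:.0f} bp, "
          f"so a 200 bp move is a {move / s:.1f}-sigma event")
```

At 70 percent vol, the 200 basis point threshold is roughly a 2-sigma event, i.e., routine; at Treasury-like 10 percent vol, it is on the order of 14 sigmas, which under anything resembling normality essentially never happens.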

Let’s also put things in perspective. The contracts considered trade 17-23 hours per day. 252 days a year times (say) 20 hours per day times 6 contracts is about 30,240 contract-hours a year, so 20 events/year implies odds of roughly .066 percent of an event in any given hour. Using a more realistic sigma standard would reduce the odds of an event comparable to the Treasury flash event to a much smaller number than that.

Put differently, the Treasury event was truly anomalous, and Massad’s way of analyzing the data makes it seem more common than it really is. To get a flashy, eye-catching result, Massad had to use a misleading standard to identify flash events. Objects in his mirror are smaller than they appear.

The jumping-off point for Massad’s speech was the report on the 2014 Treasury flash crash. As in the infamous May, 2010 equity flash crash, there was a sharp decline in liquidity leading up to the price break. Massad attributes this to the way algorithms are programmed:

We also know that as with humans, the modern algorithms have risk management capabilities embedded within them. So when there is a moment of sudden, unexpected volatility, it may not be surprising that some in the market pull back – potentially faster than a human can.

The report describes how on October 15, some algos pulled back by widening their spreads and others reduced the size of their trading interest. Whether such dynamics can further increase volatility in an already volatile period is a question worth asking, but a difficult one to answer. It is also very difficult for individual institutions of any type to remain in the book, opposing price headwinds, or worse, to try and catch the proverbial falling knife. For many, this decision can be the difference between risk mitigation and significant losses.

This makes perfect sense. Some algorithms-especially HFT algorithms-attempt to determine when order flow is becoming toxic (and hence adverse selection risks are elevated) and reduce exposures when they detect rising toxicity. Holding depth constant, greater information flow makes prices more volatile, and the reduction in liquidity that the greater information flow causes makes prices even more volatile.

This means that looking at the depth reductions and associated increases in volatility focuses on a symptom, not the underlying cause. What deserves more attention is what causes the increase in the informativeness of order flow that makes the liquidity suppliers cut back. This hasn’t been done in any study, to my knowledge, nor is it likely to be possible to do so.

And as Massad notes, this phenomenon is not unique to electronic markets. Meat puppet market makers also take a powder when adverse selection risks rise:

Contrary to what some have suggested, I suspect it was difficult for market makers in the pre-electronic era to routinely maintain tight and deep spreads during volatile conditions. They likely took long coffee breaks.*

It’s beyond suspicion, actually. It happens. Look at the Crash of ’87 when locals fled the pits and OTC market makers stopped answering their phones.

These reductions in liquidity are inherent in any trading environment where private information is important, and the rate of information flow varies.  Regardless of trading technology or market microstructure, liquidity suppliers will cut the sizes of their quotes, or stop quoting altogether, when order flow turns very toxic.

Given all this, Massad’s policy prescriptions are oddly disconnected from the flash phenomenon that prompted his talk:

The focus of our forthcoming proposals will be on the automation of order origination, transmission and execution – and the risks that may arise from such activity. These risks can come about due to malfunctioning algorithms, inadequate testing of algos, errors and similar problems. We are concerned about the potential for disruptive events and whether there are adequate measures to ensure effective compliance with risk controls and other requirements.

Now of course, you could have errors before, in the days of pit traders and specialists. You could have failures of systems in less sophisticated times. But generally the consequences were of lesser magnitude than what we may face today. And that’s in large part because the errors were easier to identify, arrest or cure before they caused widespread damage.

I expect that our proposals will include requirements for pre-trade risk controls and other measures with respect to automated trading. These will apply regardless of whether the automated trading is high or low frequency. We will not attempt to define high-frequency trading specifically. I expect that we will propose controls at the exchange level, and also at the clearing member and trading firm level.


That’s all great, but really beside the point. If rogue or fat-fingered algos were the problems in any of the alleged flash events Massad identified (including the Treasury event of a year ago), he would have been able to say so. But he admits that the causes of the various events are all unknown. So it’s a bait-and-switch to pose the problem of flash crashes, and then advance remedies that have nothing to do with them. It’s the regulatory equivalent of applying leeches.

In sum, Massad overstates the flash event problem, and offers policies that have nothing to do with them. The fact remains that these things are probably beyond a policy fix anyway. They inhere in the nature of trading financial instruments when order flow can become toxic.

*Gillian Tett of the FT gets Massad’s point exactly backwards:

The crucial point is that these automated trading programs — like Hal — lack human judgment. When a crisis erupts and prices churn, computers do not simply “take a long coffee break”, as Mr Massad says, and wait for common sense to return; instead they tend to accelerate trading, fuelling those flash crash swings.

Sheesh. Please read, Gillian. Massad’s point is that the algos do take a metaphorical coffee break. They don’t speed up, they pull back.




October 10, 2015

Igor Gensler Helps the Wicked Witch of the West Wing Create Son of Frankendodd

Hillary Clinton has announced her program to reform Wall Street. Again.

The actual author of the plan is said to be my old buddy, GiGi: Gary Gensler.

Gensler, if you will recall, was the Igor to Dr. Frankendodd, the loyal assistant who did the hard work to bring the monster to life. Now he is teaming with the Wicked Witch of the West Wing to create Son of Frankendodd.

There are a few reasonable things in the proposal. A risk charge on bigger, more complex institutions makes sense, although the details are devilish.

But for the most part, it is ill-conceived, as one would expect from Gensler.

For instance, it proposes regulating haircuts on repo loans. As I said frequently in the 2009-2010 period, attempting to impose these sorts of requirements on heterogeneous transactions is a form of price control that will lead some risks to be underpriced and some risks to be overpriced. This will create distorted incentives that are likely to increase risks and misallocations, rather than reduce them.

A tax on HFT has received the most attention:

The growth of high-frequency trading (HFT) has unnecessarily burdened our markets and enabled unfair and abusive trading strategies that often capitalize on a “two-tiered” market structure with obsolete rules. That’s why Clinton would impose a tax targeted specifically at harmful HFT. In particular, the tax would hit HFT strategies involving excessive levels of order cancellations, which make our markets less stable and less fair.

This is completely wrongheaded. HFT has not “burdened” our markets. It has been a form of creative destruction that has made traditional intermediaries obsolete, and in so doing has dramatically reduced trading costs. Yes, a baroque market structure in equities has created opportunities for rent seeking by HFT firms, but that structure was created by regulations, RegNMS in particular. So why not fix the rules (which Hillary and Gensler acknowledge are problematic) rather than kneecap those who are responding to the incentives the rules create?

Furthermore, the particular remedy proposed here is completely idiotic. “Excessive levels of order cancellations.” Just who is capable of determining what is “excessive”? Moreover, the ability to cancel orders rapidly is exactly what allows HFT to supply liquidity cheaply, because it limits their vulnerability to adverse selection. High rates of order cancellation are a feature, not a bug, in market making.

It is particularly ironic that Hillary pitches this as a matter of protecting “everyday investors.” FFS, “everyday investors” trading in small quantities are the ones who have gained most from the HFT-caused narrowing of bid-ask spreads.

Hillary also targets dark pools, another target of popular ignorance. Dark pools reduce trading costs for institutional investors, many of whom are investing the money of “everyday” people.

The proposal also gives Gensler an opportunity to ride one of his hobby horses, the Swaps Pushout Rule. This is another inane idea that is completely at odds with its purported purpose. It breaks netting sets and if anything makes the financial system more complex, and certainly makes financial institutions more complex. It also discriminates against commodities and increases the costs of managing commodity price risk.

The most bizarre part of the proposal would require financial institutions to demonstrate to regulators that they can be managed effectively.

Require firms that are too large and too risky to be managed effectively to reorganize, downsize, or break apart. The complexity and scope of many of the largest financial institutions can create risks for our economy by increasing both the likelihood that firms will fail and the economic damage that such failures can cause.[xiv] That’s why, as President, Clinton would pursue legislation that enhances regulators’ authorities under Dodd-Frank to ensure that no financial institution is too large and too risky to manage. Large financial firms would need to demonstrate to regulators that they can be managed effectively, with appropriate accountability across all of their activities. If firms can’t be managed effectively, regulators would have the explicit statutory authorization to require that they reorganize, downsize, or break apart. And Clinton would appoint regulators who would use both these new authorities and the substantial authorities they already have to hold firms accountable.

Just how would you demonstrate this? What would be the criteria? Why should we believe that regulators have the knowledge or expertise to make these judgments?

I have a Modest Proposal of my own. How about a rule that requires legislators and regulators to demonstrate that they have the competence to manage entire sectors of the economy, and in particular, have the competence to understand, let alone manage, an extraordinarily complex emergent order like the financial system? If some firms are too complex to manage, isn’t an ecosystem consisting of many such firms interacting in highly non-linear ways exponentially more complex to control, especially through the cumbersome process of legislation and regulation? Shouldn’t regulators demonstrate they are up to the task?

But of course Gensler and his ilk believe that they are somehow superior to those who manage financial firms. They are oblivious to the Knowledge Problem, and can see the speck in every banker’s eye, but don’t notice the log in their own.

People like Gensler and Hillary, who are so hubristic as to presume that they can design and regulate the complex financial system, are by far the biggest systemic risk. Frankendodd was bad enough, but Son of Frankendodd looks to be an even worse horror show, and is almost guaranteed to be so if Gensler is the one in charge, as he clearly aims to be.


July 15, 2015

The Joint Report on the Treasury Spike: Unanswered Questions, and You Can’t Stand in the Same River Twice

Filed under: Derivatives,Economics,HFT,Regulation — The Professor @ 11:39 am

The Treasury, Fed (Board of Governors and NYFed), SEC, and CFTC released a joint report on the short-lived spike in Treasury prices on 15 October, 2014. The report does a credible job laying out what happened, based on a deep dive into the high frequency data. But it does not answer the most interesting questions.

One thing of note, which shouldn’t really need mentioning, but does, is the report’s documentation of the diversity of algorithmic/high frequency trading carried out by what the report refers to as PTFs, or proprietary trading firms. This diversity is illustrated by the fact that these firms were both the largest passive suppliers of liquidity and the largest aggressive takers of liquidity during the October “event.” Indeed, the report documents the diversity within individual PTFs: there was considerable “self-trading,” whereby a particular PTF was on both sides of a trade. Meaning presumably that these PTFs had both aggressive and passive algos working simultaneously. So talking about “HFT” as some single, homogeneous thing is a radical oversimplification, and misleading.

But let’s cut to the chase: Whodunnit? The report’s answer? It’s complicated. The report says there was no single cause (e.g., a fat finger problem or whale trader).

This should not be surprising. In emergent orders, which financial markets are, large changes can occur in response to small (and indeed, very small) shocks: these systems can go non-linear. Complex feedbacks make attribution of cause impossible.  Although there is much chin-pulling (both in the report, and more generally) about the impact of technology and changes in market structure, the fundamental sources of feedback, and the types of participants in the ecosystem, are largely independent of technology.

Insofar as the events of 15 October are concerned, the report documents a substantial decline in market depth on both the futures market, and the main cash Treasury platforms (BrokerTec and eSpeed) in the hour following the release of the retail sales report. The decline in depth was due to PTFs reducing the size (but not the price) of their limit orders, and banks/dealers widening their quotes. Then, starting about 0930, there was a substantial order imbalance to the buy side on the futures: this initial order imbalance was driven primarily by banks/dealers. About 3 minutes later, aggressive PTFs kicked in on the buy side on both futures and the cash platforms.  Buying pressure peaked around 0939, and then both aggressive PTFs and the banks/dealers switched to the sell side. Prices rose when aggressors bought, and fell when they sold.

None of this is particularly surprising, but the report sidesteps the most important questions. In particular, what caused the acute decline in depth in the hour leading up to the big price movement, and what triggered the surge in buy orders?

The first conjecture that comes to mind is related to informed trading and adverse selection. For some reason, PTFs (or more accurately, their algos) in particular apparently detected an increase in the toxicity of order flow, or observed some other information that implied that adverse selection risk was increasing, and they reduced their quote sizes to reduce the risk of being picked off.

Did order flow become more toxic in the roughly hour-long period following the release of the retail number? The report does not investigate that issue, which is unfortunate. Since liquidity declines were also marked in the minutes before the Flash Crash, it is imperative to have a better understanding of what drives these declines. There are metrics of toxicity (i.e., of order flow informativeness), and liquidity suppliers (including HFTs) monitor them in real time. Understanding these events requires an analysis of whether variations in toxicity drive variations in liquidity, and in particular marked declines in depth.
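To give a concrete sense of what such a metric looks like, here is a minimal sketch of a volume-bucketed order flow imbalance in the spirit of VPIN (the bucket size and window are illustrative parameters, and the bucket-splitting refinements of the published procedure are omitted):

```python
import numpy as np

def toxicity(signed_volumes, bucket_volume=1000, window=50):
    """VPIN-style proxy: mean |buy - sell| / total volume over equal-volume
    buckets. signed_volumes: trade sizes, positive if buyer-initiated,
    negative if seller-initiated."""
    buys = sells = 0.0
    imbalances = []
    for v in signed_volumes:
        if v > 0:
            buys += v
        else:
            sells -= v                      # v is negative; add its size
        if buys + sells >= bucket_volume:   # close out the bucket
            imbalances.append(abs(buys - sells) / (buys + sells))
            buys = sells = 0.0
    x = np.asarray(imbalances)
    if len(x) < window:
        return x
    # rolling average: the real-time series a liquidity supplier would watch
    return np.convolve(x, np.ones(window) / window, mode="valid")
```

A market making algo would widen its quotes or cut its size as this series spikes; a sustained spike ahead of the price break is what the asymmetric information story predicts.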

Private information could also explain a surge in order imbalances. Those with private information would be the aggressors on the side of the net imbalance. In this case, the first indication of an imbalance is in the futures, and comes from the banks and asset managers. PTF net buying kicks in a few minutes later, suggesting they were extracting information from the banks’ and asset managers’ trading.

This raises the question: what was the private information, and what was the source of that information?

One problem with the asymmetric information story is the rapid reversal of the price movement. Informed trades have persistent effects. I’ve even seen, in data from some episodes, that arguably manipulative (and hence uninformed) trades that could not be identified as such had persistent price impacts. So did new information arrive that led the buyers to start selling?

A potentially more problematic explanation of events (and I am just throwing out a hypothesis here) is that increased order flow toxicity due to informed trading eroded liquidity, and this created the conditions in which pernicious algorithms could thrive. For instance, momentum triggering (and momentum following) algorithms could have a bigger impact when the market lacks depth, as then smallish imbalances can move prices substantially, which then triggers trend following. When prices get sufficiently out of line, these algos might turn off or switch directions, or other contrarian algorithms might kick in.
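A toy simulation makes the hypothesized feedback concrete (every number here is made up for illustration): in a thin book a given order imbalance moves price more, which trips the momentum algo, which adds to the imbalance.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_market(depth, steps=500, trigger=0.001):
    """Impact per unit of imbalance scales with 1/depth; a trend follower
    piles on whenever the last return breaches the trigger."""
    price, momentum, returns = 100.0, 0.0, []
    for _ in range(steps):
        imbalance = rng.normal(0, 50) + momentum   # random flow + algo flow
        dp = imbalance / depth                     # thin book => bigger move
        ret = dp / price
        price += dp
        returns.append(ret)
        momentum = 100.0 * np.sign(ret) if abs(ret) > trigger else 0.0
    return np.std(returns)

# Same random order flow in both runs, but volatility rises by more than the
# mechanical 1/depth factor, because the momentum algo only activates when
# the book is thin enough for moves to breach its trigger.
print(toy_market(depth=5000), toy_market(depth=500))
```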

These questions cannot be answered without knowing the algorithms, on both the passive and aggressive sides. What information did they have, and how did they react to it? Right now, we are just seeing their shadows. To understand the full chronology here–the decline in depth/liquidity, the surge in order imbalances from banks/dealers around 0930, the following surge in aggressive PTF buying, and the reversal in signed net order flow–it is necessary to understand in detail the entire algo ecosystem. We obviously don’t understand it, and likely never will.

Even if it was possible to go back and get a granular understanding of the algorithms and their interactions, this would be of limited utility going forward because the emergent ecosystem evolves continuously and rapidly. Indeed, no doubt the PTFs and banks carried out their own forensic analyses of the events of 15 October, and changed their algorithms accordingly. This means that even if we knew the  causal connections and feedbacks that produced the abrupt movement and reversal in Treasury prices, that knowledge will not really permit anticipation of future episodes, as the event itself will have changed the system, its connections, and its feedbacks. Further, independent of the effect of 15 October, the system will have evolved in the past 9 months. Given the dependence of the behavior of such systems on their very fine details, the system will behave differently today than it did then.

In sum, the joint report provides some useful information on what happened on 15 October, 2014, but it leaves the most important questions unanswered. What’s more, the answers regarding this one event would likely be only modestly informative going forward, because that very event likely caused the system to change. When it comes to financial markets, Heraclitus had it right: “You cannot step twice into the same river; for other waters are continually flowing in.”





April 24, 2015

A Matter of Magnitudes: Making a Matterhorn Out of a Molehill

Filed under: Derivatives,Economics,HFT,Politics,Regulation — The Professor @ 10:47 am

The CFTC released its civil complaint in the Sarao case yesterday, along with the affidavit of Cal-Berkeley’s Terrence Hendershott. Hendershott’s report makes for startling reading. Rather than supporting the lurid claims that Sarao’s actions had a large impact on E Mini prices, and indeed contributed to the Flash Crash, the very small price impacts that Hendershott quantifies undermine these claims.

In one analysis, Hendershott calculates the average return in a five second interval following the observation of an order book imbalance. (I have problems with this analysis because it aggregates all orders up to 10 price levels on each side of the book, rather than focusing on away-from-the-market orders, but leave that aside for a moment.) For the biggest order imbalances-over 3000 contracts on the sell side, over 5000 on the buy side-the return impact is on the order of .06 basis points. Point zero six basis points. A basis point is one-one-hundredth of a percent, so we are talking about 6 ten-thousandths of one percent. On the day of the Flash Crash, the E Mini was trading around 1165. A .06 basis point return impact therefore translates into a price impact of .007, which is one-thirty-fifth of a tick. And that’s the biggest impact, mind you.

To put the comparison another way, during the Flash Crash, prices plunged about 9 percent, that is, 900 basis points. Hendershott’s biggest measured impact is therefore 4 orders of magnitude smaller than the size of the Crash.

This analysis does not take into account the overall cumulative impact of the entry of an away-from-the market order, nor does it account for the fact that orders can affect prices, prices can affect orders, and orders can affect orders. To address these issues, Hendershott carried out a vector autoregression (VAR) analysis. He estimates the cumulative impact of an order at levels 4-7 of the book, accounting for direct and indirect impacts, through an examination of the impulse response function (IRF) generated by the estimated VAR.* He estimates that the entry of a limit order to sell 1000 contracts at levels 4-7 “has a price impact of roughly .3 basis points.”

Point 3 basis points. Three one-thousandths of one percent. Given a price of 1165, this is a price impact of .035, or about one-seventh of a tick.
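The tick arithmetic behind both of these comparisons is easy to check (a sketch assuming the E-mini’s 0.25 index point tick and the roughly 1165 price level on the day of the Crash):

```python
price = 1165.0   # approximate E-mini level on 6 May, 2010
tick = 0.25      # E-mini S&P 500 minimum price increment

for bp in (0.06, 0.30):               # Hendershott's two impact estimates
    impact = price * bp / 10_000      # basis points -> index points
    print(f"{bp} bp -> {impact:.3f} index points = {impact / tick:.3f} ticks")
# 0.06 bp -> 0.007 points (~1/35 of a tick); 0.30 bp -> 0.035 points (~1/7 of a tick)
```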

Note further that the DOJ, the CFTC, and Hendershott all state that Sarao see-sawed back and forth, turning the algorithm on and off, and that turning off the algorithm caused prices to rebound by approximately the same amount as turning it on caused prices to fall. So, as I conjectured originally, his activity-even based on the government’s theory and evidence-did not bias prices upwards or downwards systematically.

This is directly contrary to the consistent insinuation throughout the criminal and civil complaints that Sarao was driving down prices. For example, the criminal complaint states that during the period of time that Sarao was using the algorithm “the E-Mini price fell by 361 [price] basis points” (which corresponds to a negative return of about 31 basis points). This is two orders of magnitude bigger than the impact calculated based on Hendershott’s .3 return basis point estimate even assuming that the algorithm was working only one way during this interval.

Further, Sarao was buying and selling in about equal quantities. So based on the theory and evidence advanced by the government, Sarao was causing oscillations in the price of a magnitude of a fraction of a tick, even though the complaints repeatedly suggest his algorithm depressed prices. To the extent he made money, he was making it by trading large volumes and earning a small profit on each trade that he might have enhanced slightly by layering, not by having a big unidirectional impact on prices as the government alleges.

The small magnitudes are a big deal, given the way the complaints are written, in particular the insinuations that Sarao helped cause the Flash Crash. The magnitudes of market price movements dwarf the impacts that the CFTC’s own outside expert calculates. And the small magnitudes raise serious questions about the propriety of bringing such serious charges.

Hendershott repeatedly says his results are “statistically significant.” Maybe he should read Deirdre McCloskey’s evisceration of the Cult of Statistical Significance. It’s economic significance that matters, and his results are economically minuscule compared to the impact alleged. Hendershott has a huge sample size, which can make even trivial economic impacts statistically significant. But it is the economic significance that is relevant. On this, Hendershott is completely silent.

The CFTC complaint has a section labeled “Example of the Layering Algorithm Causing an Artificial Price.” I read with interest, looking for, you know, actual evidence and stuff. There was none. Zero. Zip. There is no analysis of the market price at all. None! This is of a piece with the other assertions of price artificiality, including most notably the claimed effect of the activity on the Flash Crash: a series of conclusory statements either backed by no evidence, or evidence (in the form of the Hendershott affidavit) that demonstrates how laughable the assertions are.

CFTC enforcement routinely whines about the burdens it faces proving artificiality, causation and intent in a manipulation case. Here they have taken on a huge burden and are running a serious risk of getting hammered in court. I’ve already addressed the artificiality issue, so consider causation for a moment. If CFTC dares to try to prove that Sarao caused-or even contributed to-the Crash, it will face huge obstacles. Yes, as Chris Clearfield and James Weatherall rightly point out, financial markets are emergent, highly interconnected and tightly coupled. This creates non-linearities: small changes in initial conditions can lead to huge changes in the state of the system. A butterfly flapping its wings in the Amazon can cause a hurricane in the Gulf of Mexico: but tell me, exactly, which of the billions of butterflies in the Amazon caused a particular storm? And note that it is the nature of these systems that changing the butterfly’s position slightly (or changing the position of other butterflies) can result in a completely different outcome (because such systems are highly sensitive to initial conditions). There were many actors in the markets on 6 May, 2010. Attributing the huge change in the system to the behavior of any one individual is clearly impossible. As a matter of theory, yes, it is possible that given the state of the system on 6 May, activity that Sarao undertook with no adverse consequences on myriad other days caused the market to crash on that particular day when it didn’t on other days: but it is metaphysically impossible to prove it. The very nature of emergent orders makes it impossible to reverse engineer the cause out of the effect.

A few additional points.

I continue to be deeply disturbed by the “sample days” concept employed in the complaints and in Hendershott’s analysis. This smacks of cherry picking. Even if one uses a sample, it should be a random one. And yeah, right, it just so happened that the Flash Crash day and the two preceding days turned up in a random sample. Pure chance! This further feeds suspicions of cherry picking, and opportunistic and sensationalist cherry picking at that.

Further, Hendershott (in paragraph 22 of his affidavit) asserts that there was a statistically significant price decline after Sarao turned on the algorithm, and a statistically significant price increase when he turned it off. But he presents no numbers, whereas he does report impacts of non-Sarao-specific activity elsewhere in the affidavit. This is highly suspicious. Is he too embarrassed to report the magnitude? This is a major omission, because it is the impact of Sarao’s activity, not offering away from the market generally, that is at issue here.

Relatedly, why not run a VAR (and the associated IRF) using Sarao’s orders as one of the variables? After all, this is the variable of interest: what we want to know is how Sarao’s orders affected prices. Hendershott is implicitly imposing a restriction, namely, that Sarao’s orders have the same impact as other orders at the same level of the book. But that is testable.
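Running that test is straightforward with standard tools. Here is a sketch of how it might look using statsmodels, with an entirely hypothetical data set (the real exercise would use signed order flow and book changes at the relevant levels, sampled at high frequency):

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical columns: midquote returns, net non-Sarao order flow at book
# levels 4-7, and Sarao's own net order flow at those same levels.
df = pd.read_csv("emini_panel.csv")[["ret", "other_orders", "sarao_orders"]]

results = VAR(df).fit(maxlags=20, ic="aic")
irf = results.irf(60)

# Cumulative response of returns to a shock in each order flow series. If the
# two responses differ, the implicit restriction that Sarao's orders had the
# same impact as anyone else's is rejected by the data.
irf.plot_cum_effects(impulse="sarao_orders", response="ret")
irf.plot_cum_effects(impulse="other_orders", response="ret")
```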

Moreover, Hendershott’s concluding paragraph (paragraph 23) is incredibly weak, and smacks of post hoc, ergo propter hoc reasoning. He insinuates that Sarao contributed to the Crash, but oddly distances himself from responsibility for the claim, throwing it on regulators instead: “The layering algorithm contributed to the overall Order Book imbalances and market conditions that the regulators say led to the liquidity deterioration prior to the Flash Crash.” Uhm, Terrence, you are the expert here: it is incumbent on you to demonstrate that connection, using rigorous empirical methods.

In sum, the criminal and civil complaints make a Matterhorn out of a molehill, and a small molehill at that. And don’t take my word for it: take the “[declaration] under penalty of perjury” of the CFTC’s expert. This is a matter of magnitudes, and magnitudes matter. The CFTC’s own expert estimates very small impacts, and impacts that oscillate up and down with the activation and de-activation of the algorithm.

Yes, Sarao’s conduct was dodgy, clearly, and there is a colorable case that he did engage in spoofing and layering. But the disparity between the impact of his conduct as estimated by the government’s own expert and the legal consequences that could arise from his prosecution is so huge as to be outrageous.

Particularly so since over the years CFTC has responded to acts that have caused huge price distortions, and inflicted losses in nine and ten figures, with all of the situational awareness of Helen Keller. It is as if the enforcers see the world through a fun house mirror that grotesquely magnifies some things, and microscopically shrinks others.

In proceeding as they have, DOJ and the CFTC have set off a feeding frenzy that could have huge regulatory and political impacts that affect the exchanges, the markets, and all market participants. CFTC’s new anti-manipulation authority permits it to sanction reckless conduct. If it were held to that standard, the Sarao prosecution would earn it a long stretch of hard time.

*Hendershott’s affidavit says that Exhibit 4 reports the IRF analysis, but it does not.



April 22, 2015

Spoofing: Scalping Steroids?

Filed under: Derivatives,Economics,Exchanges,HFT,Regulation — The Professor @ 5:35 pm

The complaint against Sarao contains some interesting details. In particular, it reports his profits and quantities traded for nine days.

First, quantities bought and sold are almost always equal. That is characteristic of a scalper.

Second, for six of the days, he earned an average of .63 ticks per round turn. That is about the profit you’d expect a scalper to realize: due to adverse selection, a market maker typically doesn’t earn the full quoted spread. On only one of these days is the average profit per round turn more than a tick, and then just barely.

Third, there is one day (4 August, 2011) where he earned a whopping 19.6 ticks per round trip ($4 million profit on 16,695 buy/sells). I find that hard to believe. (A back-of-envelope check appears after this list.)

Fourth, there are two days for which the government reports the profit but not the volume. One of these days is 6 May, 2010, the Flash Crash day. I find that omission highly suspicious, given that this is the most important day.

Fifth, I again find it odd, and potentially problematic for the government, that it charges him with fraud, manipulation, and spoofing on only 9 days when he allegedly used the layering strategy on about 250 days. How did the government establish that trading on some days was illegal, and on other days it wasn’t?
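The back-of-envelope check on the third point (assuming the standard E-mini tick value of $12.50, i.e., 0.25 index points times the $50 multiplier, and treating the 16,695 figure as round turns):

```python
profit = 4_000_000     # reported profit on 4 August, 2011
round_turns = 16_695   # reported buys/sells
tick_value = 12.50     # E-mini S&P 500: 0.25 index points x $50 multiplier

per_turn = profit / round_turns
print(per_turn, per_turn / tick_value)   # ~$240 per round turn, ~19 ticks
```

That is roughly thirty times the .63 tick average on the other six days, which is why the number strains credulity.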

The most logical explanation of all this is that Sarao was basically scalping-market making-and if he spoofed, he did so to enhance the profitability of this activity, either by scaring off competition at the inside market, or inducing a greater flow of market orders, or both.

One implication of this is that scalping does not tend to cause prices to move one direction or the other. It is passive, and balances buys and sells. This will present great difficulties in pursuing the manipulation charges, though not the spoofing charges and perhaps not the fraud charges.



Did Spoofing Cause the Flash Crash? Not So Fast!

Filed under: Derivatives,Economics,HFT,Regulation — The Professor @ 12:41 pm

The United States has filed criminal charges against Navinder Sarao, of London, for manipulation via “spoofing” (in the form of “layering”) and “flashing.” The most attention-grabbing aspect of the complaint is that Sarao engaged in this activity on 6 May, 2010-the day of the Flash Crash. Journalists have run wild with this allegation, concluding that he caused the Crash.

Sarao’s layering strategy involved placement of sell orders at various levels more than two ticks away from the best offer. At his request, “Trading Software Company #1” (I am dying to know who that would be) created an algorithm implemented in a spreadsheet that would cancel these orders if the inside market got close to these resting offers, and replace them with new orders multiple levels away from the new inside market. The algorithm would also cancel orders if the depth in the book at better prices fell below a certain level. Similarly, if the market moved away from his resting orders, those orders would be cancelled and reentered at the designated distances from the new inside market level.
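The logic the complaint describes is simple enough to sketch. What follows is a hypothetical rendering only: the offset levels, depth threshold, order size, and the order-book interface are my illustrative assumptions, not details from the complaint.

```python
OFFSET_LEVELS = (4, 5, 6, 7)   # levels away from the best offer (illustrative)
MIN_DEPTH = 500                # stand aside if depth at better prices is below this
TICK = 0.25

def repeg(book, resting):
    """Cancel-and-replace resting sell orders so they stay a fixed number of
    levels above the inside market; pull everything when the book thins out.
    `book` is a hypothetical order-book interface."""
    targets = [book.best_offer + k * TICK for k in OFFSET_LEVELS]
    thin = book.depth_better_than(targets[0]) < MIN_DEPTH
    if thin or [o.price for o in resting] != targets:
        for o in resting:
            book.cancel(o)
        if thin:
            return []   # execution risk is up: no resting orders for now
        return [book.submit_sell(price=p, qty=600) for p in targets]
    return resting
```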

The complaint is mystifying on the issue of how Sarao made money (allegedly $40 million between 2010 and 2014). To make money, you need to buy low, sell high (you read it here first!), which requires actual transactions. And although the complaint details how many contracts Sarao traded and how many trades (e.g., 10682 buys totaling 74380 lots and 8959 sells totaling 74380 lots on 5 May, 2010-big numbers), it doesn’t say how the trades were executed and what Sarao’s execution strategy was.

The complaint goes into great detail regarding the allegedly fraudulent orders that were never executed, but it is maddeningly vague on the trades that were. It says only:

[W]hile the dynamic layering technique exerted downward pressure on the market SARAO typically executed a series of trades to exploit his own manipulative activity by repeatedly selling futures only to buy them back at a slightly lower price. Conversely, when the market moved back upward as a result of SARAO’s ceasing the dynamic layering technique, SARAO typically did the opposite, that is he repeatedly bought contracts only to sell them at a slightly higher price.

But how were these buys and sells executed? Market orders? Limit orders? Since crossing the spread is expensive, I seriously doubt he used market orders: even if the strategy drove down both bids and offers, using aggressive orders would have forced Sarao to pay the spread, making it impossible to profit. What was the sequence? The complaint suggests that he sold (bought) after driving the price down (up). This seems weird: it would make more sense to do the reverse.

In previous cases, Moncada and Coscia (well-summarized here), the scheme allegedly worked by placing limit orders on both sides of the market in unbalanced quantities, and see-sawing back and forth. For instance, the schemers would allegedly place a small buy order at the prevailing bid, and then put big away-from-the-market orders on the offer side. Once the schemer’s bid was hit, the contra side orders would be cancelled, and he would then switch sides: entering a sell order at the inside market and large away-from-market buys. This strategy is best seen as a way of earning the spread. Presumably its intent is to increase the likelihood of execution of the at-the-market order by using the big contra orders to induce others with orders at the inside market to cancel or reprice. This allowed the alleged manipulators to earn the spread more often than they would have without using this “artifice.”
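Schematically, one cycle of that alleged pattern might look like the following (the sizes, levels, and order-book interface are illustrative assumptions, not details from those complaints):

```python
def seesaw_cycle(book, side="buy", small=5, large=400):
    """One cycle of the alleged see-saw: a small genuine order at the inside
    market on one side, large away-from-market orders on the other to induce
    contra-side quotes to cancel or reprice. `book` is hypothetical."""
    contra = "sell" if side == "buy" else "buy"
    genuine = book.submit(side, price=book.inside(side), qty=small)
    spoofs = [book.submit(contra, price=p, qty=large)
              for p in book.prices_away_from_market(contra, levels=(2, 3, 4))]
    book.wait_for_fill(genuine)   # earn the spread when the small order trades
    for s in spoofs:              # then pull the big orders...
        book.cancel(s)
    return contra                 # ...and run the next cycle on the other side
```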

But we don’t have that detail in Sarao. The complaint does describe the “flashing” strategy in terms similar to those in Moncada and Coscia (i.e., entering limit orders on both sides of the market), but it does not describe the execution strategy in the layering scheme, which the complaint calls “the most prominent manipulative technique he used.”

If, as I conjecture, he was using something like the strategy Moncada and Coscia were alleged to have employed, it is difficult to see how his activities would have caused prices to move systematically in one direction or the other, as the government alleges. Aggressive orders tend to move the market, and if my conjecture is correct, Sarao was using passive orders. Further, he was buying and selling in almost (and sometimes exactly) equal quantities. Trading involving lots of cancellations plus trades in equal quantities at the bid and offer shares similarities with classic market making strategies. This should not move price systematically one way or the other.

But with regard to both the Flash Crash and 4 May, 2010, the complaint insinuates that Sarao moved the price down:

As the graph displays, SARAO successfully modified nearly all of his orders to stay between levels 4 and 7 of the sell side of the order book. What is more, Exhibit A shows the overall decline in the market price of the E-Minis during this period.

But on 4 May, Sarao bought and sold the exact same number of contracts (65,015). How did that cause price to decline?

Attributing the Flash Crash to his activity is also highly problematic. It smacks of post hoc, ergo propter hoc reasoning. Or look at it this way. The complaint alleges that Sarao employed the layering strategy about 250 days, meaning that he caused 250 out of the last one flash crashes. I can see the defense strategy. When the government expert is on the stand, the defense will go through every day. “You claim Sarao used layering on this day, correct?” “Yes.” “There was no Flash Crash on that day, was there?” “No.” Repeating this 250 times will make the causal connection between his trading and the Flash Crash seem very problematic, at best. Yes, perhaps the market was unduly vulnerable to dislocation in response to layering on 6 May, 2010, and hence his strategy might have been the straw that broke the camel’s back, but that is a very, very, very hard case to make given the very complex conditions on that day.

There is also the issue of who this conduct harmed. Presumably HFTs were the target. But how did it harm them? If my conjecture about the strategy is correct, it increased the odds that Sarao earned the spread, and reduced the odds that HFTs earned the spread. Alternatively, it might have induced some people (HFTs, or others) to submit market orders that they wouldn’t have submitted otherwise. Further, HFT strategies are dynamic, and HFTs learn. One puzzle is why away-from-the-market orders would be considered informative, particularly if they are used frequently in a fraudulent way (i.e., they do not communicate any information). HFTs mine huge amounts of data to detect patterns. The complaint alleges Sarao engaged in a pronounced pattern of trading that HFTs certainly would have picked up, especially since allegations of layering have been around ever since the markets went electronic. This makes it likely that there was a natural self-correcting mechanism that would tend to undermine the profitability of any manipulative strategy.

There are also some interesting legal issues. The government charges Sarao under the pre-Dodd-Frank Section 7 (anti-manipulation) of the Commodity Exchange Act. Proving this manipulation claim requires proof of price artificiality, causation, and intent. The customized software might make the intent easy to prove in this case. But price artificiality and causation will be real challenges, particularly if Sarao’s strategy was similar to Moncada’s and Coscia’s. Proving causation in the Flash Crash will be particularly challenging, given the complex circumstances of that day, and the fact that the government has already laid the blame elsewhere, namely on the Waddell & Reed trades. Causation and artificiality arguments will also be difficult to make given that the government is charging him only for a handful of days on which he used the strategy. One suspects some cherry-picking. Then, of course, there is the issue of whether the statute is Constitutionally vague. Coscia recently lost on that issue, but Radley won on it in Houston. It’s an open question.

I am less familiar with Section 18 fraud claims, or the burden of proof regarding them. Even under my conjecture, it is plausible that HFTs were defrauded from earning the spread, or that some traders paid the spread on trades they wouldn’t have made. But if causation is an element here, there will be challenges. It will require showing how HFTs (or other limit order traders) responded to the spoofing. That won’t be easy, especially since HFTs are unlikely to want to reveal their algorithms.

The spoofing charge is based on the post-Frankendodd CEA, with its lower burden of proof (recklessness not intent, and no necessity of proving an artificial price). That will be easier for the government to make stick. That gives the government considerable leverage. But it is largely unexplored territory: this is almost a case of first impression, or at least it is proceeding in parallel with other cases based on this claim, and so there are no precedents.

There are other issues here, including most notably the role of CME and the CFTC. I will cover those in a future post. Suffice it to say that this will be a complex and challenging case going forward, and the government is going to have to do a lot more explaining before it is possible to understand exactly what Sarao did and the impact he had.



April 21, 2015

Gary Gensler Resurfaces as Hillary!’s CFO: Is He Our Next Treasury Secretary?

Filed under: HFT,Politics — The Professor @ 7:27 pm

At a couple of conferences recently, people asked me what Gary Gensler is up to. I said “I don’t know. It’s not like GiGi and I are buddies.” (True fact: he had me banned from the CFTC building.) Well, now we all know what he’s up to: Gensler has landed as the CFO of Hillary’s presidential campaign.

When Gensler was CFTC chair, I surmised he had ambitions to replace Timmy! as Secretary of the Treasury. But that went to a Rubinoid, Jack Lew. There was also talk of Gensler running for the Senate from Maryland, and Mikulski has announced her retirement, but more well-known Dem pols in the state are poised to run, so that’s not an option.

Taking the campaign CFO job probably does give Gensler an inside track on the coveted SecTreas job. If Hillary wins. If.

Yes, I know she is the odds on favorite. But she was shopping for Oval Office curtains in 2008, and we know how that turned out.

Hillary’s problem is, well, Hillary. A lot of people like the idea of Hillary. It’s the real person that is the problem.

This has been illustrated by her slow-motion-train-wreck of a campaign kickoff. There’s an old expression: if you can fake sincerity, you have it made. Hillary hasn’t quite mastered that yet. The launch and the comically contrived “spontaneous” road trip to Iowa were about as authentic as Velveeta. It was a remarkable act of will, because you can just tell how much Hillary hates to be with actual people. Further, she has operated in a bubble, protected by some Harry Potteresque charm that repels all serious questions from serious people.

Eventually, though, her personality will shine through. And that’s the problem. Playing word association, if you say “Hillary”, I say: shrill, angry, bitter, entitled, strident, rigid, ideological, dishonest, hyper-partisan, vengeful, arrogant, paranoid, and . . . I could go on. And on. And on. And she’s not that bright: whoever calls her “the smartest woman in the world” is a virulent misogynist, with an obviously low opinion of women. I, on the other hand, think so highly of women that I would prefer to select the next president by lot from America’s 150 million or so adult females, than by an election in which Hillary is the Democratic Party standard bearer. 150 million-to-one: I’ll take those odds over better than even any day.

She is also an awful politician. She has no political instincts whatsoever. You can see the gears grinding behind her phony grin, trying to figure out what would be the politically advantageous thing to say. Today’s persona is Class Warrior. She recently said the one percenters must be “toppled.” Actually, I could kinda go for that, because despite her past protestations of being as poor as a church mouse, she is definitely in that class now.

In other words, she’s no Bill, who was, if nothing else, a natural politician with a magnetism and suppleness that could overcome his other deficiencies.

Which brings up another issue: the psychodrama between Hillary and Bill. You would think that Bill is a major asset, but I wonder. She wants to win on her own, and has put up with decades of humiliation from him to advance her ambitions: will she put herself in a position where she has to accept his help to win? Nor are Bill’s incentives unmixed. Will he want to play second fiddle as the first First Husband? Hillary’s campaign in 2008 was a soap opera: will 2016 be any different?

Then there’s the old baggage, which Hillary has more of than the lost and found at JFK. (I contributed, in a modest way, to that collection, many years ago, as detailed in the Senate Whitewater Report and the Congressional Record.) It is quite a remarkable record, stretching into the distant past, when she was fired from the Watergate Committee staff, to Arkansas skullduggery, to various White House scandals, to her service as Secretary of State (Benghazi, preventing naming Boko Haram as a terrorist organization, the Reset), to the very present (the stench of cronyism and influence peddling at the Clinton Foundation, and the Immaculate Abortion of her private email server).

Further, she’s not getting any younger, and it shows.

So she has many liabilities. What about the assets? They are formidable, particularly a national media that may not like her, but hates Republicans more. They can be counted on to avoid criticizing her, to form a defensive phalanx around her, and to attack her Republican adversary relentlessly. That didn’t help her in the primaries in 2008, when the fickle press found someone even more attractive. But there is no Barack Obama on offer in 2015-2016.

She also has a relentless fundraising machine, a reliable and experienced party and campaign apparatus, union support, and a solid base who would vote for Godzilla over a Republican.

Thus, she has great institutional advantages that will go far in overcoming her severe personal deficiencies.

But her biggest asset is that you can’t beat somebody with nobody, and right now the Republicans are offering up national nobodies. Maybe a somebody will emerge, but I wouldn’t count on it.

All meaning that although Hillary is a flawed person, and a flawed candidate, she has many advantages. So, as much as it pains me to say so, GiGi’s wish may come true. And as bad as a Gensler Treasury would be, it pains me even more to say that it likely would be one of the best parts of a Hillary Clinton Administration.


March 1, 2015

The Clayton Rule on Speed

Filed under: Commodities,Derivatives,Economics,Exchanges,HFT,Politics,Regulation — The Professor @ 1:12 pm

I have written often of the Clayton Rule of Manipulation, named after a cotton broker who, in testimony before Congress, uttered these wise words:

“The word ‘manipulation’ . . . in its use is so broad as to include any operation of the cotton market that does not suit the gentleman who is speaking at the moment.”

High Frequency Trading has created the possibility of the promiscuous application of the Clayton Rule, because there are a lot of things about HFT that do not suit a lot of gentlemen at this moment, and a lot of ladies for that matter. The CFTC’s Frankendodd-based Disruptive Practices Rule, plus the fraud-based manipulation Rule 180.1 (also a product of Dodd-Frank), provide the agency’s enforcement staff with the tools to pursue pretty much anything that does not suit them at any particular moment.

At present, the thing that least suits government enforcers-including not just CFTC but the Department of Justice as well-is spoofing. As I discussed late last year, the DOJ has filed criminal charges in a spoofing case.

Here’s my description of spoofing:

What is spoofing? It’s the futures market equivalent of Lucy and the football. A trader submits buy (sell) orders above (below) the inside market in the hope that this convinces other market participants that there is strong demand (supply) for (of) the futures contract. If others are so fooled, they will raise their bids (lower their offers). Right before they do this, the spoofer pulls his orders just like Lucy pulls the football away from Charlie Brown, and then hits (lifts) the higher (lower) bids (offers). If the pre-spoof prices are “right”, the post-spoof bids (offers) are too high (too low), which means the spoofer sells high and buys low.

Order cancellation is a crucial component of the spoofing strategy, and this has created widespread suspicion about the legitimacy of order cancellation generally. Whatever you think about spoofing, if futures market rule enforcers (exchanges, the CFTC, or the dreaded DOJ) begin to believe that traders who cancel orders at a high rate are doing something nefarious, and begin applying the Clayton Rule to such traders, the potential for mischief-and far worse-is great.

Many legitimate strategies involve high rates of order cancellation. In particular, market making strategies, including market making strategies pursued by HFT firms, typically involve high cancellation rates, especially in markets with small ticks, narrow spreads, and high volatility. Market makers can quote tighter spreads if they can adjust their quotes rapidly in response to new information. High volatility essentially means a high rate of information flow, and a need to adjust quotes frequently. Moreover, HFT traders can condition their quotes in a given market based on information (e.g., trades or quote changes) in other markets. Thus, to be able to quote tight markets in these conditions, market makers need to be able to adjust quotes frequently, and this in turn requires frequent order cancellations.
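A stylized market maker shows why cancellation counts rise mechanically with the rate of information arrival (a sketch; the fair-value signal, half-spread, and quoting interface are my assumptions):

```python
def on_update(book, quotes, fair_value, half_spread, tick=0.25):
    """Re-center two-sided quotes each time the fair-value estimate moves.
    Every update that shifts the rounded bid or ask forces two cancellations,
    so higher volatility (more updates) means more cancellations, with no
    manipulative intent anywhere. `book` is a hypothetical interface."""
    bid = round((fair_value - half_spread) / tick) * tick
    ask = round((fair_value + half_spread) / tick) * tick
    if (quotes.bid_price, quotes.ask_price) != (bid, ask):
        book.cancel(quotes.bid)
        book.cancel(quotes.ask)
        quotes = book.quote(bid_price=bid, ask_price=ask, qty=10)
    return quotes
```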

Order cancellation is also a means of protecting market making HFTs from being picked off by traders with better information. HFTs attempt to identify when order flow becomes “toxic” (i.e., is characterized by a large proportion of better-informed traders) and rationally cancel orders when this occurs. This reduces the cost of making markets.

This creates a considerable tension if order cancellation rates are used as a metric to detect potential manipulative conduct. Tweaking strategies to reduce cancellation rates, in order to lower the probability of getting caught in an enforcement dragnet, increases the frequency with which a trader is picked off and so raises trading costs: the rational response is to quote less aggressively, which reduces market liquidity. But not doing so raises the risk of a torturous investigation, or worse.

What’s more, the complexity of HFT strategies will make ex post forensic analyses of traders’ activities fraught with potential error. There is likely to be a high rate of false positives-the identification of legitimate strategies as manipulative. This is particularly true for firms that trade intensively in multiple markets. With some frequency, such firms will quote one side of the market, cancel, and then take liquidity from the other side of the market (the pattern that is symptomatic of spoofing). They will do that because that can be the rational response to some patterns of information arrival. But try explaining that to a suspicious regulator.

The problem here inheres in large part in the inductive nature of legal reasoning, which generalizes from specific cases and relies heavily on analogy. With such reasoning there is always a danger that a necessary condition (“all spoofing strategies involve high rates of order cancellation”) morphs into a sufficient condition (“high rates of order cancellation indicate manipulation”). This danger is particularly acute in complex environments in which subtle differences in strategies that are difficult for laymen to grasp (and may even be difficult for the strategist or experts to explain) can lead to very different conclusions about their legitimacy.

The potential for a regulatory dragnet directed against spoofing catching legitimate strategies by mistake is probably the greatest near-term concern that traders should have, because such a dragnet is underway. But the widespread misunderstanding and suspicion of HFT more generally means that over the medium to long term, the scope of the Clayton Rule may expand dramatically.

This is particularly worrisome given that suspected offenders are at risk of criminal charges. This dramatic escalation in the stakes raises compliance costs because every inquiry, even from an exchange, demands a fully-lawyered response. Moreover, it will make firms avoid some perfectly rational strategies that reduce the costs of making markets, thereby reducing liquidity and inflating trading costs for everyone.

The vagueness of the statute and the regulations that derive from it pose a huge risk to HFT firms. The only saving grace is that this vagueness may result in the law being declared unconstitutional and preventing it from being used in criminal prosecutions.

Although written in a non-official capacity, an article by CFTC attorney Gregory Scopino illustrates how expansive regulators may become in their criminalization of HFT strategies. In a Connecticut Law Review article, Scopino questions the legality of “high-speed ‘pinging’ and ‘front running’” in futures markets. It’s frightening to watch him stretch the concepts of fraud and “deceptive contrivance or device” to cover a variety of defensible practices which he seems not to understand.

In particular, he is very exercised by “pinging”, that is, the submission of small orders in an attempt to detect large orders. As remarkable as it might sound, his understanding of this seems to be even more limited than Michael Lewis’s: see Peter Kovac’s demolition of Lewis in his Not so Fast.

When there is hidden liquidity (due to non-displayed orders or iceberg orders), it makes perfect sense for traders to attempt to learn about market depth. This can be valuable information for liquidity providers, who learn about competitive conditions in the market and can better gauge the potential profitability of supplying liquidity. It can also be valuable to informed strategic traders, whose optimal trading strategy depends on market depth (as Pete Kyle showed more than 30 years ago): see a nice paper by Clark-Joseph on such “exploratory trading”, which sadly has been misrepresented by many (including Lewis and Scopino) to mean that HFT firms front run, a conclusion that Clark-Joseph explicitly denies. To call either of these strategies front running, or to deem them deceptive or fraudulent, is disturbing, to say the least.
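
For the uninitiated, here is a stylized sketch of what pinging amounts to (the toy “venue” and all the numbers are invented for illustration): fire small immediate-or-cancel orders at a price level and sum the fills to estimate how much hidden size is resting there.

```python
def probe_hidden_depth(venue_fill, ping_size=100, max_pings=20):
    """Estimate hidden size resting at a price level by firing small
    immediate-or-cancel (IOC) orders and summing the fills.

    `venue_fill(qty)` stands in for IOC order submission and returns the
    quantity actually filled -- a toy stand-in, not any real venue API."""
    total = 0
    for _ in range(max_pings):
        fill = venue_fill(ping_size)
        if fill == 0:
            break  # no fill: the level is exhausted (or nothing was hidden)
        total += fill
    return total

# Toy venue: an iceberg order conceals 1,150 shares at the level.
hidden = 1150
def venue_fill(qty):
    global hidden
    fill = min(qty, hidden)
    hidden -= fill
    return fill

print(probe_hidden_depth(venue_fill))  # 1150
```

Note that each ping pays the spread, so the information is costly, which is precisely why it is better understood as ordinary, resource-consuming price discovery than as some deceptive contrivance.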

Scopino and other critics of HFT also criticize the alleged practice of order anticipation, whereby a trader infers the existence of a large order being executed in pieces as soon as the first pieces trade. I say alleged because, as Kovac points out, the noisiness of order flow sharply limits the ability to detect a large latent order on the basis of a few trades.
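
Kovac’s point can be put in simple Bayesian terms. In this back-of-the-envelope sketch (the prior and the buy probabilities are invented for illustration), even a run of buyer-initiated prints that looks suggestive moves the posterior probability of a large latent buyer only modestly:

```python
from math import comb

def posterior_large_buyer(n_trades, n_buys, prior=0.05,
                          p_buy_normal=0.50, p_buy_parent=0.60):
    """Posterior probability that a large latent buy order is working,
    given n_buys buyer-initiated prints out of n_trades. The prior and
    the buy probabilities are illustrative assumptions."""
    like_parent = (comb(n_trades, n_buys) * p_buy_parent**n_buys
                   * (1 - p_buy_parent)**(n_trades - n_buys))
    like_normal = (comb(n_trades, n_buys) * p_buy_normal**n_buys
                   * (1 - p_buy_normal)**(n_trades - n_buys))
    return (prior * like_parent
            / (prior * like_parent + (1 - prior) * like_normal))

print(round(posterior_large_buyer(10, 8), 3))  # 0.127
```

Eight buys out of ten prints lifts a 5 percent prior to only about 13 percent, hardly the dead-certain detection that the front running narrative presumes.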

What’s more, as I wrote in some posts on HFT just about a year ago, and in a piece in the Journal of Applied Corporate Finance, it’s by no means clear that order anticipation is inefficient, because informed trading itself is a mixed blessing. Informed trading reduces liquidity, which makes it particularly perverse that Scopino wants to treat order anticipation as a form of insider trading (i.e., trading on non-public information). Talk about getting things totally backwards: this would criminalize a type of trading that actually impedes liquidity-reducing informed trading. Maybe there’s a planet on which that makes sense, but its sky ain’t blue.

Fortunately, these are now just gleams in an ambitious attorney’s eye. But from such gleams often come regulatory progeny. Indeed, since there is a strong and vocal constituency to impede HFT, the political economy of regulation tends to favor such an outcome. Regulators gonna regulate, especially when importuned by interested parties. Look no further than the net neutrality debacle.

In sum, the Clayton Rule has been around for the better part of a century, but I fear we ain’t seen nothing yet. HFT doesn’t sit well with a lot of people, often because of ignorance or self-interest, and as Mr. Clayton observed so long ago, it’s a short step from that to an accusation of manipulation. Regulators armed with broad, vague, and elastic authority (and things don’t get much broader, vaguer, or more elastic than “deceptive contrivance or device”) pose a great danger of running amok and impairing market performance in the name of improving it.


January 25, 2015

From Birth to Adulthood in a Few Short Years: HFT’s Predictable Convergence to Competitive Normalcy

Filed under: Commodities,Derivatives,Economics,Exchanges,HFT — The Professor @ 2:05 pm

Once upon a time, high frequency trading (HFT) was viewed as a juggernaut, a money-making machine that would hold Wall Street and LaSalle Street in its thrall. These dire predictions were based on the remarkable growth of HFT in 2009 and 2010 in particular, but the narrative outlived the heady growth.

In fact, HFT has followed the trajectory of any technological innovation in a highly competitive environment. At its inception, it was a dramatically innovative way of performing the longstanding functions of intermediaries in financial markets: market making and arbitrage. It performed these functions much more efficiently than incumbents did, and rapidly displaced the old-style intermediaries. During this transitional period, the first movers earned supernormal profits because of their cost and speed advantages over the old-school intermediaries. HFT market share expanded dramatically, and the profits attracted expansion in the capital and capacity of the first movers, as well as the entry of new firms. And as day follows night, this entry of new HFT capacity and the intensification of competition dissipated those profits. This is basic economics in action.

According to the Tabb Group, HFT profits declined from $7 billion in 2009 to only $1.3 billion today. Moreover, HFT market share has declined from its peaks: from 61 percent of equities in 2009 to 48.4 percent today, and from 64 percent of futures in 2011 to 60 percent today. The profit decline and the topping out of market share are both symptomatic of a sector settling into a steady state of normal competitive profits, with growth commensurate with the increase in the size of the overall market, in the aftermath of a technological shock. Fittingly, this convergence in the HFT sector has been notable for its rapidity, with the transition from birth to adulthood occurring within a mere handful of years.

A little perspective is in order, too. Equity market volume in the US is on the order of $100 billion per day. HFT profits now represent on the order of 1/250th of one percent of equity turnover. And since HFT profits include profits from derivatives, their profits as a share of turnover across everything they trade are smaller still. In other words, although HFT firms trade a lot, their margins are razor thin. This is another sign of a highly competitive market.
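
The arithmetic is worth spelling out, using round numbers and annualizing the $100 billion daily figure over roughly 250 trading days:

```python
daily_volume = 100e9                           # US equity turnover, ~$100 billion/day
annual_turnover = daily_volume * 250           # ~$25 trillion over ~250 trading days

hft_profits = 1.3e9                            # Tabb Group estimate, annual
print(f"{hft_profits / annual_turnover:.6%}")  # 0.005200% of turnover
```

Roughly five thousandths of one percent of turnover, the same order of magnitude as the 1/250th of one percent figure: razor thin indeed.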

We are now witnessing further evidence of the maturation of HFT. There is a pronounced trend toward consolidation, with HFT pioneer Allston Trading exiting the market and DRW purchasing Chopper Trading. Such consolidation is a normal phase in the evolution of a sector that has experienced a technological shock. Expect more departures and acquisitions as the industry (again predictably) turns its focus to cost containment, now that competition has left the days of easy money fading in the rearview mirror.

It’s interesting in this context to think about Schumpeter’s argument in Capitalism, Socialism, and Democracy. One motivation for the book was to examine whether there was, as Marx and the earlier classical economists predicted, a tendency for profit to diminish to zero (where costs of capital are included in determining economic profit). That may be true in a totally static setting, but as Schumpeter noted, the development of new, disruptive technologies overturns this result. The process of creative destruction can introduce a sequence of new technologies or products that displace the old, earn large profits for a while, and are then either displaced by newer disruptive technologies or see their profits vanish due to classical/neoclassical competitive forces.

Whether through the entry of a new destructively creative technology, or through the inexorable forces of entry and expansion in a technologically static setting, one expects the profits earned by firms in one wave of creative destruction to decline. That’s what we’re seeing in HFT. It was definitely a disruptive technology that reaped substantial profits at the time of its introduction, but those profits are eroding.

That shouldn’t be a surprise.  But it no doubt is to many of those who have made apocalyptic predictions about the machines taking over the earth.  Or the markets, anyways.

Or, as Herb Stein famously said as a caution against extrapolating from current trends, “If something cannot go on forever, it will stop.” Those making dire predictions about HFT were largely extrapolating from the events of 2008-2010, and ignored the natural economic forces that constrain growth and dissipate profits. HFT is now a normal, competitive business earning normal, competitive profits.  And hopefully this reality will eventually sink in, and the hysteria surrounding HFT will fade away just as its profits did.


July 21, 2014

Doing Due Diligence in the Dark

Filed under: Exchanges,HFT,Regulation — The Professor @ 8:39 pm

Scott Patterson, WSJ reporter and the author of Dark Pools, has a piece in today’s Journal about the Barclays LX story. He finds, lo and behold, that several users of the pool had determined that they were getting poor executions:

Trading firms and employees raised concerns about high-speed traders at Barclays PLC’s dark pool months before the New York attorney general alleged in June that the firm lied to clients about the extent of predatory trading activity on the electronic trading venue, according to people familiar with the firms.

Some big trading outfits noticed their orders weren’t getting the best treatment on the dark pool, said people familiar with the trading. The firms began to grow concerned that the poor results resulted from high-frequency trading, the people said.

In response, at least two firms—RBC Capital Markets and T. Rowe Price Group Inc.—boosted the minimum number of shares they would trade on the dark pool, letting them dodge high-speed traders, who often trade in small chunks of 100 or 200 shares, the people said.

This relates directly to a point that I made in my post on the Barclays story. Trading is an experience good. Dark pool customers can evaluate the quality of their executions. If a pool is not screening out opportunistic traders, execution costs there will be high relative to venues that do a better job of screening, and users who monitor their execution costs will detect this. Regardless of what a dark pool operator says it is doing, the proof of the pudding is in the trading, as it were.

The Patterson article shows that at least some buy side firms do the necessary analysis, and can detect a pool that does not exclude toxic flows.

This long FT piece relies extensively on quotes from Hirander Misra, one of the founders of Chi-X, to argue that many fund managers have been ignorant of the quality of the executions they get on dark pools. The article talked to two anonymous fund managers who say they don’t know how dark pools work.

The implication is that regulation is needed to protect the buy side from unscrupulous pool operators.

A couple of comments. First, not knowing how a pool works doesn’t really matter. Measures of execution quality are what matter, and these can be computed. I don’t know all the technical details of how my car or the computer I am typing on operates, but I can evaluate their performance, and that’s what matters.
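
For concreteness, here is a minimal sketch of the sort of self-help measurement a buy side desk can run (the fill format and the arrival-midpoint benchmark are my own assumptions; real transaction cost analysis is far more elaborate): average the slippage of fills against the midquote at order arrival, venue by venue.

```python
from collections import defaultdict

def avg_slippage_bps(fills):
    """Average execution shortfall per venue, in basis points, measured
    against the midquote at order arrival. The fill format
    (venue, side, fill_price, arrival_mid) is an assumed convention."""
    by_venue = defaultdict(list)
    for venue, side, price, mid in fills:
        sign = 1 if side == "buy" else -1
        by_venue[venue].append(sign * (price - mid) / mid * 1e4)
    return {v: sum(costs) / len(costs) for v, costs in by_venue.items()}

fills = [
    ("PoolA", "buy",  10.012, 10.000),  # paid 12 bps vs. arrival mid
    ("PoolA", "sell",  9.990, 10.000),  # paid 10 bps
    ("PoolB", "buy",  10.003, 10.000),  # paid 3 bps
]
print(avg_slippage_bps(fills))  # ~11 bps for PoolA, ~3 bps for PoolB
```

A pool that lets opportunistic traders feast on its clients will show up as persistently higher slippage, whatever its operator’s marketing materials say.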

Second, this is really a cost-benefit issue. Monitoring performance is costly, but so are regulation and litigation. Given that market participants have the biggest stake in measuring pool performance properly, and can develop more sophisticated metrics than regulators, there are strong arguments for relying on monitoring. Regulators can, perhaps, verify whether a dark pool does what it advertises, but this is often beside the point, because what a pool advertises does not necessarily correspond closely to its execution costs, which are what really matter.

Interestingly, one of the things that got a major dark pool (Liquidnet) in trouble was that it shared information about the identities of existing clients with prospective clients. This presents interesting issues. Sharing such information could economize on monitoring costs. If a big firm (like a T. Rowe) trades in a pool, this can signal to other potential users that the pool does a good job of screening out the opportunistic, allowing them to free ride on the monitoring efforts of the big firm.

Another illustration of how things are never simple and straightforward when analyzing market structure.

One last point. Some of the commentary I’ve read recently uses the prevalence of HFT volume in a dark pool as a proxy for how much opportunistic trading goes on in the pool. This is a very dangerous shortcut because, as I (and others) have written repeatedly, there are many different kinds of HFT. Some add to liquidity, some consume it, and some may be outright toxic or predatory. Market-making HFT can enhance dark pool liquidity, which is probably why dark pools encourage HFT participation. Indeed, it is hard to see how a pool could benefit from encouraging the participation of predatory HFT, especially if it lets such firms trade for free: doing so drives away the paying customers, particularly when those customers evaluate the quality of their executions.

Evaluating execution quality and cost could be considered a form of institutional trader due diligence. Firms that do so can protect themselves, and their investor-clients, from opportunistic counterparties. Even though the executions are done in the dark, it is possible to shine a light on the results. The WSJ piece shows that many firms do just that. Whether additional regulation is needed boils down to whether the cost and efficacy of these self-help efforts are superior to those of regulation.

