Ecclesiastes on HFT
The Bank of England’s Andrew Haldane gave a speech on high frequency trading that highlighted its risks. He cites evidence that various measures of persistence in stock prices have risen since 2005, and attributes this to the advent of HFT. Maybe. But many other things have changed in the same time period, so giving privileged status to HFT as the cause is quite a stretch. Short of a natural or controlled experiment (e.g., randomly assigning stocks that can and cannot be traded using HFT), attributing broad changes in pricing behavior to any single factor is highly conjectural.
His argument rests on the assertion that market making is more fickle during times of stress due to HFT. I would say that market making is more fickle during times of stress, full stop. Liquidity has always dried up during crashes and crises, no matter what the technology. The floors in Chicago were depopulated during the ’87 Crash. Many locals–liquidity suppliers–who didn’t leave voluntarily were pulled off the floors by their clearing firms. And many of those who remained stood with their hands concealing their badges, lest somebody try to win the outtrade lottery by claiming to have made a profitable trade with them. NASDAQ market makers didn’t answer their phones during the Crash, and many specialists evaded their affirmative obligations to make markets (so much for one of Haldane’s suggested fixes). No HFT, but no liquidity and a crash all the same.
In the immediate aftermath of the Flash Crash, I wrote about a lot of the proposals that Haldane moots. Market making obligations are hard to enforce, as Haldane notes. The argument for them, as I laid out last year, is that they could affect the time pattern of liquidity supply. Such constraints necessarily reduce the return on capital, which will lead to an exit of market making capacity. This will raise costs in tranquil times, in the hope that the obligations imposed on market makers will lead a larger proportion of a smaller population of market makers to remain active during turbulent times, thereby improving liquidity during those times. Given the strong incentives to evade the obligations during turbulent times, and the difficulty of specifying and enforcing what those obligations are, given the high dimensionality of the services that market makers perform (in more technical economic jargon, market making behavior is not contractible), this is quite a gamble. There is no guarantee that the greater participation rate will more than offset the loss in capacity and lead to a more desirable allocation of liquidity supply between stable and turbulent times.
Haldane also plumped for minimum order resting times. Despite David of Deus Ex Machiatto’s best efforts, I remain wholly unconvinced of the wisdom of this; the same goes for Eric Hunsader of Nanex’s very nice and thoughtful comment. David presents a calculation of the option value, and hence the cost, of mandating a one-second quote-in-force rule. The calibration technique is a natural one, but I am skeptical of its applicability. The price of the one-month option David uses to calibrate reflects private and public information flows over a period of days, which is fundamentally different from the market maker’s pricing problem, which is driven by private information flows over very short time scales. The whole point of market microstructure theory is that price behavior is fundamentally different at small time scales, so in my view it is unlikely that a one-month option can be scaled down to value an option granted over a far shorter horizon.
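To make the calibration critique concrete, here is a minimal sketch of the kind of calculation at issue: value the free option granted by a mandatory one-second quote using Black-Scholes, with the volatility simply scaled down from a longer-dated option by the square root of time. The parameters are invented for illustration, and this is my gloss on the exercise rather than David's actual numbers; the point in the text is precisely that this sort of scaling may not hold at the time scales on which market makers operate.

```python
# Illustrative sketch only: value the "free option" embedded in a mandatory
# one-second quote via Black-Scholes, with naive square-root-of-time scaling
# of an annualized volatility implied by a one-month option. All parameters
# are assumptions, not anyone's actual calibration.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, sigma, T, r=0.0):
    """Black-Scholes call value with maturity T in years."""
    if T <= 0 or sigma <= 0:
        return max(S - K, 0.0)
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S = 100.0                                   # stock price (assumed)
sigma_annual = 0.30                         # annualized vol from a one-month option (assumed)
T_one_second = 1.0 / (252 * 6.5 * 3600)     # one second of a 6.5-hour trading day, in years

# At-the-money quote held open for one second: the value of the option granted
v = bs_call(S, S, sigma_annual, T_one_second)
print(f"Option value: {v:.6f} ({1e4 * v / S:.3f} bp of price)")
```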
But quibbles about calibration and scaling aside, this raises a huge question: if it is so cheap to grant these options under all market conditions, why doesn’t competition ensure that they are in fact supplied at their allegedly negligible cost (fractions of a basis point, per DEM)? After all, competition–as a graph in Haldane’s presentation shows–has driven spreads down to very low levels in normal times. The wholesale exit of market making capacity during turbulent times–in many cases, the refusal to trade at any price–indicates that it is very costly, and for some prohibitively costly, to grant options under these circumstances. It is a cost problem, and this cost problem is endemic to financial trading, regardless of the technology.
It has long been known that situations of extreme adverse selection–a problem completely abstracted from in the option calibration problem, by the way–can lead to a breakdown of trading, or at the very least, a sharp reduction in trading activity. When lemons risk becomes too great, markets freeze up or shut down.
An old paper by Glosten in the Journal of Business suggests that this is a rationale for having a designated market maker or specialist: in his model, a monopoly specialist market is less vulnerable to breakdown than a competitive one. But that’s a different solution than mandating quoting by competitive market participants. In the Glosten model, competitive markets are subject to breakdown under conditions of extreme information asymmetry. That’s true regardless of the technology.
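To see the mechanism, here is a back-of-the-envelope Glosten-Milgrom-style calculation, the competitive benchmark rather than Glosten's monopolist specialist result. As the probability of facing an informed trader rises, the zero-profit spread widens; once it exceeds what uninformed traders will pay, they withdraw and the market shuts down. The numbers and the breakdown threshold below are purely illustrative.

```python
# Competitive zero-profit quotes when a fraction alpha of order flow is informed.
# The asset is worth V_H or V_L with equal probability; the ask is E[V | buy],
# the bid is E[V | sell]. The uninformed reservation half-spread is an assumption.
V_H, V_L = 101.0, 99.0
UNINFORMED_RESERVATION = 0.6   # max half-spread uninformed traders will pay (assumed)

for alpha in (0.1, 0.3, 0.5, 0.8):
    ask = V_L + (V_H - V_L) * 0.5 * (1 + alpha)    # E[V | buy]
    bid = V_L + (V_H - V_L) * 0.5 * (1 - alpha)    # E[V | sell]
    half_spread = (ask - bid) / 2
    status = "market breaks down" if half_spread > UNINFORMED_RESERVATION else "trading continues"
    print(f"alpha={alpha:.1f}  bid={bid:.2f}  ask={ask:.2f}  half-spread={half_spread:.2f}  {status}")
```

When adverse selection is severe enough, no quote both compensates the market maker and attracts uninformed traders, and trading stops. That is the lemons breakdown, with or without computers.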
Glosten’s paper, I might add, was published in 1989, before HFT was a gleam in a geek’s eye. There is indeed little new under the sun, and the focus on technological novelty obscures the enduring nature of the fundamental economic problems of liquidity supply.
Indeed, I believe it to be almost certain that imposing a restriction on quoting times will not improve liquidity, and will usually make it worse. Imposing a constraint increases costs, and increasing costs invariably reduces supply. A quoting time rule says: “If you quote, your quote must be in force for x seconds.” The “if” is the key thing here: a market maker can avoid the constraint by not quoting at all, and if the cost of extending the x-second option becomes too large due to severe lemons problems, that’s exactly what he will do–or will program his machine to do. Not exactly the best way to induce liquidity supply.
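Schematically, the market maker’s (or his machine’s) participation decision under a quote-life rule looks something like the sketch below. The cost model and the numbers are invented for illustration, but the logic is the point: when the adverse selection cost of the x-second option exceeds the expected revenue from the spread, the answer is simply not to quote.

```python
# Stylized participation decision under a minimum quote-life rule. The
# picking-off cost model (proportional to volatility, sqrt of quote life,
# and an adverse-selection intensity) and all numbers are illustrative.
from math import sqrt

def will_quote(half_spread, sigma_per_sec, x_seconds, toxicity, fill_prob=0.5):
    """Quote only if expected revenue covers the cost of granting the x-second option."""
    expected_revenue = fill_prob * half_spread
    picking_off_cost = toxicity * sigma_per_sec * sqrt(x_seconds)
    return expected_revenue >= picking_off_cost

# Calm market: low toxicity and volatility, so the market maker quotes.
print(will_quote(half_spread=0.005, sigma_per_sec=0.002, x_seconds=1.0, toxicity=0.5))  # True
# Stressed market: severe lemons problem, so the machine stops quoting entirely.
print(will_quote(half_spread=0.005, sigma_per_sec=0.02, x_seconds=1.0, toxicity=0.9))   # False
```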
So a quoting-time rule alone is extremely counterproductive. Hence Haldane’s musing about market making obligations–but those are noncontractible, hence not effectively enforceable, and hence not much use either.
In other words, if you want to encourage liquidity supply in the worst way, imposing constraints is that way!
Going back to basics, if you think that there is a market failure (e.g., a periodically occurring severe lemons problem), a natural way to address that failure is to subsidize what is being undersupplied, not to tax it. Just how to subsidize is not obvious, but one thought would be to widen tick sizes.
Another more productive approach would be to constrain the kinds of trading strategies that can be destabilizing, most notably stop orders. The CME’s stop order logic clearly helped put the brakes on the Emini Flash Crash. Extending that logic to other markets would be salutary.
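For readers unfamiliar with it, the general idea behind the CME’s stop logic is roughly as sketched below: before a cascade of triggered stop orders is allowed to execute, check how far it would sweep the book, and if it would trade beyond a protection band, reserve the market briefly instead. The band width, pause length, and other details here are illustrative, not the exchange’s actual parameters.

```python
# Schematic "stop logic": simulate how far a burst of triggered sell stops
# would sweep the resting bids; pause the market if that exceeds a band.
# Band and pause values are invented for illustration.

def apply_stop_logic(last_price, resting_bids, triggered_sell_stops, band=0.005, pause_secs=5):
    """resting_bids: list of (price, size), best bid first.
    triggered_sell_stops: total size of sell stop orders that just triggered."""
    remaining = triggered_sell_stops
    fill_price = last_price
    for price, size in resting_bids:
        if remaining <= 0:
            break
        fill_price = price
        remaining -= size
    # Sweeping beyond the band (or through the whole book) reserves the market.
    if last_price - fill_price > band * last_price or remaining > 0:
        return ("PAUSE", pause_secs)
    return ("EXECUTE", fill_price)

book = [(1099.75, 50), (1099.50, 40), (1090.00, 30)]
print(apply_stop_logic(last_price=1100.00, resting_bids=book, triggered_sell_stops=100))
```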
This is an area where fragmentation, and the associated difficulty of coordinating across linked platforms, creates problems not present in a single CLOB, as David emphasized. But that’s where regulators could play a constructive coordinating role. Haldane and other regulators have discussed coordinated circuit breakers–a coordinated stop order logic would be an excellent place to start.
Free associating somewhat (for that’s what blogs are for!), I can envision a way to make such limits forward looking, rather than backward looking. The Easley-O’Hara-Prado order flow toxicity measure could help identify times in which market breakdowns are more likely, and this could be used to trigger constraints on stop orders. All circuit breakers under consideration are conditional on price movements, which often involves shutting the barn door after the horse has bolted. Something forward looking based on real time monitoring of quoting behavior and the adverse selection risk of order flow would be preferable. This would put computing technologies to work reducing the risk of flash crashes.
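Roughly, a VPIN-style toxicity measure can be computed along the following lines. This is a simplified sketch: it classifies trades with a naive tick rule rather than the bulk volume classification that Easley, López de Prado, and O’Hara actually use, and the trigger threshold is an assumption, not a calibrated value.

```python
# Rough sketch of a VPIN-style order-flow toxicity measure: group trades into
# equal-volume buckets and average the absolute buy/sell imbalance per bucket.
# Tick-rule classification and the threshold below are simplifications.

def vpin(trades, bucket_volume, n_buckets=50):
    """trades: list of (price, volume) in time order."""
    buckets, buy, sell, filled, last_price = [], 0.0, 0.0, 0.0, None
    for price, vol in trades:
        sign_buy = last_price is None or price >= last_price   # tick rule
        last_price = price
        while vol > 0:
            take = min(vol, bucket_volume - filled)
            if sign_buy:
                buy += take
            else:
                sell += take
            filled += take
            vol -= take
            if filled >= bucket_volume:
                buckets.append(abs(buy - sell) / bucket_volume)
                buy = sell = filled = 0.0
    recent = buckets[-n_buckets:]
    return sum(recent) / len(recent) if recent else None

trades = [(10.00, 300), (10.01, 200), (9.99, 500), (9.98, 400), (10.00, 600)]
print(vpin(trades, bucket_volume=500))

# A forward-looking trigger: constrain stop orders when toxicity gets high.
TOXICITY_TRIGGER = 0.8   # illustrative threshold, not a calibrated value
```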
I also think it is worthwhile to explore measures directed at parasitic types of trading. In particular, I remain convinced that pricing exchange capacity–e.g., charging for order submissions and cancellations, with the charges varying with system usage–is a promising avenue to explore.
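One simple way such capacity pricing could work, purely as an illustration: charge a per-message fee on order submissions and cancellations that rises convexly with the utilization of the matching engine. The fee schedule below is invented out of whole cloth; the point is only that messaging becomes expensive exactly when the system is congested.

```python
# Illustrative congestion-based messaging fee: cheap when capacity is slack,
# increasingly expensive as system utilization approaches its limit.

def message_fee(messages_per_sec, capacity_per_sec, base_fee=0.0001, steepness=3.0):
    """Fee per order submission or cancellation, rising convexly with utilization."""
    utilization = min(messages_per_sec / capacity_per_sec, 1.0)
    return base_fee * (1.0 + steepness * utilization ** 2)

print(message_fee(10_000, 1_000_000))   # quiet market: essentially the base fee
print(message_fee(900_000, 1_000_000))  # congested: materially higher per-message cost
```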
One final note. During his talk, Haldane mentioned the importance of “moving from analysing market microstructure to market macrostructure.” Hear! Hear! That’s what I’ve been on about for the last 15-plus years.
I claim credit for coining the term “macrostructure” in this context: maybe somebody used it first, but if so I wasn’t aware of it in ’98 or ’99 when I started describing my research using that term.* I have a paper in the JLEO from 2002 titled “Securities Market Macrostructure,” and the title of my next book will be “Financial Market Macrostructure.” I actually had to fight the editorial board at Cambridge UP to get them to sign on to that title; they wanted a more plain-spoken title and thought that “macrostructure” was not commonly enough used in economics to fit the bill. Glad I prevailed–and glad Mr. Haldane is popularizing the phrase.
As I use it, macrostructure modeling involves deriving the implications of microstructural forces for the organization of securities and derivatives markets–the number of exchanges; how trading is divided between exchanges and OTC markets; fragmentation of trading; how exchanges are organized and governed, etc. My perhaps heterodox views on fragmentation, dark markets, etc., are firmly rooted in this way of analyzing market structure. Maybe Mr. Haldane and others often disagree with those heterodox views, but it is encouraging to see that the idea of building an understanding of market structure and regulation on firm microstructure foundations is gaining acceptance.
* I’d be interested to hear from anyone who can provide an example of the term “macrostructure” used in this way (which I describe more fully in the text below) that pre-dates 1998 or so, when the word popped into my head. I wouldn’t want to deny the actual innovator the props he or she deserves.
Is this switch to allow yuan-denominated futures on the CME a BFD? Is the CME part of ‘government by waiver’ (raising position limits whenever a commodity gets too uppity, though it’s harder to do that with crude trading in London and elsewhere)?
http://www.zerohedge.com/article/another-nail-dollars-coffin-cme-launching-renminbi-futures-august-22
Maybe it has something to do with soybeans. MN and IN had good crops this year, and China was the biggest export market besides traditional trading partners Japan, South Africa and…Iraq?
Comment by Mr. X — July 10, 2011 @ 9:09 pm
A few comments on minimum quote-life, if I may.
As the lifetime of a quote approaches zero, the arguments for/against a minimum quote-life become more and more interesting. Let’s suppose, for example, that the day has arrived when a few of the top HFT systems are able to send and cancel quotes in 1 nanosecond [ns] (light travels 30 cm, or about 1 foot, in 1 ns). At that rate, one could expect to occasionally see 1 billion quotes/sec per stock. OK, extreme example. Let’s slow it down by a factor of 1,000 — replace nanosecond (ns) with microsecond (us). Now we could expect a top rate of 1 million quotes/sec per stock. Still extreme? Down another factor of 1,000 — replace microsecond (us) with millisecond (ms). Now we could expect a top rate of 1,000 quotes/sec per stock — we surpassed this rate in 2009 and are speeding towards microseconds.
Perhaps the definition of a “quote” is what needs to be modernized, or we need to change the name of what used to be called a quote. Not long ago, when a trader (or his auto-trading software) received a quote marked auto-execute, he had a reasonable expectation of being able to “hit” that quote — the only real exception being that another trader might beat him to it. Today, under the same circumstances, that same “auto-execute” quote would almost certainly have expired or been replaced many times before he could act on it.
The change in semantics becomes most important when we look at the definition of the NBBO (National Best Bid/Offer). Per Reg NMS, the NBBO is defined as the best bid/offer received by the securities information processor (CQS, UQDF). But “no one”, as one exchange official pointed out to me, uses CQS or UQDF for that anymore.
So what do they use?
Each exchange computes the NBBO internally from its direct connections to the other exchanges. As the speed of trading increases, the likelihood of two exchanges having the same NBBO decreases. Most of this is because of the pesky speed-of-light limitation.
So how does a trader know whether a trade was routed properly to the exchange with the best price? He doesn’t. It is impossible. You see, each exchange’s view of the other exchanges’ prices exists only in the memory of one of its machines. It is not recorded. There is no audit trail. Sure, each exchange provides book-level data, but that only includes prices for that exchange — not the prices that existed at the other exchanges at the time of each order.
If we are going to allow machines to trade faster and faster, then at the very minimum we should require that each exchange provide audit-trail data, which must include that exchange’s view of all top-of-book price/size changes at the other exchanges trading a stock. In other words, what now exists only in an exchange routing computer’s RAM needs to be captured and made available.
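Schematically, what I have in mind is something like the following: at each order event, snapshot the routing engine’s view of every other exchange’s top of book and write it to an audit record, rather than letting it live only in RAM. The field names and structure here are just for illustration.

```python
# Illustrative sketch of the audit-trail idea: keep each venue's top of book,
# compute the internal NBBO from it, and write a record per order event.
# All names and fields are invented for illustration.
import time, json

class NBBOView:
    def __init__(self):
        self.tops = {}   # venue -> (bid, bid_size, ask, ask_size)

    def on_quote(self, venue, bid, bid_size, ask, ask_size):
        self.tops[venue] = (bid, bid_size, ask, ask_size)

    def nbbo(self):
        best_bid = max((t[0] for t in self.tops.values()), default=None)
        best_ask = min((t[2] for t in self.tops.values()), default=None)
        return best_bid, best_ask

    def audit_record(self, order_id):
        """What would be written to the audit trail for each order event."""
        return json.dumps({"ts": time.time(), "order_id": order_id,
                           "tops": self.tops, "nbbo": self.nbbo()})

view = NBBOView()
view.on_quote("NASDAQ", 10.01, 500, 10.03, 300)
view.on_quote("ARCA",   10.02, 200, 10.04, 400)
print(view.audit_record(order_id=123456))
```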
Eric Hunsader
Nanex
Comment by Eric Hunsader — July 11, 2011 @ 5:33 am