Streetwise Professor

April 24, 2015

A Matter of Magnitudes: Making Matterhorn Out of a Molehill

Filed under: Derivatives,Economics,HFT,Politics,Regulation — The Professor @ 10:47 am

The CFTC released its civil complaint in the Sarao case yesterday, along with the affidavit of Cal-Berkeley’s Terrence Hendershott. Hendershott’s report makes for startling reading. Rather than supporting the lurid claims that Sarao’s actions had a large impact on E Mini prices, and indeed contributed to the Flash Crash, the very small price impacts that Hendershott quantifies undermine these claims.

In one analysis, Hendershott calculates the average return in a five second interval following the observation of an order book imbalance. (I have problems with this analysis because it aggregates all orders up to 10 price levels on each side of the book, rather than focusing on away-from-the market orders, but leave that aside for a moment.) For the biggest order imbalances-over 3000 contracts on the sell side, over 5000 on the buy side-the return impact is on the order of .06 basis points. Point zero six basis points. A basis point is one-one-hundredth of a percent, so we are talking about 6 ten-thousandths of one percent. On the day of the Flash Crash, the E Mini was trading around 1165. A .06 basis point return impact therefore translates into a price impact of .007, which is one-thirty-fifth of a tick. And that’s the biggest impact, mind you.

To put the comparison another way, during the Flash Crash, prices plunged about 9 percent, that is, 900 basis points. Hendershott’s biggest measured impact is therefore 4 orders of magnitude smaller than the size of the Crash.
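The arithmetic behind these magnitude claims can be checked in a few lines. A minimal sketch, using the ~1165 index level, the standard 0.25 E-mini tick, and the 0.06 basis point figure from the affidavit:

```python
import math

price = 1165.0       # approximate E-mini level on the day of the Flash Crash
tick = 0.25          # E-mini S&P 500 tick size
impact_bp = 0.06     # Hendershott's largest order-imbalance impact, in basis points

impact_points = price * impact_bp / 10_000   # impact in index points
frac_of_tick = impact_points / tick          # as a fraction of one tick

crash_bp = 900.0     # the ~9% Flash Crash decline, in basis points
orders_of_magnitude = math.log10(crash_bp / impact_bp)

print(round(impact_points, 3), round(1 / frac_of_tick), round(orders_of_magnitude, 1))
# 0.007 36 4.2
```

So the largest measured impact is about 1/36 of a tick, and the Crash itself is a bit more than four orders of magnitude larger.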

This analysis does not take into account the overall cumulative impact of the entry of an away-from-the market order, nor does it account for the fact that orders can affect prices, prices can affect orders, and orders can affect orders. To address these issues, Hendershott carried out a vector autoregression (VAR) analysis. He estimates the cumulative impact of an order at levels 4-7 of the book, accounting for direct and indirect impacts, through an examination of the impulse response function (IRF) generated by the estimated VAR.* He estimates that the entry of a limit order to sell 1000 contracts at levels 4-7 “has a price impact of roughly .3 basis points.”

Point 3 basis points. Three one-thousandths of one percent. Given a price of 1165, this is a price impact of .035, or about one-seventh of a tick.

Note further that the DOJ, the CFTC, and Hendershott all state that Sarao see-sawed back and forth, turning the algorithm on and off, and that turning off the algorithm caused prices to rebound by approximately the same amount as turning it on caused prices to fall. So, as I conjectured originally, his activity-even based on the government’s theory and evidence-did not bias prices upwards or downwards systematically.

This is directly contrary to the consistent insinuation throughout the criminal and civil complaints that Sarao was driving down prices. For example, the criminal complaint states that during the period of time that Sarao was using the algorithm “the E-Mini price fell by 361 [price] basis points” (which corresponds to a negative return of about 31 basis points). This is two orders of magnitude bigger than the impact calculated based on Hendershott’s .3 return basis point estimate even assuming that the algorithm was working only one way during this interval.
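The conversion is quick to verify (a sketch, again using the ~1165 E-mini level cited above):

```python
price = 1165.0                    # approximate E-mini level
move_points = 361 * 0.01          # "361 price basis points" = 3.61 index points
return_bp = move_points / price * 10_000   # the move as a return, in basis points
ratio = return_bp / 0.3           # vs. Hendershott's .3 bp per 1000-lot order entry
print(round(return_bp), round(ratio))   # ~31 bp, roughly a factor of 100
```

A factor of roughly 100 is the "two orders of magnitude" gap between the alleged price decline and the expert's own estimated impact.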

Further, Sarao was buying and selling in about equal quantities. So based on the theory and evidence advanced by the government, Sarao was causing oscillations in the price of a magnitude of a fraction of a tick, even though the complaints repeatedly suggest his algorithm depressed prices. To the extent he made money, he was making it by trading large volumes and earning a small profit on each trade that he might have enhanced slightly by layering, not by having a big unidirectional impact on prices as the government alleges.

The small magnitudes are a big deal, given the way the complaints are written, in particular the insinuations that Sarao helped cause the Flash Crash. The magnitudes of market price movements dwarf the impacts that the CFTC’s own outside expert calculates. And the small magnitudes raise serious questions about the propriety of bringing such serious charges.

Hendershott repeatedly says his results are “statistically significant.” Maybe he should read Deirdre McCloskey’s evisceration of the Cult of Statistical Significance. It’s economic significance that matters, and his results are economically minuscule compared to the impact alleged. Hendershott has a huge sample size, which can make even trivial economic impacts statistically significant. But it is the economic significance that is relevant. On this, Hendershott is completely silent.

The CFTC complaint has a section labeled “Example of the Layering Algorithm Causing an Artificial Price.” I read with interest, looking for, you know, actual evidence and stuff. There was none. Zero. Zip. There is no analysis of the market price at all. None! This is of a piece with the other assertions of price artificiality, including most notably the effect of the activity on the Flash Crash: a series of conclusory statements either backed by no evidence, or evidence (in the form of the Hendershott affidavit) that demonstrates how laughable the assertions are.

CFTC enforcement routinely whines at the burdens it faces proving artificiality, causation and intent in a manipulation case. Here they have taken on a huge burden and are running a serious risk of getting hammered in court. I’ve already addressed the artificiality issue, so consider causation for a moment. If CFTC dares to try to prove that Sarao caused-or even contributed to-the Crash, it will face huge obstacles. Yes, as Chris Clearfield and James Weatherall rightly point out, financial markets are emergent, highly interconnected and tightly coupled. This creates non-linearities: small changes in initial conditions can lead to huge changes in the state of the system. A butterfly flapping its wings in the Amazon can cause a hurricane in the Gulf of Mexico: but tell me, exactly, which of the billions of butterflies in the Amazon caused a particular storm? And note that it is the nature of these systems that changing the butterfly’s position slightly (or changing the position of other butterflies) can result in a completely different outcome (because such systems are highly sensitive to initial conditions). There were many actors in the markets on 6 May, 2010. Attributing the huge change in the system to the behavior of any one individual is clearly impossible. As a matter of theory, yes, it is possible that, given the state of the system on 6 May, activity that Sarao undertook with no adverse consequences on myriad other days caused the market to crash on that particular day when it didn’t on other days: it is metaphysically impossible to prove it. The very nature of emergent orders makes it impossible to reverse engineer the cause out of the effect.

A few additional points.

I continue to be deeply disturbed by the “sample days” concept employed in the complaints and in Hendershott’s analysis. This smacks of cherry picking. Even if one uses a sample, it should be a random one. And yeah, right, it just so happened that the Flash Crash day and the two preceding days turned up in a random sample. Pure chance! This further feeds suspicions of cherry picking, and opportunistic and sensationalist cherry picking at that.

Further, Hendershott (in paragraph 22 of his affidavit) asserts that there was a statistically significant price decline after Sarao turned on the algorithm, and a statistically significant price increase when he turned it off. But he presents no numbers, whereas he does report impacts of non-Sarao-specific activity elsewhere in the affidavit. This is highly suspicious. Is he too embarrassed to report the magnitude? This is a major omission, because it is the impact of Sarao’s activity, not offering away from the market generally, that is at issue here.

Relatedly, why not run a VAR (and the associated IRF) using Sarao’s orders as one of the variables? After all, this is the variable of interest: what we want to know is how Sarao’s orders affected prices. Hendershott is implicitly imposing a restriction, namely, that Sarao’s orders have the same impact as other orders at the same level of the book. But that is testable.
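The restriction is straightforward to test in principle. Below is a minimal numpy-only sketch of the VAR/IRF machinery on synthetic data; the data, the coefficients, and the bivariate setup are invented for illustration and do not reflect Hendershott's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
# Synthetic series: "orders" stands in for order flow at levels 4-7 of the book,
# constructed here so that an order shock has a small effect on next-period returns.
orders = rng.standard_normal(T)
returns = np.empty(T)
returns[0] = 0.0
for t in range(1, T):
    returns[t] = 0.1 * returns[t - 1] - 0.05 * orders[t - 1] \
                 + 0.01 * rng.standard_normal()

# Fit a VAR(1) by least squares: y_{t+1} = A y_t + noise.
Y = np.column_stack([orders, returns])
X, Ylead = Y[:-1], Y[1:]
A = np.linalg.lstsq(X, Ylead, rcond=None)[0].T

# Impulse response of returns to a unit order shock, horizons 0..9;
# summing it gives the cumulative price impact of the order.
shock = np.array([1.0, 0.0])
irf = [(np.linalg.matrix_power(A, h) @ shock)[1] for h in range(10)]
cumulative_impact = sum(irf)
print(round(cumulative_impact, 3))
```

Adding a third variable for Sarao's own orders, and comparing its IRF to the IRF of everyone else's orders at the same book levels, is exactly the test the affidavit omits.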

Moreover, Hendershott’s concluding paragraph (paragraph 23) is incredibly weak, and smacks of post hoc, ergo propter hoc reasoning. He insinuates that Sarao contributed to the Crash, but oddly distances himself from responsibility for the claim, throwing it on regulators instead: “The layering algorithm contributed to the overall Order Book imbalances and market conditions that the regulators say led to the liquidity deterioration prior to the Flash Crash.” Uhm, Terrence, you are the expert here: it is incumbent on you to demonstrate that connection, using rigorous empirical methods.

In sum, the criminal and civil complaints make a Matterhorn out of a molehill, and a small molehill at that. And don’t take my word for it: take the “[declaration] under penalty of perjury” of the CFTC’s expert. This is a matter of magnitudes, and magnitudes matter. The CFTC’s own expert estimates very small impacts, and impacts that oscillate up and down with the activation and de-activation of the algorithm.

Yes, Sarao’s conduct was dodgy, clearly, and there is a colorable case that he did engage in spoofing and layering. But the disparity between the impact of his conduct as estimated by the government’s own expert and the legal consequences that could arise from his prosecution is so huge as to be outrageous.

Particularly so since over the years CFTC has responded to acts that have caused huge price distortions, and inflicted losses in nine and ten figures, with all of the situational awareness of Helen Keller. It is as if the enforcers see the world through a fun house mirror that grotesquely magnifies some things, and microscopically shrinks others.

In proceeding as they have, DOJ and the CFTC have set off a feeding frenzy that could have huge regulatory and political impacts that affect the exchanges, the markets, and all market participants. CFTC’s new anti-manipulation authority permits it to sanction reckless conduct. If it were held to that standard, the Sarao prosecution would earn it a long stretch of hard time.

*Hendershott’s affidavit says that Exhibit 4 reports the IRF analysis, but it does not.



April 22, 2015

Spoofing: Scalping Steroids?

Filed under: Derivatives,Economics,Exchanges,HFT,Regulation — The Professor @ 5:35 pm

The complaint against Sarao contains some interesting details. In particular, it reports his profits and quantities traded for nine days.

First, quantities bought and sold are almost always equal. That is characteristic of a scalper.

Second, for six of the days, he earned an average of .63 ticks per round turn. That is about the profit you’d expect a scalper to realize: due to adverse selection, a market maker typically doesn’t earn the full quoted spread. On only one of these days is the average profit per round turn more than a tick, and then just barely.

Third, there is one day (4 August, 2011) where he earned a whopping 19.6 ticks per round trip ($4 million profit on 16695 buy/sells). I find that hard to believe.

Fourth, there are two days that the government reports the profit but not the volume. One of these days is 6 May, 2010, the Flash Crash day. I find that omission highly suspicious, given that this is the most important day.

Fifth, I again find it odd, and potentially problematic for the government, that it charges him with fraud, manipulation, and spoofing on only 9 days when he allegedly used the layering strategy on about 250 days. How did the government establish that trading on some days was illegal, and on other days it wasn’t?
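The per-round-turn figures above can be sanity-checked against the E-mini's tick value (a sketch; the $12.50 tick value is the standard contract specification, the trade counts are from the complaint):

```python
tick_value = 12.50     # E-mini S&P 500: 0.25 index points x $50 multiplier

# 4 August 2011: the reported ~$4 million profit over 16695 round turns.
profit = 4_000_000.0
round_turns = 16_695
ticks_per_rt = profit / round_turns / tick_value
print(round(ticks_per_rt, 1))   # ~19.2, in the ballpark of the 19.6 figure

# For comparison, a scalper earning .63 ticks per round turn on the same volume:
scalper_profit = 0.63 * round_turns * tick_value
print(round(scalper_profit))
```

The comparison shows why the 4 August number is such an outlier: it is roughly thirty times the profit the same volume would generate at the scalper-like rate seen on the other days.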

The most logical explanation of all this is that Sarao was basically scalping-market making-and if he spoofed, he did so to enhance the profitability of this activity, either by scaring off competition at the inside market, or inducing a greater flow of market orders, or both.

One implication of this is that scalping does not tend to cause prices to move one direction or the other. It is passive, and balances buys and sells. This will present great difficulties in pursuing the manipulation charges, though not the spoofing charges and perhaps not the fraud charges.



Did Spoofing Cause the Flash Crash? Not So Fast!

Filed under: Derivatives,Economics,HFT,Regulation — The Professor @ 12:41 pm

The United States has filed criminal charges against Navinder Sarao, of London, for manipulation via “spoofing” (in the form of “layering”) and “flashing.” The most attention-grabbing aspect of the complaint is that Sarao engaged in this activity on 6 May, 2010-the day of the Flash Crash. Journalists have run wild with this allegation, concluding that he caused the Crash.

Sarao’s layering strategy involved placement of sell orders at various levels more than two ticks away from the best offer. At his request, “Trading Software Company #1” (I am dying to know who that would be) created an algorithm implemented in a spreadsheet that would cancel these orders if the inside market got close to these resting offers, and replace them with new orders multiple levels away from the new inside market. The algorithm would also cancel orders if the depth in the book at better prices fell below a certain level. Similarly, if the market moved away from his resting orders, those orders would be cancelled and re-entered at the designated distances from the new inside market level.
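As described, the algorithm is simple to state: keep resting sells pinned a fixed number of levels above the moving inside market, cancelling and replacing as it moves, and pull everything when the book gets thin. A minimal sketch of that repositioning logic (my reconstruction from the complaint's description; the 4-7 level offsets match the complaint, the depth threshold mechanics are illustrative):

```python
def reposition(best_offer, resting, tick=0.25, offsets=(4, 5, 6, 7),
               min_depth=None, depth_at_better=0):
    """Cancel-and-replace resting sells to sit `offsets` levels above the
    inside market; pull everything if depth at better prices is too thin."""
    cancels = list(resting)
    if min_depth is not None and depth_at_better < min_depth:
        return cancels, []            # book too thin: stand aside entirely
    new_orders = [best_offer + k * tick for k in offsets]
    return cancels, new_orders

# Market ticks down: the resting orders chase it, staying 4-7 levels away.
cancels, orders = reposition(1164.75, [1166.0, 1166.25, 1166.5, 1166.75])
print(orders)   # [1165.75, 1166.0, 1166.25, 1166.5]
```

The point of the sketch is that the orders are designed never to be reachable: they move whenever the market does.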

The complaint is mystifying on the issue of how Sarao made money (allegedly $40 million between 2010 and 2014). To make money, you need to buy low, sell high (you read it here first!), which requires actual transactions. And although the complaint details how many contracts Sarao traded and how many trades (e.g., 10682 buys totaling 74380 lots and 8959 sells totaling 74380 lots on 5 May, 2010-big numbers), it doesn’t say how the trades were executed and what Sarao’s execution strategy was.

The complaint goes into great detail regarding the allegedly fraudulent orders that were never executed, but it is maddeningly vague on the trades that were. It says only:

[W]hile the dynamic layering technique exerted downward pressure on the market SARAO typically executed a series of trades to exploit his own manipulative activity by repeatedly selling futures only to buy them back at a slightly lower price. Conversely, when the market moved back upward as a result of SARAO’s ceasing the dynamic layering technique, SARAO typically did the opposite, that is he repeatedly bought contracts only to sell them at a slightly higher price.

But how were these buys and sells executed? Market orders? Limit orders? Since crossing the spread is expensive, I seriously doubt he used market orders: even if the strategy drove down both bids and offers, using aggressive orders would have forced Sarao to pay the spread, making it impossible to profit. What was the sequence? The complaint suggests that he sold (bought) after driving the price down (up). This seems weird: it would make more sense to do the reverse.

In previous cases, Moncada and Coscia (well-summarized here), the scheme allegedly worked by placing limit orders on both sides of the market in unbalanced quantities, and see-sawing back and forth. For instance, the schemers would allegedly place a small buy order at the prevailing bid, and then put big away from the market orders on the offer side. Once the schemer’s bid was hit, the contra side orders would be cancelled, and he would then switch sides: entering a sell order at the inside market and large away-from-market buys. This strategy is best seen as a way of earning the spread. Presumably its intent is to increase the likelihood of execution of the at-the-market order by using the big contra orders to induce others with orders at the inside market to cancel or reprice. This allowed the alleged manipulators to earn the spread more often than they would have without using this “artifice.”

But we don’t have that detail in Sarao. The complaint does describe the “flashing” strategy in similar terms as in Moncada and Coscia, (i.e., entering limit orders on both sides of the market) but it does not describe the execution strategy in the layering scheme, which the complaint calls “the most prominent manipulative technique he used.”

If, as I conjecture, he was using something like Moncada and Coscia were alleged to have employed, it is difficult to see how his activities would have caused prices to move systematically one direction or the other as the government alleges. Aggressive orders tend to move the market, and if my conjecture is correct, Sarao was using passive orders. Further, he was buying and selling in almost (and sometimes exactly) equal quantities. Trading involving lots of cancellations plus trades in equal quantities at the bid and offer shares similarities with classic market making strategies. This should not move price systematically one way or the other.

But with regard to both the Flash Crash and 4 May, 2010, the complaint insinuates that Sarao moved the price down:

As the graph displays, SARAO successfully modified nearly all of his orders to stay between levels 4 and 7 of the sell side of the order book. What is more, Exhibit A shows the overall decline in the market price of the E-Minis during this period.

But on 4 May, Sarao bought and sold the exact same number of contracts (65,015). How did that cause price to decline?

Attributing the Flash Crash to his activity is also highly problematic. It smacks of post hoc, ergo propter hoc reasoning. Or look at it this way. The complaint alleges that Sarao employed the layering strategy about 250 days, meaning that he caused 250 out of the last one flash crashes. I can see the defense strategy. When the government expert is on the stand, the defense will go through every day. “You claim Sarao used layering on this day, correct?” “Yes.” “There was no Flash Crash on that day, was there?” “No.” Repeating this 250 times will make the causal connection between his trading and the Flash Crash seem very problematic, at best. Yes, perhaps the market was unduly vulnerable to dislocation in response to layering on 6 May, 2010, and hence his strategy might have been the straw that broke the camel’s back, but that is a very, very, very hard case to make given the very complex conditions on that day.

There is also the issue of who this conduct harmed. Presumably HFTs were the target. But how did it harm them? If my conjecture about the strategy is correct, it increased the odds that Sarao earned the spread, and reduced the odds that HFTs earned the spread. Alternatively, it might have induced some people (HFTs, or others) to submit market orders that they wouldn’t have submitted otherwise. Further, HFT strategies are dynamic, and HFTs learn. One puzzle is why away from the market orders would be considered informative, particularly if they are used frequently in a fraudulent way (i.e., they do not communicate any information). HFTs mine huge amounts of data to detect patterns. The complaint alleges Sarao engaged in a pronounced pattern of trading that certainly HFTs would have picked up, especially since allegations of layering have been around ever since the markets went electronic. This makes it likely that there was a natural self-correcting mechanism that would tend to undermine the profitability of any manipulative strategy.

There are also some interesting legal issues. The government charges Sarao under the pre-Dodd-Frank Section 7 (anti-manipulation) of the Commodity Exchange Act. Proving this manipulation claim requires proof of price artificiality, causation, and intent. The customized software might make the intent easy to prove in this case. But price artificiality and causation will be real challenges, particularly if Sarao’s strategy was similar to Moncada’s and Coscia’s. Proving causation in the Flash Crash will be particularly challenging, given the complex circumstances of that day, and the fact that the government has already laid the blame elsewhere, namely on the Waddell & Reed trades. Causation and artificiality arguments will also be difficult to make given that the government is charging him only for a handful of days that he used the strategy. One suspects some cherry-picking. Then, of course, there is the issue of whether the statute is Constitutionally vague. Coscia recently lost on that issue, but Radley won on it in Houston. It’s an open question.

I am less familiar with Title 18 fraud claims, or the burden of proof regarding them. Even under my conjecture, it is plausible that HFTs were defrauded from earning the spread, or that some traders paid the spread on trades they wouldn’t have made. But if causation is an element here, there will be challenges. It will require showing how HFTs (or other limit order traders) responded to the spoofing. That won’t be easy, especially since HFTs are unlikely to want to reveal their algorithms.

The spoofing charge is based on the post-Frankendodd CEA, with its lower burden of proof (recklessness not intent, and no necessity of proving an artificial price). That will be easier for the government to make stick. That gives the government considerable leverage. But it is largely unexplored territory: this is almost a case of first impression, or at least it is proceeding in parallel with other cases based on this claim, and so there are no precedents.

There are other issues here, including most notably the role of CME and the CFTC. I will cover those in a future post. Suffice it to say that this will be a complex and challenging case going forward, and the government is going to have to do a lot more explaining before it is possible to understand exactly what Sarao did and the impact he had.



April 21, 2015

Gary Gensler Resurfaces as Hillary!’s CFO: Is He Our Next Treasury Secretary?

Filed under: HFT,Politics — The Professor @ 7:27 pm

At a couple of conferences recently, people asked me what Gary Gensler is up to. I said, “I don’t know. It’s not like GiGi and I are buddies.” (True fact: he had me banned from the CFTC building.) Well, now we all know what he’s up to: Gensler has landed as the CFO of Hillary’s presidential campaign.

When Gensler was CFTC chair, I surmised he had ambitions to replace Timmy! as Secretary of the Treasury. But that went to a Rubinoid, Jack Lew. There was also talk of Gensler running for the Senate from Maryland, and Mikulski has announced her retirement, but more well-known Dem pols in the state are poised to run, so that’s not an option.

Taking the campaign CFO job probably does give Gensler an inside track on the coveted SecTreas job. If Hillary wins. If.

Yes, I know she is the odds on favorite. But she was shopping for Oval Office curtains in 2008, and we know how that turned out.

Hillary’s problem is, well, Hillary. A lot of people like the idea of Hillary. It’s the real person that is the problem.

This has been illustrated by her slow-motion-train-wreck of a campaign kickoff. There’s an old expression: if you can fake sincerity, you have it made. Hillary hasn’t quite mastered that yet. The launch and the comically contrived “spontaneous” road trip to Iowa were about as authentic as Velveeta. It was a remarkable act of will, because you can just tell how much Hillary hates to be with actual people. Further, she has operated in a bubble, protected by some Harry Potteresque charm that repels all serious questions from serious people.

Eventually, though, her personality will shine through. And that’s the problem. Playing word association, if you say “Hillary”, I say: shrill, angry, bitter, entitled, strident, rigid, ideological, dishonest, hyper-partisan, vengeful, arrogant, paranoid, and . . . I could go on. And on. And on. And she’s not that bright: whoever calls her “the smartest woman in the world” is a virulent misogynist, with an obviously low opinion of women. I, on the other hand, think so highly of women that I would prefer to select the next president by lot from America’s 150 million or so adult females than by an election in which Hillary is the Democratic Party standard bearer. 150 million-to-one: I’ll take those odds over better than even any day.

She is also an awful politician. She has no political instincts whatsoever. You can see the gears grinding behind her phony grin, trying to figure out what would be the politically advantageous thing to say. Today’s persona is Class Warrior. She recently said the one percenters must be “toppled.” Actually, I could kinda go for that, because despite her past protestations of being as poor as a church mouse, she is definitely in that class now.

In other words, she’s no Bill, who was, if nothing else, a natural politician with a magnetism and suppleness that could overcome his other deficiencies.

Which brings up another issue: the psychodrama between Hillary and Bill. You would think that Bill is a major asset, but I wonder. She wants to win on her own, and has put up with decades of humiliation from him to advance her ambitions: will she put herself in a position where she has to accept his help to win? Nor are Bill’s incentives unmixed. Will he want to play second fiddle as the first First Husband? Hillary’s campaign in 2008 was a soap opera: will 2016 be any different?

Then there’s the old baggage, which Hillary has more of than the lost and found at JFK. (I contributed, in a modest way, to that collection, many years ago, as detailed in the Senate Whitewater Report and the Congressional Record.) It is quite a remarkable record, stretching into the distant past, when she was fired from the Watergate Committee staff, to Arkansas skullduggery, to various White House scandals, to her service as Secretary of State (Benghazi, preventing naming Boko Haram as a terrorist organization, the Reset), to the very present (the stench of cronyism and influence peddling at the Clinton Foundation, and the Immaculate Abortion of her private email server).

Further, she’s not getting any younger, and it shows.

So she has many liabilities. What about the assets? They are formidable, particularly a national media that may not like her, but hates Republicans more. They can be counted on to avoid criticizing her, to form a defensive phalanx around her, and to attack her Republican adversary relentlessly. That didn’t help her in the primaries in 2008, when the fickle press found someone even more attractive. But there is no Barack Obama on offer in 2015-2016.

She also has a relentless fundraising machine, a reliable and experienced party and campaign apparatus, union support, and a solid base who would vote for Godzilla over a Republican.

Thus, she has great institutional advantages that will go far in overcoming her severe personal deficiencies.

But her biggest asset is that you can’t beat somebody with nobody, and right now the Republicans are offering up national nobodies. Maybe a somebody will emerge, but I wouldn’t count on it.

All meaning that although Hillary is a flawed person, and a flawed candidate, she has many advantages. So, as much as it pains me to say so, GiGi’s wish may come true. And as bad as a Gensler Treasury would be, it pains me even more to say that it likely would be one of the best parts of a Hillary Clinton Administration.


March 1, 2015

The Clayton Rule on Speed

Filed under: Commodities,Derivatives,Economics,Exchanges,HFT,Politics,Regulation — The Professor @ 1:12 pm

I have written often of the Clayton Rule of Manipulation, named after a cotton broker who, in testimony before Congress, uttered these wise words:

“The word ‘manipulation’ . . . in its use is so broad as to include any operation of the cotton market that does not suit the gentleman who is speaking at the moment.”

High Frequency Trading has created the possibility of the promiscuous application of the Clayton Rule, because there are a lot of things about HFT that do not suit a lot of gentlemen at this moment, and a lot of ladies for that matter. The CFTC’s Frankendodd-based Disruptive Practices Rule, plus the fraud-based manipulation Rule 180.1 (also a product of Dodd-Frank), provide the agency’s enforcement staff with the tools to pursue pretty much anything that does not suit them at any particular moment.

At present, the thing that least suits government enforcers-including not just CFTC but the Department of Justice as well-is spoofing. As I discussed late last year, the DOJ has filed criminal charges in a spoofing case.

Here’s my description of spoofing:

What is spoofing? It’s the futures market equivalent of Lucy and the football. A trader submits buy (sell) orders above (below) the inside market in the hope that this convinces other market participants that there is strong demand (supply) for (of) the futures contract. If others are so fooled, they will raise their bids (lower their offers). Right before they do this, the spoofer pulls his orders just like Lucy pulls the football away from Charlie Brown, and then hits (lifts) the higher (lower) bids (offers). If the pre-spoof prices are “right”, the post-spoof bids (offers) are too high (too low), which means the spoofer sells high and buys low.
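The sequence can be put in stylized form (a toy illustration of the description above; the prices and the size of the market's reaction are invented):

```python
# Stylized buy-side spoof: fake demand, then sell into the reaction.
bid, ask = 100.00, 100.25
fair_value = (bid + ask) / 2      # pre-spoof midpoint, taken as "right"

# 1. Spoofer layers large buy orders below the bid, signalling strong demand.
spoof_buys = [99.75, 99.50, 99.25]

# 2. Other participants read the imbalance and raise their bids.
bid += 0.25

# 3. Lucy pulls the football: cancel the layers and hit the improved bid.
spoof_buys.clear()
sale_price = bid

profit_vs_fair = sale_price - fair_value
print(profit_vs_fair)   # sold 0.125 above the pre-spoof fair value
```

The whole edge comes in step 2: if other participants stop reacting to away-from-the-market orders, the strategy earns nothing, which is relevant to the self-correction point below.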

Order cancellation is a crucial component of the spoofing strategy, and this has created widespread suspicion about the legitimacy of order cancellation generally. Whatever you think about spoofing, if such futures market rule enforcers (exchanges, the CFTC, or the dreaded DOJ) begin to believe that traders who cancel orders at a high rate are doing something nefarious, and begin applying the Clayton Rule to such traders, the potential for mischief-and far worse-is great.

Many legitimate strategies involve high rates of order cancellation. In particular, market making strategies, including market making strategies pursued by HFT firms, typically involve high cancellation rates, especially in markets with small ticks, narrow spreads, and high volatility. Market makers can quote tighter spreads if they can adjust their quotes rapidly in response to new information. High volatility essentially means a high rate of information flow, and a need to adjust quotes frequently. Moreover, HFT traders can condition their quotes in a given market based on information (e.g., trades or quote changes) in other markets. Thus, to be able to quote tight markets in these conditions, market makers need to be able to adjust quotes frequently, and this in turn requires frequent order cancellations.
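The link between requoting and cancellation is purely mechanical: every mid-price move forces a quoting market maker to cancel and replace both of its stale quotes. A toy simulation makes the point (assumed random-walk mid and fill probability; not a model of any actual HFT):

```python
import random

random.seed(1)
tick = 0.25
mid = 1165.00
cancels = trades = 0

for _ in range(1000):
    # New information arrives: the mid moves up or down a tick most steps.
    move = random.choice([-tick, 0.0, tick])
    if move:
        mid += move
        cancels += 2        # pull both stale quotes, repost around the new mid
    elif random.random() < 0.1:
        trades += 1         # occasionally a resting quote is filled instead

print(cancels, trades)
```

Even in this crude setup, with no spoofing whatsoever, cancellations outnumber fills by a wide margin; volatile, small-tick markets push the ratio higher still.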

Order cancellation is also a means of protecting market making HFTs from being picked off by traders with better information. HFTs attempt to identify when order flow becomes “toxic” (i.e., is characterized by a large proportion of better-informed traders) and rationally cancel orders when this occurs. This reduces the cost of making markets.

This creates a considerable tension if order cancellation rates are used as a metric to detect potential manipulative conduct. Tweaking strategies to reduce cancellation rates, and thereby the probability of getting caught in an enforcement dragnet, increases the frequency with which a trader is picked off and raises trading costs: the rational response is to quote less aggressively, which reduces market liquidity. But not doing so raises the risk of a torturous investigation, or worse.

What’s more, the complexity of HFT strategies will make ex post forensic analyses of traders’ activities fraught with potential error. There is likely to be a high rate of false positives-the identification of legitimate strategies as manipulative. This is particularly true for firms that trade intensively in multiple markets. With some frequency, such firms will quote one side of the market, cancel, and then take liquidity from the other side of the market (the pattern that is symptomatic of spoofing). They will do that because that can be the rational response to some patterns of information arrival. But try explaining that to a suspicious regulator.

The problem here inheres in large part in the inductive nature of legal reasoning, which generalizes from specific cases and relies heavily on analogy. With such reasoning there is always a danger that a necessary condition (“all spoofing strategies involve high rates of order cancellation”) morphs into a sufficient condition (“high rates of order cancellation indicate manipulation”). This danger is particularly acute in complex environments in which subtle differences in strategies that are difficult for laymen to grasp (and may even be difficult for the strategist or experts to explain) can lead to very different conclusions about their legitimacy.

The potential for a regulatory dragnet directed against spoofing to catch legitimate strategies by mistake is probably the greatest near-term concern traders should have, because such a dragnet is underway. But the widespread misunderstanding and suspicion of HFT more generally means that over the medium to long term, the scope of the Clayton Rule may expand dramatically.

This is particularly worrisome given that suspected offenders are at risk of criminal charges. This dramatic escalation in the stakes raises compliance costs, because every inquiry, even from an exchange, demands a fully-lawyered response. Moreover, it will make firms avoid some perfectly rational strategies that reduce the costs of making markets, thereby reducing liquidity and inflating trading costs for everyone.

The vagueness of the statute and the regulations that derive from it pose a huge risk to HFT firms. The only saving grace is that this vagueness may result in the law being declared unconstitutional and preventing it from being used in criminal prosecutions.

Although written in a non-official capacity, an article by CFTC attorney Gregory Scopino illustrates how expansive regulators may become in their criminalization of HFT strategies. In a Connecticut Law Review article, Scopino questions the legality of “high-speed ‘pinging’ and ‘front running’ in futures markets.” It’s frightening to watch him stretch the concepts of fraud and “deceptive contrivance or device” to cover a variety of defensible practices which he seems not to understand.

In particular, he is very exercised by “pinging”, that is, the submission of small orders in an attempt to detect large orders. As remarkable as it might sound, his understanding of this seems to be even more limited than Michael Lewis’s: see Peter Kovac’s demolition of Lewis in his Not so Fast.

When there is hidden liquidity (due to non-displayed orders or iceberg orders), it makes perfect sense for traders to attempt to learn about market depth. This can be valuable information for liquidity providers, who get to know about competitive conditions in the market and can better gauge the potential profitability of supplying liquidity. It can also be valuable to informed strategic traders, whose optimal trading strategy depends on market depth (as Pete Kyle showed some 30 years ago): see a nice paper by Clark-Joseph on such “exploratory trading”, which sadly has been misrepresented by many (including Lewis and Scopino) to mean that HFT firms front run, a conclusion that Clark-Joseph explicitly denies. To call either of these strategies front running, or to deem them deceptive or fraudulent, is disturbing, to say the least.

Scopino and other critics of HFT also object to the alleged practice of order anticipation, whereby a trader infers the existence of a large order being executed in pieces as soon as the first pieces trade. I say alleged because, as Kovac points out, the noisiness of order flow sharply limits the ability to detect a large latent order on the basis of a few trades.

What’s more, as I wrote in some posts on HFT just about a year ago, and in a piece in the Journal of Applied Corporate Finance, it’s by no means clear that order anticipation is inefficient, due to the equivocal nature of informed trading. Informed trading reduces liquidity, making it particularly perverse that Scopino wants to treat order anticipation as a form of insider trading (i.e., trading on non-public information). Talk about getting things totally backwards: this would criminalize a type of trading that actually impedes liquidity-reducing informed trading. Maybe there’s a planet on which that makes sense, but its sky ain’t blue.

Fortunately, these are now just gleams in an ambitious attorney’s eye. But from such gleams often come regulatory progeny. Indeed, since there is a strong and vocal constituency to impede HFT, the political economy of regulation tends to favor such an outcome. Regulators gonna regulate, especially when importuned by interested parties. Look no further than the net neutrality debacle.

In sum, the Clayton Rule has been around for the better part of a century, but I fear we ain’t seen nothing yet. HFT doesn’t suit a lot of people, often because of ignorance or self-interest, and as Mr. Clayton observed so long ago, it’s a short step from that to an accusation of manipulation. Regulators armed with broad, vague, and elastic authority (and things don’t get much broader, vaguer, or more elastic than “deceptive contrivance or device”) pose a great danger of running amok and impairing market performance in the name of improving it.


January 25, 2015

From Birth to Adulthood in a Few Short Years: HFT’s Predictable Convergence to Competitive Normalcy

Filed under: Commodities,Derivatives,Economics,Exchanges,HFT — The Professor @ 2:05 pm

Once upon a time, high frequency trading-HFT-was viewed as a juggernaut, a money-making machine that would have Wall Street and LaSalle Street in its thrall. These dire predictions were based on the remarkable growth in HFT in 2009 and 2010 in particular, but the narrative outlived the heady growth.

In fact, HFT has followed the trajectory of any technological innovation in a highly competitive environment. At its inception, it was a dramatically innovative way of performing longstanding functions undertaken by intermediaries in financial markets: market making and arbitrage. It performed these functions much more efficiently than the incumbents did, and it rapidly displaced the old-style intermediaries. During this transitional period, the first-movers earned supernormal profits because of cost and speed advantages over the old school intermediaries. HFT market share expanded dramatically, and the profits attracted expansion in the capital and capacity of the first-movers, and the entry of new firms. And as day follows night, this entry of new HFT capacity and the intensification of competition dissipated these profits. This is basic economics in action.

According to the Tabb Group, HFT profits declined from $7 billion in 2009 to only $1.3 billion today. Moreover, HFT market share has declined from its peaks in both equities (from 61 percent in 2009 to 48.4 percent today) and futures (from 64 percent in 2011 to 60 percent today). The profit decline and the topping out of market share are both symptomatic of a sector settling down into a steady state of normal competitive profits and growth commensurate with the increase in the size of the overall market in the aftermath of a technological shock. Fittingly, this convergence in the HFT sector has been notable for its rapidity, with the transition from birth to adulthood occurring within a mere handful of years.

A little perspective is in order, too. Equity market volume in the US is on the order of $100 billion per day. HFT profits now represent on the order of 1/250th of one percent of equity turnover. Since the $1.3 billion figure includes profits from derivatives, HFT profits as a share of total turnover across everything HFT firms trade are smaller still. In other words, although they trade a lot, their margins are razor thin. This is another sign of a highly competitive market.
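The back-of-envelope behind that fraction can be checked directly. The 250 trading days and the round-number turnover figure are my assumptions; the result lands in the same ballpark as the 1/250th-of-one-percent figure above, with the exact fraction depending on the trading-day count assumed.

```python
# Back-of-envelope: HFT equity profits as a share of US equity turnover.
annual_profit = 1.3e9        # Tabb Group estimate, dollars per year
daily_turnover = 100e9       # dollars per day (order of magnitude)
trading_days = 250           # assumed

daily_profit = annual_profit / trading_days      # ~$5.2 million per day
share = daily_profit / daily_turnover            # fraction of daily turnover
print(f"profit share of turnover: {share:.6%}")
print(f"i.e., roughly 1/{0.01 / share:.0f} of one percent")
```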

We are now witnessing further evidence of the maturation of HFT. There is a pronounced trend to consolidation, with HFT pioneer Allston Trading exiting the market, and DRW purchasing Chopper Trading. Such consolidation is a normal phase in the evolution of a sector that has experienced a technological shock. Expect to see more departures and acquisitions as the industry (again predictably) turns its focus to cost containment as competition means that the days of easy money are fading in the rearview mirror.

It’s interesting in this context to think about Schumpeter’s argument in Capitalism, Socialism, and Democracy.  One motivation for the book was to examine whether there was, as Marx and earlier classical economists predicted, a tendency for profit to diminish to zero (where costs of capital are included in determining economic profit).  That may be true in a totally static setting, but as Schumpeter noted the development of new, disruptive technologies overturns these results.  The process of creative destruction can result in the introduction of a sequence of new technologies or products that displace the old, earn large profits for a while, but are then either displaced by new disruptive technologies, or see profits vanish due to classical/neoclassical competitive forces.

Whether it is by the entry of a new destructively creative technology, or the inexorable forces of entry and expansion in a technologically static setting, one expects profits earned by firms in one wave of creative destruction to decline.  That’s what we’re seeing in HFT.  It was definitely a disruptive technology that reaped substantial profits at the time of its introduction, but those profits are eroding.

That shouldn’t be a surprise.  But it no doubt is to many of those who have made apocalyptic predictions about the machines taking over the earth.  Or the markets, anyways.

Or, as Herb Stein famously said as a caution against extrapolating from current trends, “If something cannot go on forever, it will stop.” Those making dire predictions about HFT were largely extrapolating from the events of 2008-2010, and ignored the natural economic forces that constrain growth and dissipate profits. HFT is now a normal, competitive business earning normal, competitive profits.  And hopefully this reality will eventually sink in, and the hysteria surrounding HFT will fade away just as its profits did.


July 21, 2014

Doing Due Diligence in the Dark

Filed under: Exchanges,HFT,Regulation — The Professor @ 8:39 pm

Scott Patterson, WSJ reporter and the author of Dark Pools, has a piece in today’s Journal about the Barclays LX story. He finds, lo and behold, that several users of the pool had determined that they were getting poor executions:

Trading firms and employees raised concerns about high-speed traders at Barclays PLC’s dark pool months before the New York attorney general alleged in June that the firm lied to clients about the extent of predatory trading activity on the electronic trading venue, according to people familiar with the firms.

Some big trading outfits noticed their orders weren’t getting the best treatment on the dark pool, said people familiar with the trading. The firms began to grow concerned that the poor results resulted from high-frequency trading, the people said.

In response, at least two firms—RBC Capital Markets and T. Rowe Price Group Inc.—boosted the minimum number of shares they would trade on the dark pool, letting them dodge high-speed traders, who often trade in small chunks of 100 or 200 shares, the people said.

This relates directly to a point that I made in my post on the Barclays story. Trading is an experience good. Dark pool customers can evaluate the quality of their executions. If a pool is not screening out opportunistic traders, execution costs will be high relative to other venues that do a better job of screening, and users who monitor their execution costs will detect this. Regardless of what a dark pool operator says about what it is doing, the proof of the pudding is in the trading, as it were.

The Patterson article shows that at least some buy side firms do the necessary analysis, and can detect a pool that does not exclude toxic flows.

This long FT piece relies extensively on quotes from Hirander Misra, one of the founders of Chi-X, to argue that many fund managers have been ignorant of the quality of executions they get on dark pools. The article talked to two anonymous fund managers who say they don’t know how dark pools work.

The stated implication here is that regulation is needed to protect the buy side from unscrupulous pool operators.

A couple of comments. First, not knowing how a pool works doesn’t really matter: execution quality is what matters, and it can be measured. I don’t know all of the technical details of the operation of my car or the computer I am using, but I can evaluate their performance, and that’s what matters.

Second, this is really a cost-benefit issue. Monitoring of performance is costly. But so is regulation and litigation. Given that market participants have the biggest stake in measuring pool performance properly, and can develop more sophisticated metrics, there are strong arguments in favor of relying on monitoring.  Regulators can, perhaps, see whether a dark pool does what it advertises it will do, but this is often irrelevant because it does not necessarily correspond closely to pool execution costs, which is what really matters.

Interestingly, one of the things that got a major dark pool (Liquidnet) in trouble was that it shared information about the identities of existing clients with prospective clients. This presents interesting issues. Sharing such information could economize on monitoring costs. If a big firm (like a T. Rowe) trades in a pool, this can signal to other potential users that the pool does a good job of screening out the opportunistic. This allows them to free ride off the monitoring efforts of the big firm, which economizes on monitoring costs.

Another illustration of how things are never simple and straightforward when analyzing market structure.

One last point. Some of the commentary I’ve read recently uses the prevalence of HFT volume in a dark pool as a proxy for how much opportunistic trading goes on in the pool. This is a very dangerous shortcut, because as I (and others) have written repeatedly, there are all different kinds of HFT. Some adds to liquidity, some consumes it, and some may be outright toxic/predatory. Market-making HFT can enhance dark pool liquidity, which is probably why dark pools encourage HFT participation. Indeed, it is hard to understand how a pool could benefit from encouraging the participation of predatory HFT, especially if it lets such firms trade for free. This drives away the paying customers, particularly when the paying customers evaluate the quality of their executions.

Evaluating execution quality and cost could be considered a form of institutional trader due diligence. Firms that do so can protect themselves-and their investor-clients-from opportunistic counterparties. Even though the executions are done in the dark, it is possible to shine a light on the results. The WSJ piece shows that many firms do just that. The question of whether additional regulation is needed boils down to the question of whether the cost and efficacy of these self-help efforts are superior to those of regulation.


July 15, 2014

Oil Futures Trading In Troubled Waters

Filed under: Commodities,Derivatives,Economics,Energy,Exchanges,HFT,Regulation — The Professor @ 7:16 pm

A recent working paper by Pradeep Yadav, Michel Robe and Vikas Raman tackles a very interesting issue: do electronic market makers (EMMs, typically HFT firms) supply liquidity differently than locals did during the floor’s heyday? The paper has attracted a good deal of attention, including this article in Bloomberg.

The most important finding is that EMMs in crude oil futures do tend to reduce liquidity supply during high volatility/stressed periods, whereas crude futures floor locals did not. They explain this by invoking an argument I made 20 years ago in my research comparing the liquidity of floor-based LIFFE to the electronic DTB: the anonymity of electronic markets makes market makers there more vulnerable to adverse selection. From this, the authors conclude that an obligation to supply liquidity may be desirable.

These empirical conclusions seem supported by the data, although, as I describe below, the scant description of the methodology and some reservations based on my knowledge of the data make me somewhat circumspect in my evaluation.

But my biggest problem with the paper is that it seems to miss the forest for the trees. The really interesting question is whether electronic markets are more liquid than floor markets, and whether the relative liquidity in electronic and floor markets varies between stressed and non-stressed markets. The paper provides some intriguing results that speak to that question, but then the authors ignore it altogether.

Specifically, Table 1 has data on spreads from the electronic NYMEX crude oil market in 2011, and from the floor NYMEX crude oil market in 2006. The mean and median spreads in the electronic market: .01 percent. Given a roughly $100 price, this corresponds to one tick ($.01) in the crude oil market. The mean and median spreads in the floor market: .35 percent and .25 percent, respectively.

Think about that for a minute. Conservatively, spreads were 25 times higher in the floor market. Even adjusting for the fact that prices in 2011 were almost double those in 2006, we’re talking a 12-fold difference in absolute (rather than percentage) spreads. That is just huge.
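The arithmetic is worth spelling out. The percentage spreads are from the paper’s Table 1; the price levels are my stand-ins (~$100 in 2011, ~$50 as a rough proxy for half that in 2006):

```python
# Absolute-spread comparison implied by the paper's percentage spreads.
elec_spread_pct, elec_price = 0.0001, 100.0   # .01% at ~$100 (electronic, 2011)
floor_spread_pct, floor_price = 0.0025, 50.0  # .25% at ~$50 (floor, 2006, assumed)

elec_abs = elec_spread_pct * elec_price       # ~$0.01, i.e., one tick
floor_abs = floor_spread_pct * floor_price    # ~$0.125

print(f"electronic: ${elec_abs:.3f}, floor: ${floor_abs:.3f}, "
      f"ratio: {floor_abs / elec_abs:.1f}x")
```

The 25-fold percentage gap halves to roughly a 12-fold absolute gap once the price doubling is taken into account.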

So even if EMMs are more likely to run away during stressed market conditions, the electronic market wins hands down in the liquidity race on average. Hell, it’s not even a race. Indeed, the difference is so large I have a hard time believing it, which raises questions about the data and methodologies.

This raises another issue with the paper. The paper compares the liquidity supply mechanisms in electronic and floor markets. Specifically, it examines the behavior of market makers in the two different types of markets. What we are really interested in is the outcome of these mechanisms. Therefore, given the rich data set, the authors should compare measures of liquidity in stressed and non-stressed periods, and make comparisons between the electronic and floor markets. What’s more, they should examine a variety of different liquidity measures. There are multiple measures of spreads, some of which specifically measure adverse selection costs. It would be very illuminating to see those measures across trading mechanisms and market environments. Moreover, depth and price impact are also relevant. Let’s see those comparisons too.

It is quite possible that the ratio of liquidity measures in good and bad times is worse in electronic trading than on the floor, but in any given environment, the electronic market is more liquid. That’s what we really want to know about, but the paper is utterly silent on this. I find that puzzling and rather aggravating, actually.

Insofar as the policy recommendation is concerned, as I’ve been writing since at least 2010, the fact that market makers withdraw supply during periods of market stress does not necessarily imply that imposing obligations to make markets even during stressed periods is efficiency enhancing. Such obligations force market makers to incur losses when the constraints bind. Since entry into market making is relatively free, and the market is likely to be competitive (the paper states that there are 52 active EMMs in the sample), raising costs in some states of the world, and reducing returns to market making in these states, will lead to the exit of market making capacity. This will reduce liquidity during unstressed periods, and could even lead to less liquidity supply in stressed periods: fewer firms offering more liquidity than they would otherwise choose due to an obligation may supply less liquidity in aggregate than a larger number of firms that can each reduce liquidity supply during stressed periods (because they are not obligated to supply a minimum amount of liquidity).

In other words, there is no free lunch. Even assuming that EMMs are more likely to reduce supply during stressed periods than locals, it does not follow that a market making obligation is desirable in electronic environments. The putatively higher cost of supplying liquidity in an electronic environment is a feature of that environment. Requiring EMMs to bear that cost means that they have to recoup it at other times. Higher cost is higher cost, and the piper must be paid. The finding of the paper may be necessary to justify a market maker obligation, but it is clearly not sufficient.

There are some other issues that the authors really need to address. The descriptions of the methodologies in the paper are far too scanty. I don’t believe that I could replicate their analysis based on the description in the paper. As an example, they say “Bid-Ask Spreads are calculated as in the prior literature.” Well, there are many papers, and many ways of calculating spreads. Hell, there are multiple measures of spreads. A more detailed statement of the actual calculation is required in order to know exactly what was done, and to replicate it or to explore alternatives.

Comparisons between electronic and open outcry markets are challenging because the nature of the data are very different. We can observe the order book at every instant of time in an electronic market. We can also sequence everything-quotes, cancellations and trades-with exactitude. (In futures markets, anyways. Due to the lack of clock synchronization across trading venues, this is a problem in a fragmented market like US equities.) These factors mean that it is possible to see whether EMMs take liquidity or supply it: since we can observe the quote, we know that if an EMM sells (buys) at the offer (bid) it is supplying liquidity, but if it buys (sells) at the offer (bid) it is consuming liquidity.
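The supply/consume rule just described can be written down directly. The function below is a minimal sketch of that classification; the argument names and the treatment of inside-spread executions are my own assumptions, not anything from the paper:

```python
def classify(side, price, best_bid, best_ask):
    """Classify a trade against the prevailing quote.

    Selling at the offer or buying at the bid means the trader's resting
    order was hit: it supplied liquidity. Buying at the offer or selling
    at the bid means it crossed the spread: it consumed liquidity.
    """
    if side == "sell" and price >= best_ask:
        return "supplied"
    if side == "buy" and price <= best_bid:
        return "supplied"
    if side == "buy" and price >= best_ask:
        return "consumed"
    if side == "sell" and price <= best_bid:
        return "consumed"
    return "inside-spread"  # midpoint or hidden-order execution

print(classify("sell", 100.25, 100.00, 100.25))  # → supplied
print(classify("buy", 100.25, 100.00, 100.25))   # → consumed
```

The point is that this rule is only computable because the electronic data let us observe the quote at the moment of each trade; as discussed next, floor data do not.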

Things are not nearly so neat in floor trading data. I have worked quite a bit with exchange Street Books. They convey much less information than the order book and the record of executed trades in electronic markets like Globex. Street Books do not report the prevailing bids and offers, so I don’t see how it is possible to determine definitively whether a local is supplying or consuming liquidity in a particular trade. The mere fact that a local (CTI1) is trading with a customer (CTI4) does not mean the local is supplying liquidity: he could be hitting the bid/lifting the offer of a customer limit order, but since we can’t see order type, we don’t know. Moreover, even to the extent that there are some bids and offers in the time and sales record, they tend to be incomplete (especially during fast markets) and time sequencing is highly problematic. I just don’t see how it is possible to do an apples-to-apples comparison of liquidity supply (and particularly the passivity/aggressiveness of market makers) between floor and electronic markets just due to the differences in data. Nonetheless, the paper purports to do that. Another reason to see more detailed descriptions of methodology and data.

One red flag suggests that the floor data may have some problems: the reported maximum bid-ask spread in the floor sample is 26.48 percent!!! 26.48 percent? Really? The 75th percentile spread is .47 percent. Given a $60 price, that’s almost 30 ticks. Color me skeptical. Another reason why a much more detailed description of methodologies is essential.

Another technical issue is endogeneity. Liquidity affects volatility, but the paper uses volatility as one of its measures of stressed markets in its study of how stress affects liquidity. This creates an endogeneity (circularity, if you will) problem. It would be preferable to use some instrument for stressed market conditions. Instruments are always hard to come up with, and I don’t have one off the top of my head, but Yadav et al should give some serious thought to identifying/creating such an instrument.

Moreover, the main claim of the paper is that EMMs’ liquidity supply is more sensitive to the toxicity of order flow than locals’ liquidity supply. The authors use order imbalance (CTI4 buys minus CTI4 sells, or more precisely the absolute value thereof), which is one measure of toxicity, but there are others. I would prefer a measure of customer (CTI4) alpha. Toxic (i.e., informed) order flow predicts future price movements, and hence when customer orders realize high alphas, it is likely that customers are better informed than usual. It would therefore be interesting to see the sensitivities of liquidity supply in the different trading environments to order flow toxicity as measured by CTI4 alphas.
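To make the two toxicity measures concrete, here is a sketch on hypothetical per-interval data: the imbalance measure the authors use, and a crude customer alpha measured as the signed future return following customer net buying. The numbers and field layout are purely illustrative, not from the paper:

```python
# Each tuple: (cti4_buys, cti4_sells, mid_now, mid_later) -- all illustrative.
intervals = [
    (500, 200, 100.00, 100.10),   # customers net buyers, price rises: informed
    (100, 400, 100.10, 100.02),   # customers net sellers, price falls: informed
    (300, 310, 100.02, 100.03),   # balanced flow, little drift: uninformed
]

results = []
for buys, sells, mid_now, mid_later in intervals:
    imbalance = buys - sells
    signed_flow = 1 if imbalance > 0 else -1
    future_ret = (mid_later - mid_now) / mid_now
    alpha = signed_flow * future_ret   # positive when flow direction predicts returns
    results.append((imbalance, alpha))
    print(f"imbalance={imbalance:+5d}  alpha={alpha:+.5f}")
```

When customer flow is toxic, alpha is positive regardless of its direction, whereas raw imbalance alone cannot distinguish informed pressure from noise.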

I will note yet again that market makers cutting liquidity supply when adverse selection problems are severe is not necessarily a bad thing. Informed trading can be a form of rent seeking, and if EMMs are better able to detect informed trading and withdraw liquidity when it is rampant, this form of rent seeking may be mitigated. Thus, greater sensitivity to toxicity could be a feature, not a bug.

All that said, I consider this paper a laudable effort that asks serious questions, and attempts to answer them in a rigorous way. The results are interesting and plausible, but the sketchy descriptions of the methodologies give me reservations about these results. But by far the biggest issue is that of the forest and trees. What is really interesting is whether electronic markets are more or less liquid in different market environments than floor markets. Even if liquidity supply is flightier in electronic markets, they can still outperform floor based markets in both unstressed and stressed environments. The huge disparity in spreads reported in the paper suggests a vast difference in liquidity on average, which suggests a vast difference in liquidity in all different market environments, stressed and unstressed. What we really care about is liquidity outcomes, as measured by spreads, depth, price impact, etc. This is the really interesting issue, but one that the paper does not explore.

But that’s the beauty of academic research, right? Milking the same data for multiple papers. So I suggest that Pradeep, Michel and Vikas keep sitting on that milking stool and keep squeezing that . . . data 😉 Or provide the data to the rest of us out there and let us give it a tug.


July 11, 2014

25 Years Ago Today Ferruzzi Created the Streetwise Professor

Filed under: Clearing,Commodities,Derivatives,Economics,Exchanges,HFT,History,Regulation — The Professor @ 9:03 am

Today is the 25th anniversary of the most important event in my professional life. On 11 July, 1989, the Chicago Board of Trade issued an Emergency Order requiring all firms with positions in July 1989 soybean futures in excess of the speculative limit to reduce those positions to the limit over five business days in a pro rata fashion (i.e., 20 percent per day, or faster). Only one firm was impacted by the order, Italian conglomerate Ferruzzi, SA.

Ferruzzi was in the midst of an attempt to corner the market, as it had done in May, 1989. The EO resulted in a sharp drop in soybean futures prices and a jump in the basis: for instance, by the time the contract went off the board on 20 July, the basis at NOLA had gone from zero to about 50 cents, by far the largest jump in that relationship in the historical record.

The EO set off a flurry of legal action. Ferruzzi tried to obtain an injunction against the CBT. Subsequently, farmers (some of whom had dumped truckloads of beans at the door of the CBT) sued the exchange. Moreover, a class action against Ferruzzi was also filed. These cases took years to wend their ways through the legal system. The farmer litigation (in the form of Sanner v. CBT) wasn’t decided (in favor of the CBT) until the fall of 2002. The case against Ferruzzi lasted somewhat less time, but still didn’t settle until 2006.

I was involved as an expert in both cases. Why?

Well, pretty much everything in my professional career post-1990 is connected to the Ferruzzi corner and CBT EO, in a knee-bone-connected-to-the-thigh-bone kind of way.

The CBT took a lot of heat for the EO. My senior colleague, the late Roger Kormendi, convinced the exchange to fund an independent analysis of its grain and oilseed markets to attempt to identify changes that could prevent a recurrence of the episode. Roger came into my office at Michigan and told me about the funding. Knowing that I had worked in the futures markets before, he asked me to participate in the study. I said that I had only worked in financial futures, but I could learn about commodities, so I signed on: it sounded interesting, my current research was at something of a standstill, and I am always up for learning something new. I ended up doing about 90 percent of the work and getting 20 percent of the money 😛 but it was well worth it, because of the dividends it paid in the subsequent quarter century. (Putting it that way makes me feel old. But this all happened when I was a small child. Really!)

The report I (mainly) wrote for the CBT turned into a book, Grain Futures Contracts: An Economic Appraisal. (Available on Amazon! Cheap! Buy two! I see exactly $0.00 of your generous purchases.) Moreover, I saw the connection between manipulation and industrial organization economics (which was my specialization in grad school): market power is a key concept in both. So I wrote several papers on market power manipulation, which turned into a book. (Also available on Amazon! And on Kindle: for some strange reason, it was one of the first books published on Kindle.)

The issue of manipulation led me to try to understand how it could best be prevented or deterred. This led me to research self-regulation, because self-regulation was often advanced as the best way to tackle manipulation. This research (and the anthropological field work I did working on the CBT study) made me aware that exchange governance played a crucial role, and that exchange  governance was intimately related to the fact that exchanges are non-profit firms. So of course I had to understand why exchanges were non-profits (which seemed weird given that those who trade on them are about as profit-driven as you can get), and why they were governed in the byzantine, committee-dominated way they were. Moreover, many advocates of self-regulation argued that competition forced exchanges to adopt efficient rules. Observing that exchanges in fact tended to be monopolies, I decided I needed to understand the economics of competition between execution venues in exchange markets. This caused me to write my papers on market macrostructure, which is still an active area of investigation: I am writing a book on that subject. This in turn produced many of the conclusions that I have drawn about HFT, RegNMS, etc.

Moreover, given that I concluded that self-regulation was in fact a poor way to address manipulation (because I found exchanges had poor incentives to do so), I examined whether government regulation or private legal action could do better. This resulted in my work on the efficiency of ex post deterrence of manipulation. My conclusions about the efficiency of ex post deterrence rested on my findings that manipulated prices could be distinguished reliably from competitive prices. This required me to understand the determinants of competitive prices, which led to my research on the dynamics of storable commodity prices that culminated in my 2011 book. (Now available in paperback on Amazon! Kindle too.)

In other words, pretty much everything in my CV traces back to Ferruzzi. Even the clearing-related research, which also has roots in the 1987 Crash, is due to Ferruzzi: I wouldn’t have been researching any derivatives-related topics otherwise.

My consulting work, and in particular my expert witness work, stems from Ferruzzi. The lead counsel in the class action against Ferruzzi came across Grain Futures Contracts in the CBT bookstore (yes, they had such a thing back in the day), and thought that I could help him as an expert. After some hesitation (attorneys being very risk averse, and hence reluctant to hire someone without testimonial experience) he hired me. The testimony went well, and that was the launching pad for my expert work.

I also did work helping to redesign the corn and soybean contracts at the CBT, and the canola contract in Winnipeg: these redesigned contracts (based on shipping receipts) are the ones traded today. Again, this work traces its lineage to Ferruzzi.

Hell, this was even my introduction to the conspiratorial craziness that often swirls around commodity markets. Check out this wild piece, which links Ferruzzi (“the Pope’s soybean company”) to Marc Rich, the Bushes, Hillary Clinton, Vince Foster, and several federal judges. You cannot make up this stuff. Well, you can, I guess, as a quick read will soon convince you.

I have other, even stranger connections to Hillary and Vince Foster which, in a more indirect way, also trace back to Ferruzzi. But that’s a story for another day.

There’s even a Russian connection. One of Ferruzzi’s BS cover stories for amassing a huge position was that it needed the beans to supply big export sales to the USSR. These sales were in fact fictitious.

Ferruzzi was a rather outlandish company that eventually collapsed in 1994. Like many Italian companies, it was leveraged out the wazoo. Moreover, it had become enmeshed in the Italian corruption/mob investigations of the early 1990s, and its chairman, Raul Gardini, committed suicide in the midst of the scandal.

The traders who carried out the corners were located in stylish Paris, but they were real commodity cowboys of the old school. Learning about that was educational too.

To put things in a nutshell. Some crazy Italians, and English and American traders who worked for them, get the credit-or the blame-for creating the Streetwise Professor. Without them, God only knows what the hell I would have done for the last 25 years. But because of them, I raced down the rabbit hole of commodity markets. And man, have I seen some strange and interesting things on that trip. Hopefully I will see some more, and if I do, I’ll share them with you right here.


June 25, 2014

The 40th Anniversary of Jaws, Barclays Edition: Did the LX Dark Pool Keep Out the Sharks or Invite Them In?

Filed under: Economics,Exchanges,HFT,Politics,Regulation — The Professor @ 8:33 pm

Today’s big news is the suit filed by NY Attorney General Eric Schneiderman alleging that Barclays defrauded the customers of its LX dark pool.

In the current hothouse environment of US equity market structure, this will inevitably unleash a torrent of criticism of dark pools. When evaluating the ensuing rhetoric, it is important to distinguish between criticism of dark pools generally, and this one dark pool in particular. That is, there are two distinct questions that are likely to be all tangled up. Are dark pools bad? Or, are dark pools good (or at least not bad), but did Barclays not do what dark pools are supposed to do while claiming that it did?

What dark pools are supposed to do is protect traders (mainly institutional traders who can be considered uninformed) from predatory traders. Predatory traders can be those with better information, or those with a speed advantage (which often confers an information advantage, through arbitrage or order anticipation). Whether dark pools in general are good or bad depends on the effects of the segmentation of the market. By “cream skimming” the (relatively) uninformed order flow, dark pools make the exchanges less liquid. Order flow on the exchanges tends to be more “toxic” (i.e., informed), and these information asymmetries widen spreads and reduce depth, which raises trading costs for the uninformed traders who cannot avail themselves of the dark pool and who trade on the lit market instead. This means that the trading costs of some uninformed traders (those who can use the dark pools) go down and the trading costs of other uninformed traders (those who can’t use dark pools) go up. The distributive effect is one thing that makes dark pools controversial: the losers don’t like them. The net effect is impossible to determine in general, and depends on the competitiveness of the exchange market among other things: even if dark pools reduce liquidity on the exchange, they can provide a source of competition that generates benefits if the exchange markets are imperfectly competitive.
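The cream-skimming mechanism can be made concrete with a toy one-period Glosten-Milgrom calculation. This is a sketch with made-up parameters, not a claim about actual market magnitudes: in this model the lit-market spread is proportional to the informed share of lit-market flow, so diverting uninformed flow to a dark pool mechanically widens the lit spread.

```python
def lit_spread(mu, v_high, v_low, skim=0.0):
    """Bid-ask spread on the lit exchange in a one-period Glosten-Milgrom model.

    mu: fraction of all order flow that is informed
    v_high, v_low: the two possible asset values (each with probability 1/2)
    skim: fraction of the UNINFORMED flow diverted to the dark pool

    The spread equals mu_lit * (v_high - v_low), where mu_lit is the informed
    share of the flow remaining on the lit market after the skim.
    """
    informed = mu
    uninformed = (1.0 - mu) * (1.0 - skim)
    mu_lit = informed / (informed + uninformed)
    return mu_lit * (v_high - v_low)

# Illustrative numbers: 20% informed flow, $2 value range.
base = lit_spread(0.20, 101.0, 99.0)              # no dark pool
skimmed = lit_spread(0.20, 101.0, 99.0, skim=0.5)  # pool skims half the uninformed flow
```

The distributive point in the paragraph above falls out directly: traders who move to the dark pool escape the spread entirely, while the uninformed traders stuck on the lit market pay the wider one.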

What’s more, dark pools reduce the returns to informed trading. The efficiency effects of this are also ambiguous, because some informed trading enhances efficiency (by improving the informativeness of prices, and thereby leading to better investment decisions), but other informed trading is rent seeking.

In other words, it’s complicated. There is no “yes” or “no” answer to the first question. This is precisely why market structure debates are so intense and enduring.

The second question is what is at issue in the Barclays case. The NYAG alleges that Barclays promised to protect its customers from predatory HFT sharks, but failed to do so. Indeed, according to the complaint, Barclays actively tried to attract sharks to its pool. (This is one of the problematic aspects of the complaint, as I will show). So, the complaint really doesn’t take a view on whether dark pools that indeed protect customers from sharks are good or bad. It just claims that if dark pools claim to provide shark repellent, but don’t, they have defrauded their customers.

Barclays clearly did make bold claims that it was making strenuous efforts to protect its customers from predatory traders, including predatory HFT. This FAQ sets out its various anti-gaming procedures. In particular, LX performed “Liquidity Profiling” that evaluated the users of the dark pool on various dimensions. One dimension was aggressiveness: did they make quotes or execute against them? Another dimension was profitability. Traders that earn consistent profits over one second intervals are more likely to be informed, and costly for others without information to trade with. Based on this information, Barclays ranked traders on a 0 to 5 scale, with 0 being profitable, aggressive, predatory sharks, and 5 representing passive, gentle blue whales.

Furthermore, Barclays claimed that it allowed its customers to limit their trading to counterparties with certain liquidity profiles, and to certain types of counterparties. For instance, a user could choose not to be matched with a trader with an aggressive profile. Similarly, a customer could choose not to trade against an electronic liquidity provider. In addition, Barclays said that it would exclude traders who consistently brought toxic order flow to the market. That is, Barclays claimed that it was constantly on alert for sharks, and kept the sharks away from the minnows and dolphins and gentle whales.
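Mechanically, a profiling-and-matching scheme of the kind Barclays described could look something like the sketch below. The scoring thresholds, field names, and helper functions here are invented for illustration; neither the FAQ nor the complaint discloses the actual formula.

```python
from dataclasses import dataclass

@dataclass
class TraderStats:
    take_ratio: float      # share of the trader's volume that aggressed (took liquidity)
    markout_1s_bps: float  # average markout profit over a 1-second horizon, in bps

def liquidity_profile(stats):
    """Toy 0-5 score: 0 = aggressive and consistently profitable (a 'shark'),
    5 = passive and unprofitable over short horizons (a 'blue whale').
    Thresholds are illustrative assumptions, not Barclays' actual parameters."""
    score = 5
    if stats.take_ratio > 0.5:
        score -= 2
    elif stats.take_ratio > 0.25:
        score -= 1
    if stats.markout_1s_bps > 1.0:
        score -= 2
    elif stats.markout_1s_bps > 0.25:
        score -= 1
    return max(score, 0)

def match_candidates(resting_orders, min_profile):
    """A taker who opted out of aggressive counterparties only sees resting
    orders whose owner's profile meets the taker's minimum."""
    return [o for o in resting_orders if o["profile"] >= min_profile]
```

In these terms, the scheme only protects customers if the `profile` field is kept accurate: a stale or deliberately inflated score lets an aggressive firm pass the `match_candidates` filter unimpeded, which is the crux of what the complaint alleges.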

The NYAG alleges this was a tissue of lies. There are several allegations.

The first is that in its marketing materials, Barclays misrepresented the composition of the order flow in the pool. Specifically, a graph that depicted Barclays’ “Liquidity Landscape” purported to show that very little of the trading in the pool was aggressive/predatory. The NYAG alleges that this chart is “false” because it did not include “one of the largest and most toxic participants [Tradebot] in Barclays’ dark pool.” Further, the NYAG alleges that Barclays deceptively under-reported the amount of predatory HFT trading activity in the pool.

The second basic allegation is that Barclays did not exclude the sharks, and that by failing to update trader profiles, the ability to avoid trading with a firm with a 0 or 1 liquidity profile ranking was useless. Some firms that should have been labeled 0’s were labeled 4’s or 5’s, leaving those that tried to limit their counterparties to the 4’s or 5’s vulnerable to being preyed on by the 0’s. Further, the AG alleges that Barclays promised to exclude the 0’s, but didn’t.

(The complaint also makes allegations about Barclays’ order routing procedures for its customers, but that’s something of a separate issue, so I won’t discuss it here.)

Fraud and misrepresentation are objectionable, and should be punished for purposes of deterrence. They are objectionable because they result in the production of goods and services that are worth less than the cost of producing them. Thus, if Barclays did engage in fraud and misrepresentation, punishment is in order.

One should always be cautious about making judgments on guilt based on a complaint, which by definition is a one-sided representation of the facts. This is particularly true where the complaint relies on selective quotes from emails, and the statements of ex-employees. This is why we have an adversarial process to determine guilt, to permit a thorough vetting of the evidence presented by the plaintiff, and to allow the defendant to present exculpatory evidence (including contextualizing the emails, presenting material that contradicts what is in the proffered emails, and evidence about the motives and reliability of the ex-employees).

Given all this, based on the complaint there is a colorable case, but not a slam dunk.

There is also the question of whether the alleged misrepresentations had a material impact on investors’ decisions regarding whether to trade on LX or not: any fraud would have led to a social harm only to the extent that too many investors used LX, or traded too much on it. Here there is reason to doubt whether the misrepresentations mattered all that much.

Trading is an “experience good.” That is, one gets information about the quality of the good by consuming it. Someone may be induced to consume a shoddy good once by deceptive marketing, but if consuming it reveals that it is shoddy, the customer won’t be back. If the product is viable only if it gets repeat customers, deception and fraud are typically unviable strategies. You might convince me to try manure on a cone by telling me it’s ice cream, but once I’ve tried it, I won’t buy it again.

Execution services provided by a dark pool are an experience good that relies on repeat purchases. The dark pool provides an experience good because it is intended to reduce execution costs, and market participants can evaluate/quantify these costs, either by themselves, or by employing consultants that specialize in estimating these costs. Moreover, most traders who trade on dark pools don’t trade on a single pool. They trade on several (and on lit venues too) and can compare execution costs across venues. If Barclays had indeed failed to protect its customers against the sharks, those customers would have figured that out when they evaluated their executions on LX and found that their execution costs were high compared to their expectations, and to other venues. Moreover, dark pool customers trade day after day after day. A dark pool succeeds by reducing execution costs, and if it doesn’t, it won’t generate persistently large and growing volumes.

Barclays LX generated large and growing volumes. It became the second largest dark pool. I am skeptical that it could have done so had it really been a sham that promised superior execution by protecting customers from sharks when in fact it was doing nothing to keep them out. This suggests that the material effect of the fraud might have been small even had it occurred. This is germane for determining the damages arising from the fraud.

It should also be noted that the complaint alleges that not only did Barclays not do what it promised to keep sharks out, it actively recruited sharks. This theory is highly problematic. According to the complaint, Barclays attracted predatory HFT firms by allowing them to trade essentially for free.

But how does that work, exactly? Yes, the HFT firms generate a lot of volume, but a price of zero times a volume of a zillion generates revenues of zero. You don’t make any money that way. What’s more, the presence of these sharks would have raised the trading costs of the fee-paying minnows, dolphins, and whales, who would have had every incentive to find safer waters, thereby depriving Barclays of any revenues from them. Thus, I am highly skeptical that the AG’s story regarding Barclays’ strategy makes any economic sense. It requires that the non-HFT paying customers must have been enormously stupid, and unaware that they were being served up as bait. Indeed, that they were so stupid that they paid for the privilege of being bait.

It would make sense for Barclays to offer inducements to HFT firms that supply liquidity, because that would reduce the trading costs of the other customers, attracting their volume and making them willing to pay higher fees to trade in the pool.

All we have to go on now is the complaint, and some basic economics. Based on this information, my initial conclusion is that it is plausible that Barclays did misrepresent/overstate the advantages of LX, but that this resulted in modest harm to investors, and that even if the customers of LX got less than they had expected, they did better than they would have trading on another venue.

But this is just an initial impression. The adversarial process generates information that (hopefully) allows more discriminating and precise judgments. I would focus on three types of evidence. First, a forensic evaluation of the LX trading system: did the Liquidity Profile mechanism really allow users to limit their exposure to toxic/predatory order flow? Second, an appraisal of the operation of the system: did it accurately categorize traders, or did Barclays, as alleged in the complaint, systematically mis-categorize predatory traders as benign, thereby exposing traders who wanted to avoid the sharks to their tender mercies? Third, a quantification of the performance of the system in delivering lower execution costs. If LX was indeed doing what a dark pool should do, users should have paid lower execution costs than they would have on other venues. If LX was in fact a massive fraud that attracted customers with promises of protection from predatory traders, but then set the sharks on them, these customers would have in fact incurred higher execution costs than they could have obtained on other venues. At root, the AG alleges that LX promised to lower execution costs, but failed to do so because it did not protect customers from predatory traders: the proof of that pudding is in the eating.
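The third kind of evidence, execution-cost quantification, is a routine transaction-cost-analysis exercise. As a sketch (the venue labels and fill numbers below are hypothetical), one would compute a volume-weighted implementation shortfall against the arrival price for each venue and compare:

```python
def shortfall_bps(fills, arrival_price, side):
    """Volume-weighted implementation shortfall vs. the arrival price, in basis
    points. fills: list of (quantity, price); side: +1 for a buy, -1 for a sell.
    Positive = the order paid more (buy) or received less (sell) than arrival."""
    total_qty = sum(q for q, _ in fills)
    vwap = sum(q * p for q, p in fills) / total_qty
    return side * (vwap - arrival_price) / arrival_price * 1e4

# Hypothetical comparison: the same buy order worked on two venues,
# both with an arrival price of 100.00.
lx_cost = shortfall_bps([(300, 100.06), (200, 100.10)], arrival_price=100.0, side=+1)
other_cost = shortfall_bps([(500, 100.03)], arrival_price=100.0, side=+1)
```

If LX customers’ shortfalls were systematically higher than on comparable venues, that supports the AG’s theory; if they were lower, it cuts the other way.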

The adversarial judicial process makes it likely that such evidence will be produced, and evaluated by the trier of fact. The process is costly, and often messy, but given the stakes I am sure that these analyses will be performed and that justice will be done, if perhaps roughly.

My bigger concern is the adversarial political process. Particularly in the aftermath of Flash Boys, all equity market structure issues are extremely contentious. Dark pools are a particularly fraught issue. The exchanges (NYSE/ICE and NASDAQ) resent the loss of order flow to dark pools, and want to kneecap them. Many in Congress are sympathetic to their pleas. As I noted at the outset, although the efficiency effects of dark pools are uncertain, their distributive effects are not: dark pools create winners (those who can trade on them, mainly) and losers (those who can’t trade on them, and rent seeking informed traders who lose the opportunity to exploit those who trade on dark pools). Distributive issues are inherently political, and given the sums at stake these political battles are well-funded.

There is thus the potential that the specifics of the Barclays case will be interpreted to tar dark pools generally, resulting in a legislative and regulatory over-reaction that kills the good dark pools along with the bad ones. The fact that AGs are by nature grandstanders, and that Schneiderman in particular is a crusader on the make, makes such an outcome even more likely.

Given this, I will endeavor to provide an economics-based, balanced analysis of developments going forward. As I have written so often, equity market issues are seldom black and white. Given the nature of equity trading, specifically the central role played by information in it, it is hard to analyze the efficiency effects of various structures and policies. We are in a second best world, and comparisons are complex and messy in that world. In such a world, it is quite possible that both Barclays and the AG are wrong. We’ll see, and I’ll call it as I see it.


