Streetwise Professor

November 4, 2014

This Cuts No ICE With Me

Filed under: Commodities,Derivatives,Economics,Energy,Exchanges,Regulation — The Professor @ 9:24 pm

I admire Jeffrey Sprecher. ICE has been an amazing success story, and a lot of that has to do with his singular combination of vision and ability to execute.

But he is not above talking his book, and he delivered some self-serving, and in fact anti-competition, remarks in ICE’s earnings call held earlier today:

The head of Intercontinental Exchange, the world’s second-largest exchange group by market value, launched an unusually explicit criticism of its bigger competitor’s business strategy as he touted growth in his flagship oil contract.

Commercial customers such as refineries and airlines propelled this growth “as we see our competitors adopting incentives that attract the type of algorithm . . . trading that typically drives commercial users away,” Mr Sprecher said.

Mr Sprecher said “payment for order flow schemes” such as CME’s expanded the market by “attracting traders that really don’t want to hold the risk of your products but just want . . . to get paid to be there.”

If Mr. Sprecher actually believes that, he should be glad that CME (to speak of competitors plural is rather amusing) is implementing a program that, per his telling, drives away the paying customers, the commercial users. That’s doubly true since those commercial users would presumably go to ICE. How is CME supposed to make money giving away trading incentives to traders whose presence repels those who pay full fare? If that’s what CME is doing, Mr. Sprecher should remember the old adage about not intervening when your enemy is intent on committing a blunder.

Sprecher was touting the fact that ICE’s Brent contract had now surpassed the older CME WTI contract in open interest. Well, this is good for ICE, certainly, but Sprecher and his exchange really have had very little to do with it: this is further proof that it’s usually better to be lucky than good. Brent’s relative rise is the result of structural factors, most notably the prolonged logistical bottleneck that isolated WTI from waterborne crudes: that bottleneck is largely gone, replaced by a regulatory one, the US export ban.

ICE should not gloat for too long, though, because it is quite likely that the export ban will go, one way or another. What’s more, the resource base supporting the Brent contract is dwindling, and rapidly, whereas the Midcontinent of the US is experiencing a crisis of abundance, if it is experiencing a crisis at all. Logistical bottlenecks created by such crises tend to be transitory, and even regulatory bottlenecks can be overcome. In a few years, WTI will be deeply connected with the waterborne market, albeit in a non-traditional direction. And Brent will be at the mercy of inexorably declining production, and the ability of Platts and an often fractious community of producers and traders to figure out a contractual fix. (Adding Urals to the Brent basket? Really?) So Brent is riding high now, but over the medium to long term, CME will be the one breaking out the shades, because WTI will have the brighter future.

As for incentives offered by upstart markets to unseat incumbents, as CME is attempting to do to ICE in Brent, this is a classic competitive tactic, and almost necessary in futures markets. The network effect of order flow means that (as I say in Gregory Meyer’s FT piece) bigger incumbent contracts have a big competitive advantage. The only way that a competing contract can possibly build order flow and liquidity is to offer incentives, both to market makers (including HFT and algo traders!) who supply liquidity and to the hedgers and speculators that consume liquidity. (I wrote about this last year. Amusingly, I had forgotten about that post until Greg reminded me of it:-P)

Even that is a dicey proposition. Many have tried, and most have failed. But sometimes the upstart succeeds, and at other times it forces the incumbent to meet the incentives to keep market share, which can be expensive for the incumbent. That’s probably what Sprecher really doesn’t like. It’s not that incentives don’t work (as the criticism quoted above suggests): it’s that they just might. And if CME’s incentives work, it could be a costly proposition for ICE to respond in kind.

In other words, Sprecher is really criticizing a reasonable competitive tactic, because like any dominant incumbent, he doesn’t like competition. That’s his job, but that kind of criticism cuts no ice with me. Or ICE, either, as much as I admire its achievement.

 

 


October 28, 2014

Convergence to Agreement With Matt Levine

Filed under: Commodities,Derivatives,Economics,Exchanges,Regulation — The Professor @ 10:10 am

Matt Levine graciously led his daily linkwrap with a response to my post on his copper column:

It’s not that hard to manipulate copper.

Craig Pirrong, who knows a lot more about commodities markets than I do [aw, shucks], objects to my take on copper. My view is sort of efficient-markets-y: If one person buys up all the copper in the LME warehouses and then tries to raise the price, the much much greater supply of copper that’s not in those warehouses will flow into the warehouses and limit his ability to do that. And I still think that’s broadly true, but broadly true may not be the point. Pirrong quite rightly points out that there’s lots of friction along the way, and the frictions may matter more than the limits in actual fact.

. . . .

So there are limits to cornering, but they may not be binding on an actual economic actor: You can’t push prices up very much, or for very long, but you may be able to push them up high enough and for long enough to make yourself a lot of money.

I agree fully there are limits to cornering. The supply curve isn’t completely inelastic. People can divert supplies (at some cost) into deliverable position. The cornerer presents the shorts with the choice: pay me to get out of your positions, or incur the cost of making delivery. Since those delivery costs are finite, the amount the cornerer can extract is limited too.
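To make that limit concrete, here is a minimal sketch of the shorts' choice, with all numbers hypothetical:

```python
# Toy sketch of the shorts' choice in a corner (all numbers hypothetical).
# A short can either buy back from the cornerer at a premium, or incur
# the cost of diverting physical supply into deliverable position.

competitive_price = 7000.0  # $/tonne, undistorted price
delivery_cost = 350.0       # $/tonne cost of moving metal into deliverable position

# The short pays whichever is cheaper, so the cornerer cannot extract
# more than competitive_price + delivery_cost.
max_corner_price = competitive_price + delivery_cost

def short_cost(offered_price):
    """Cost per tonne to a short facing the cornerer's offered buyback price."""
    return min(offered_price, competitive_price + delivery_cost)

print(max_corner_price)    # 7350.0: the ceiling on the cornerer's price
print(short_cost(8000.0))  # 7350.0: shorts deliver rather than pay more
```

The finite delivery cost caps the distortion; it does not eliminate it.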

I agree as well that corners typically elevate prices temporarily: after all, the manipulator needs to liquidate his positions in order to cash out, and as soon as that happens price relationships snap back. But that temporary period can last for some time. Weeks, sometimes more.

What’s more, when the temporary price distortions happen matters a lot. Some squeezes occur at the very end of a contract. This is what happened in Indiana Farm Bureau in 1973. A more recent example is the expiry of the October, 2008 crude oil contract, in which prices spiked hugely in the last few minutes of trading.

The economic harm of these last minute squeezes isn’t that large. There are few players in the market, most hedgers have rolled or offset, and the time frame of the price distortion is too short to cause inefficient movements of the commodity.

But other corners are more protracted, and occur at precisely the wrong time.

Specifically, some corners start to distort prices well before expiration, and precisely when hedgers are looking to roll or offset. Short, out-of-position hedgers looking to roll or offset try to buy either spreads or outrights. The large long planning to corner the market doesn’t liquidate. So the hedgers bid up the expiring contract. Long still doesn’t budge. So the shorts bid it up some more. Eventually, the large long relents and sells when prices and spreads get substantially out of line, and the hedgers exit their positions but at a painfully artificial price. I have documented price distortions in some episodes of 10 percent or more. That’s a big deal, especially when one considers the very thin margins on which commodity trading is done. Combine that price distortion with the fact that a large number of shorts pay that distorted price to get out of their positions, and the dollar damages can be large. Depending on the size of the contract, and the magnitude of the distortion, nine or ten figures large.  (I analyze the liquidation/roll process theoretically in a paper titled “Squeeze Play” that appeared in the Journal of Alternative Investments a few years ago.)
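The back-of-the-envelope arithmetic behind "nine or ten figures" looks like this (position size, contract size, and prices are hypothetical; the 10 percent distortion is the documented order of magnitude):

```python
# Rough damages from a roll-period squeeze (illustrative numbers only).
contracts = 50_000          # short positions liquidated at the distorted price
units_per_contract = 1_000  # e.g. barrels per futures contract
fair_price = 50.0           # $/unit, undistorted
distortion = 0.10           # 10% price distortion, as in documented episodes

damages = contracts * units_per_contract * fair_price * distortion
print(f"${damages:,.0f}")   # $250,000,000 -- nine figures
```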

But this is all paper trading, right, so real reapers of wheat and miners of copper aren’t damaged, right? Well, for the bigger, more protracted squeezes that’s not right.

Most hedgers are “out-of-position”: they are using a futures contract to hedge something that isn’t deliverable. For example, shippers of Brazilian beans or holders of soybean inventories in Iowa use CBT soybean futures as a hedge. They are therefore long the basis. Corners distort the basis: the futures price rises to reflect the frictions and bottlenecks and technical features of the delivery mechanism, but the prices of the vastly larger quantities of the physical traded and held elsewhere may rise little, if at all. So the out-of-position hedgers don’t gain on their inventories, but they pay an inflated price to exit their futures.

This is why corners are a bad thing. They undermine  the most vital function of futures markets: hedging/risk transfer. Hedgers pay the biggest price for corners precisely because the delivery market is only a small sliver of the world market for a commodity, and because the network effects of liquidity cause all hedging activity to tip to a single market (with a very few exceptions). Thus, the very inside baseball details of the delivery process in a specific, localized market have global consequences. That’s why temporary and not very big and localized are not much comfort when it comes to the price distortions associated with market power manipulations.

 


October 27, 2014

Matt Levine Passes Off a Bad Penny

Filed under: Commodities,Derivatives,Economics,Exchanges,Regulation — The Professor @ 5:36 pm

Bloomberg’s Matt Levine is usually very insightful about markets, and about financial skullduggery. Alas, in his article on developments in the copper market, Matt is passing off a bad penny.

The basic facts are these. A single firm, reportedly well-known (and arguably infamous) metals trading fund Red Kite, has accumulated upwards of 50 percent (and at times as much as 90 percent) of copper in LME warehouses that is deliverable against LME futures contracts. Such an accumulation can facilitate a corner of the market, or could be a symptom of a corner: a large long takes delivery of virtually the entire deliverable stock (and perhaps all of it) to execute a corner. So the developments in LME copper bear the hallmarks of a squeeze, or an impending one.

What’s more, the price relationships in the market are consistent with a squeeze: the market is in backwardation. I have not had time to determine whether the backwardation is large, controlling for stocks (as would occur during a corner), but the sharp spike in backwardation in recent days is symptomatic of a corner, or fears of a corner.

Put simply, there is smoke here. But Matt Levine seems intent on denying that. Weirdly, he focuses on the allegations involving Goldman’s actions in aluminum:

Loosely speaking, the problem of aluminum was that it was in deep contango: Prices for immediate delivery were low, prices for future delivery were high, and so buying aluminum and chucking it in a warehouse to deliver later was profitable. So people did, and the warehouses got pretty jammed up, and other people who wanted aluminum for immediate use found it all a bit unsporting.

. . . .

The LME warehouse system is an interesting abstract representation of a commodity market, but you can get into trouble if you confuse it with the actual commodity market. One example of the trouble: Goldman and its cronies were accused of manipulating aluminum prices up by putting too much aluminum in LME warehouses.

Well, yes. But the point is that there are many different kinds of manipulation. Many, many different kinds. An Appeals Court in the US opined in the Cargill case that the number of ways of manipulating is limited only by the imagination of man. Too true. The facts in aluminum and the facts in copper are totally different, and the alleged forms of manipulation are totally different, so the events in aluminum are a red herring (although it is copper that is the red metal).

Levine also makes a big, big deal out of the fact that the amount of copper in LME warehouses is trivial compared to the amount of copper produced in the world, let alone the amount of copper that remains in the earth’s crust. This matters hardly at all.

What matters is the steepness of the supply curve into warehouses. If that supply curve is upward sloping, a firm with a big enough futures position can corner the market, and distort prices, even if the amount of copper actually in the warehouses, or attracted to the warehouses by a cornerer’s artificial demand, is small relative to the size of the world copper market.

Case in point. In December 1995 Hamanaka/Sumitomo cornered the LME copper contract, holding a position in LME warrants that was substantially smaller than what one firm now owns. Hamanaka’s/Sumitomo’s physical and futures positions were small relative to the size of the world copper market, measured by production and consumption. But they still had market power in the relevant market because it was uneconomic to attract additional copper into LME warehouses.

Another example. Ferruzzi cornered the CBT soybean contract in July, 1989, owning a mere 8 million bushels of beans in Chicago and Toledo. But since it was uneconomic to move additional supplies into those delivery points, it was profitable for, and possible for, Ferruzzi to corner the expiring contract.

World supply may have an effect on the slope of the supply curve into warehouses, but that slope can be positive (thereby creating the conditions necessary to corner) even if the share of metal in warehouses is small. The slope of the supply curve depends on the bottlenecks associated with getting metal into warehouses, and the costs of diverting metal that should go to consumers into warehouses. These bottlenecks and costs can be acute, even if the amount of warehoused metal is small. Diverting copper that should go to a fabricator or wire mill to an LME warehouse is inefficient, i.e., costly. It only happens, therefore, if the price is distorted sufficiently to offset this higher cost.
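A minimal sketch of the point, with a linear (and entirely hypothetical) supply curve into warehouses:

```python
# Sketch: a steep supply curve into LME warehouses lets a large long
# distort the delivery-market price even though warehouse stocks are a
# sliver of world supply. All parameters are hypothetical.

world_price = 7000.0  # $/tonne in the (huge, effectively flat) world market
slope = 2.0           # $/tonne of distortion per extra 1,000 tonnes attracted

def warehouse_price(extra_tonnes_demanded):
    """Inverse supply into warehouses: price needed to pull in extra metal."""
    return world_price + slope * (extra_tonnes_demanded / 1_000)

# A cornerer forces shorts to attract 100,000 extra tonnes into warehouses:
distorted = warehouse_price(100_000)
print(distorted)                # 7200.0 -- roughly a 3% distortion
print(distorted - world_price)  # 200.0 $/tonne, regardless of world output
```

The distortion depends on the slope of this curve, not on the ratio of warehouse stocks to world production.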

Levine ends his post thus:

One example of the trouble: Goldman and its cronies were accused of manipulating aluminum prices up by putting too much aluminum in LME warehouses. The worries about copper — that it could be cornered, pushing prices up — stem from there being too little copper in those warehouses. Both of those things can’t be true.

Yes they can, actually. Different commodities at different times with different fundamental conditions are vulnerable to different kinds of manipulation. It is perfectly possible for it to be true that aluminum was vulnerable to a manipulative scheme that exploited the bottlenecks of taking the white metal out of warehouses starting some years ago, and that copper is vulnerable to a manipulative scheme that exploits the bottlenecks of getting the red metal into warehouses now. No logical or factual contradiction whatsoever.

I know you are better than this, Matt. Don’t let your justifiable skepticism of allegations of manipulation make you a poster child for the Gresham’s Law of Internet Commentary.

 


October 24, 2014

Someone Didn’t Get the Memo, and I Wouldn’t Want to be That Guy*

Filed under: Commodities,Derivatives,Economics,Exchanges — The Professor @ 9:23 am

Due to the five-year gap in 30 year bond issuance, in mid-September the CME revised the deliverable basket for the June 2015 T-bond contract. It deleted the 6.25 percent of May 2030 because its delivery value would have been so far below the values of the other bonds in the deliverable set. This would have made the contract more susceptible to a squeeze, because only that bond would effectively be available for delivery due to the way the contract works.

The CME issued a memo on the subject.

Somebody obviously didn’t get it:

It looks like a Treasury futures trader failed to do his or her homework.

The price of 30-year Treasury futures expiring in June traded for less than 145 for about two hours yesterday before shooting up to more than 150. The 7.3 percent surge in their price yesterday, on the first day these particular contracts were traded, was unprecedented for 30-year Treasury futures, according to data compiled by Bloomberg. Volume amounted to 1,639 contracts with a notional value of $164 million.

What sets these futures apart from others is they’re the first ones where the U.S. government’s decision to stop issuing 30-year bonds from 2001 to 2006 must be accounted for when valuing the derivatives. The size and speed of yesterday’s jump indicates the initial traders of the contracts hadn’t factored in the unusual rules governing these particular products, said Craig Pirrong, a finance professor at the University of Houston.

“That is humongous,” said Pirrong, referring to the 7.3 percent jump. “We’re talking about a move you might see over weeks or a month occur in a day.” Pirrong said he suspected it was an algorithmic trader using an outdated model. “I would not want to be that guy,” the professor said.

Here’s a quick and dirty explanation. Multiple bonds are eligible for delivery. Since they have different coupons and maturities, their prices can differ substantially. For instance, the 3.5 percent of Feb 39 sells for about $110, and the 6.25 of 2030 sells for about $146. If both bonds were deliverable at par, no one would ever deliver the 6.25 of 2030, and it would not contribute in any real way to deliverable supply. Therefore, the CME effectively handicaps the delivery race by assigning conversion factors to each bond. The conversion factor is essentially the bond’s price on the delivery date assuming it yields 6 percent to maturity. If all bonds yield 6 percent, their delivery values would be equal, and the deliverable supply would be the total amount of deliverable bonds outstanding. This would make it almost impossible to squeeze the market.

Since bonds differ in duration, and actual yields differ from 6 percent, the conversion factors narrow but do not eliminate disparities in the delivery values of bonds. One bond will be cheapest-to-deliver. Roughly speaking, the CTD bond will be the one with the lowest ratio of price to conversion factor.

That’s where the problem comes in. For the June contract, if the 6.25 of 2030 were eligible for delivery, its converted price would be around $142. Due to the issuance/maturity gap, the converted prices of all the other bonds are substantially higher, ranging between $154 and $159.

This is due to a duration effect. When yields are below 6 percent, and they are now way below, at less than 3 percent, low duration bonds become CTD: the prices of low duration bonds rise less (in percentage terms) for a given decline in yields than the prices of high duration bonds, so they become relatively cheaper. The 6.25 of 2030 has a substantially lower duration than the other bonds in the deliverable basket because of its shorter maturity (by more than 5 years) and higher coupon. So it would have been cheapest to deliver by a huge margin had CME allowed it to remain in the basket. This would have shrunk the deliverable supply to the amount outstanding of that bond, making a squeeze more likely, and more profitable. (And squeezes in Treasuries do occur. They were rife in the mid-to-late-80s, and there was a squeeze of the Ten Year in June of 2005. The 2005 squeeze, which was pretty gross, occurred when there was less than a $1 difference in delivery values between the CTD and the next-cheapest. The squeezer distorted prices by about 15/32s.)
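The mechanics can be sketched with stylized bonds (annual compounding for simplicity; this is not CME's exact conversion-factor algorithm, which uses semiannual periods and rounding conventions):

```python
# Sketch of conversion-factor handicapping and the duration effect.
# Stylized bonds and annual compounding -- illustrative only.

def bond_price(coupon, years, yield_):
    """Price per 100 face of an annual-pay bond."""
    return sum(coupon / (1 + yield_) ** t for t in range(1, years + 1)) \
           + 100 / (1 + yield_) ** years

bonds = {
    "6.25s of 2030 (short maturity, high coupon)": (6.25, 15),
    "3.50s of 2039 (long maturity, low coupon)":   (3.50, 24),
}

for name, (coupon, years) in bonds.items():
    cf = bond_price(coupon, years, 0.06) / 100  # conversion factor: price at 6% yield
    px = bond_price(coupon, years, 0.03)        # market price with yields near 3%
    print(f"{name}: converted price = {px / cf:.1f}")

# With yields well below 6%, the short-duration bond has the lowest
# converted price (price / conversion factor): it is cheapest to deliver.
```

Run it and the short-duration, high-coupon bond comes out with a converted price far below the long-duration bond, mirroring the $142-versus-$154-and-up gap described above.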

The futures contract prices the CTD bond. So if someone-or someone’s algo-believed that the 6.25 of 2030 was in the deliverable basket, they would have calculated the no-arb price as being around $142. But that bond isn’t in the basket, so the no-arb value of the contract is above $150. Apparently the guy* who didn’t get the memo merrily offered the June future at $142 in the mistaken belief that it was near fair value.

Ruh-roh.

After selling quite a few contracts, the memo non-reader wised up, and the price jumped up to over $150, which reflected the real deliverable basket, not the imaginary one.

This price move was “humongous” given that implied vol is around 6 percent. That’s an annualized number, meaning that the move on a single day was more than a one-sigma annual move. I was being very cautious by saying this magnitude move would be expected to occur over weeks or months. But that’s what happens when the reporter catches me in the gym rather than at my computer.
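The arithmetic behind "humongous" is straightforward (252 trading days assumed):

```python
import math

# How big was the 7.3% one-day move, given ~6% annualized implied vol?
annual_vol = 0.06
daily_vol = annual_vol / math.sqrt(252)  # scale annual vol to one trading day
move = 0.073

print(f"daily sigma ~ {daily_vol:.4%}")                 # about 0.38% per day
print(f"move in daily sigmas: {move / daily_vol:.1f}")  # roughly 19 sigma
```

A 19-sigma daily move, and a move bigger than a one-sigma *annual* move, in a single session.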

This wasn’t a fat-finger error. This was a fat-head error. It cost somebody a good deal of money, and made some others very happy.

So word up, traders (and programmers): always read the memos from your friendly local exchange.

*Or gal, as Mary Childs pointed out on Twitter.


October 13, 2014

You Might Have Read This Somewhere Before. Like Here.

The FT has a long article by John Dizard raising alarms about the systemic risks posed by CCPs. The solution, in other words, might be the problem.

Where have I read that before?

The article focuses on a couple of regulatory reports that have also raised the alarm:

No, I am referring to reports filed by the wiring and plumbing inspectors of the CCPs. For example, the International Organization for Securities Commissions (a name that could only be made duller by inserting the word “Canada”) issued a report this month on the “Securities Markets Risk Outlook 2014-2015”. I am not going to attempt to achieve the poetic effect of the volume read as a whole, so I will skip ahead to page 85 to the section on margin calls.

Talking (again) about the last crisis, the authors recount: “When the crisis materialised in 2008, deleveraging occurred, leading to a pro-cyclical margin spiral (see figure 99). Margin requirements also have the potential to cause pro-cyclical effects in the cleared markets.” The next page shows figure 99, an intriguing cartoon of a margin spiral, with haircuts leading to more haircuts leading to “liquidate position”, “further downward pressure” and “loss on open positions”. In short, do not read it to the children before bedtime.

This margin issue is exactly what I’ve been on about for six years now. Good that regulators are finally waking up to it, though it’s a little late in the day, isn’t it?

I chuckle at the children before bedtime line. I often say that I should give my presentations on the systemic risk of CCPs while sitting by a campfire holding a flashlight under my chin.

I don’t chuckle at the fact that other regulators seem rather oblivious to the dangers inherent in what they’ve created:

While supervisory institutions such as the Financial Stability Oversight Council are trying to fit boring old life insurers into their “systemic” regulatory frameworks, they seem to be ignoring the degree to which the much-expanded clearing houses are a threat, not a solution. Much attention has been paid, publicly, to how banks that become insolvent in the future will have their shareholders and creditors bailed in to the losses, their managements dismissed and their corporate forms put into liquidation. But what about the clearing houses? What happens to them when one or more of their participants fail?

I call myself the Clearing Cassandra precisely because I have been prophesying so for years, but the FSOC and others have largely ignored such concerns.

Dizard starts out his piece quoting Dallas Fed President Richard Fisher comparing macroprudential regulation to the Maginot Line. Dizard notes that others have made similar Maginot Line comparisons post-crisis, and says that this is unfair to the Maginot Line because it was never breached: the Germans went around it.

I am one person who has made this comparison specifically in the context of CCPs, most recently at Camp Alphaville in July. But my point was exactly that the creation of impregnable CCPs would result in the diversion of stresses to other parts of the financial system, just like the Maginot line diverted the Germans into the Ardennes, where French defenses were far more brittle. In particular, CCPs are intended to eliminate credit risk, but they do so by creating tremendous demands for liquidity, especially during crisis times. Since liquidity risk is, in my view, far more dangerous than credit risk, this is not obviously a good trade off. The main question becomes: During the next crisis, where will be the financial Sedan?

I take some grim satisfaction that arguments that I have made for years are becoming conventional wisdom, or at least widespread among those who haven’t imbibed the Clearing Kool Aid. Would that it had happened before legislators and regulators around the world embarked on the vastest re-engineering of world financial markets ever attempted, and did so with their eyes wide shut.


October 7, 2014

Manipulation Prosecutions: Going for the Capillaries, Ignoring the Jugular

Filed under: Commodities,Derivatives,Economics,Energy,Exchanges,Politics,Regulation — The Professor @ 7:32 pm

The USDOJ has filed criminal charges against a trader named Michael Coscia for “spoofing” CME and ICE futures markets. Frankendodd made spoofing a crime.

What is spoofing? It’s the futures market equivalent of Lucy and the football. A trader submits buy (sell) orders above (below) the inside market in the hope that this convinces other market participants that there is strong demand (supply) for (of) the futures contract. If others are so fooled, they will raise their bids (lower their offers). Right before they do this, the spoofer pulls his orders just like Lucy pulls the football away from Charlie Brown, and then hits (lifts) the higher (lower) bids (offers). If the pre-spoof prices are “right”, the post-spoof bids (offers) are too high (too low), which means the spoofer sells high and buys low.

Is this inefficient? Yeah, I guess. Is it a big deal? Color me skeptical, especially since the activity is self-correcting. The strategy works if those at the inside market, who these days are likely to be HFT firms, consider the away-from-the-market spoofing orders to be informative. But they aren’t. The HFT firms at the inside market who respond to the spoof will lose money. They will soon figure this out, and won’t respond to the spoofs any more: they will deem away-from-the-market orders uninformative. Problem solved.
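The Lucy-and-the-football sequence can be written out as a stylized event script (all prices, sizes, and the one-tick profit are hypothetical):

```python
# Stylized spoof sequence (illustrative numbers). The spoofer never intends
# the away-from-market orders to trade; they exist to move the inside market.

best_bid, best_ask = 99.0, 100.0
tick = 1.0

# 1. Spoofer posts large buy orders below the best bid, feigning demand.
spoof_bids = [(98.0, 500), (97.0, 500)]

# 2. A naive market maker reads the size as real demand and raises its quotes.
best_bid, best_ask = best_bid + tick, best_ask + tick  # now 100.0 / 101.0

# 3. Spoofer pulls the football: cancels the spoof orders...
spoof_bids.clear()

# 4. ...sells into the inflated bid, and buys back once quotes revert.
sell_price = best_bid            # 100.0
revert_bid = sell_price - tick   # 99.0 after the book snaps back
profit_per_lot = sell_price - revert_bid
print(profit_per_lot)            # 1.0: one tick per lot, if anyone is fooled
```

Once the market maker stops treating away-from-market size as informative, step 2 never happens and the strategy earns nothing.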

But the CFTC (and now DOJ, apparently) are obsessed with this, and other games for ticks. They pursue these activities with Javert-like mania.

What makes this maddening to me is that while obsessing over ticks gained by spoofs or other HFT strategies, regulators have totally overlooked corners that have distorted prices by many, many ticks.

I know of two market operations in the last ten years plausibly involving major corners that have arguably imposed mid-nine-figure losses on futures market participants, and in one of the cases, possibly ten-figure losses. Yes, we are talking hundreds of millions and perhaps more than a billion. To put things in context, Coscia is alleged to have made a whopping $1.6 million. That is, two or three orders of magnitude less than the losses involved in these corners.

And what have CFTC and DOJ done in these cases? Exactly bupkus. Zip. Nada. Squat.

Why is that? Part of the explanation is that previous CFTC decisions in the 1980s were economically incoherent, and have posed substantial obstacles to winning a verdict: I wrote about this almost 20 years ago, in a Washington & Lee Law Review article. But I doubt that is the entire story, especially since one of the cases is post-Frankendodd, and hence one of the legal obstacles that the CFTC complains about (relating to proving intent) has been eliminated.

The other part of the story is too big to jail. Both of the entities involved are very major players in their respective markets. Very major. One has been very much in the news lately.

In other words, the CFTC is likely intimidated by-and arguably captured by-those it is intended to police because they are very major players.

The only recent exception I can think of-and by recent, I mean within the last 10 years-is the DOJ’s prosecution of BP for manipulating the propane market. But BP was already in the DOJ’s sights because of the Texas City explosion. Somebody dropped the dime on BP for propane, and DOJ used that to turn up the heat on BP. BP eventually agreed to a deferred prosecution agreement, in which it paid a $100 million fine to the government, and paid $53 million into a restitution fund to compensate any private litigants.

The Commodity Exchange Act specifically proscribes corners. Corners occur. But the CFTC never goes after corners, even if they cost market participants hundreds of millions of dollars. Probably because corners that cost market participants nine or ten figures can only be carried out by firms that can hire very expensive lawyers and who have multiple congressmen and senators on speed dial.

Instead, the regulators go after much smaller fry so they can crow about how tough they are on wrongdoers. They go after shoplifters, and let axe murderers walk free. Going for the capillaries, ignoring the jugular.

All this said, I am not a fan of criminalizing manipulation. Monetary fines-or damages in private litigation-commensurate to the harm imposed will have the appropriate deterrent effect.

The timidity of regulators in going after manipulators is precisely why a private right of action in manipulation cases is extremely important. (Full disclosure: I have served as an expert in such cases.)

One last comment about criminal charges in manipulation cases. The DOJ prosecuted the individual traders in the propane corner. Judge Miller in the Houston Division of the Southern District of Texas threw out the cases, on the grounds that the Commodity Exchange Act’s anti-manipulation provisions are unconstitutionally vague. Now this is only a district court decision, and the anti-spoofing case will be brought under new provisions of the CEA adopted as the result of Dodd-Frank. Nonetheless, I think it is highly likely that Coscia will raise the same defense (as well as some others). It will be interesting to see how this plays out.

But regardless of how it plays out, regulators’ obsession with HFT games stands in stark contrast with their conspicuous silence on major corner cases. Given that corners can cause major dislocations in markets, and completely undermine the purposes of futures markets-risk transfer and price discovery-this imbalance speaks very ill of the priorities-and the gumption (I cleaned that up)-of those charged with policing US futures markets.


September 10, 2014

SEFs: The Damn Dogs Won’t Eat It!

Filed under: Derivatives,Economics,Exchanges,Politics,Regulation — The Professor @ 8:37 pm

There’s an old joke about a pet food manufacturer that mounts an all out marketing campaign for its new brand of dog food. It pulls out all the stops. Celebrity endorsements. Super Bowl Ad. You name it. But sales tank. The CEO calls the head of marketing onto the carpet and demands an explanation for the appalling sales. The marketing guy answers: “It’s those damn dogs. They just won’t eat the stuff.”

That joke came to mind when reading about the CFTC’s frustration at the failure of SEFs to get traction. Most market participants avoid using central limit order books (CLOBs), and prefer to trade by voice or Requests for Quotes (RFQs):

“The biggest surprise for me is the lack of interest from the buyside for [central limit order books or CLOB],” Michael O’Brien, director of global trading at Eaton Vance, said at the International Swaps and Derivatives Association conference in New York. “The best way to break up the dual market structure and boost transparency is through using a CLOB and I’m surprised at how slow progress has been.”

About two dozen Sefs have been established in the past year, but already some of these venues are struggling to register a presence. Instead, incumbent market players who have always dominated the swaps market are winning under the new regulatory regime, with the bulk of trading being done through Bloomberg, Tradeweb and interdealer brokers including Icap, BGC and Tradition.

“It’s still very early,” Mr Massad told the FT. “The fact that we’re getting a decent volume of trading is encouraging but we are also looking at various issues to see how we can facilitate more trading and transparency.”

Regulators are less concerned about having a specific number of Sefs since the market is still sorting out which firms can serve their clients the best under the new regulatory system. What officials are watching closely is the continued use of RFQ systems rather than the transparent central order booking structure.

Not to say I told you so, but I told you so. I knew the dogs, and I knew they wouldn’t like the food.

This is why I labeled the SEF mandate as The Worst of Dodd Frank. It was a solution in search of a non-existent problem. It took a one-size-fits-all approach, predicated on the view that centralized order driven markets are the best way to execute all transactions. It obsessed over pre-trade and post-trade price transparency, and totally overlooked the importance of counterparty transparency.

There is a diversity of trading mechanisms in virtually every financial market. Some types of trades and traders are economically executed in anonymous, centralized auction markets with pre- and post-trade price transparency. Other types, namely big wholesale trades involving those trading to hedge or to rebalance portfolios rather than to exploit information advantages, are most efficiently negotiated and executed face-to-face, with little (or delayed) post-trade price disclosure. This is why upstairs block markets have always existed in stocks, and why dark pools exist now. It is one reason why OTC derivatives markets operated side-by-side with futures markets offering similar products.

As I noted at the time, sophisticated buy siders in derivatives markets had the opportunity to trade in futures markets but chose to trade OTC. Moreover, the buy side was very resistant to the SEF mandate despite the fact that they were the supposed beneficiaries of a more transparent (in some dimensions!) and more competitive (allegedly) trading mechanism. The Frankendodd crowd argued that SEFs would break a cabal of dealers that exploited their customers and profited from the opacity of the market.

But the customers weren’t buying it. So you had to believe that either they knew what they were talking about, or were the victims of Stockholm Syndrome leaping to the defense of the dealers that held them captive.

My mantra was a diversity of mechanisms for a diversity of trades and traders.  Frankendodd attempts to create a monoculture and impose a standardized market structure for all participants. It says to the buy side: here’s your dinner, and you’ll like it, dammit! It’s good for you!

But the buy side knows what it likes, and is pushing away the bowl.

Print Friendly

July 25, 2014

Benchmark Blues

Pricing benchmarks have been one of the casualties of the financial crisis. Not because the benchmarks (like Libor, Platts’ Brent window, ISDA Fix, the Reuters FX window or the gold fix) contributed in any material way to the crisis. Instead, the post-crisis scrutiny of the financial sector turned over a lot of rocks, and among the vermin crawling underneath were abuses of benchmarks.

Every major benchmark has fallen under deep suspicion, and has been the subject of regulatory action or class action lawsuits. Generalizations are difficult because every benchmark has its own problems. It is sort of like what Tolstoy said about unhappy families: every flawed benchmark is flawed in its own way. Some, like Libor, are vulnerable to abuse because they are constructed from the estimates/reports of interested parties. Others, like the precious metals fixes, are problematic due to a lack of transparency and limited participation. Declining production and large parcel sizes bedevil Brent.

But some basic conclusions can be drawn.

First, and this should have been apparent in the immediate aftermath of the natural gas price reporting scandals of the early 2000s, benchmarks based on the reports of self-interested parties, rather than actual transactions, are fundamentally flawed. In my energy derivatives class I tell the story of AEP, which the government discovered kept a file called “Bogus IFERC.xls” (IFERC being an abbreviation for Inside Ferc, the main price reporting publication for gas and electricity) that included thousands of fake transactions that the utility reported to Platts.

Second, and somewhat depressingly, although benchmarks based on actual transactions are preferable to those based on reports, in many markets the number of transactions is small. Even if transactors do not attempt to manipulate, the limited number of transactions tends to inject some noise into the benchmark value. What’s more, benchmarks based on a small number of transactions can be influenced by a single trade or a small number of trades, thereby creating the potential for manipulation.
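The mechanics are simple enough to sketch. Here is a hypothetical illustration (my construction, with made-up numbers, not data from any case discussed here): when a benchmark window contains only a handful of trades, a single large print can drag a volume-weighted benchmark well away from where the market actually traded.

```python
def vwap_benchmark(trades):
    """Volume-weighted average price over (price, volume) pairs."""
    total_volume = sum(vol for _, vol in trades)
    return sum(px * vol for px, vol in trades) / total_volume

# A thin market: three arm's-length trades near $100 in the window.
thin_market = [(100.0, 10), (100.2, 12), (99.9, 8)]
clean = vwap_benchmark(thin_market)

# One aggressive 50-lot print at $102 dominates the weighting.
manipulated = vwap_benchmark(thin_market + [(102.0, 50)])

print(round(clean, 2), round(manipulated, 2))   # 100.05 101.27
```

A trader with a large derivatives position cash-settled against this benchmark may find the cost of that one aggressive trade small relative to the payoff it produces.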

I refer to this as the bricks without straw problem. Just like the Jews in Egypt were confounded by Pharaoh’s command to make bricks without straw, modern market participants are stymied in their attempts to create benchmarks without trades. This is a major problem in some big markets, notably Libor (where there are few interbank unsecured loans) and Brent (where large parcel sizes and declining Brent production mean that there are relatively few trades: Platts has attempted to address this problem by expanding the eligible cargoes to include Ekofisk, Oseberg, and Forties, and some baroque adjustments based on CFD and spread trades and monthly forward trades). This problem is not amenable to an easy fix.

Third, and perhaps even more depressingly, even transaction-based benchmarks derived from markets with a decent amount of trading activity are vulnerable to manipulation, and the incentive to manipulate is strong. Some changes can be made to mitigate these problems, but they can’t be eliminated through benchmark design alone. Some deterrence mechanism is necessary.

The precious metals fixes provide a good example of this. The silver and gold fixes have historically been based on transactions prices from an auction that Walras would recognize. But participation was limited, and some participants had the market power and the incentive to use it, and have evidently pushed prices to benefit related positions. For instance, in the recent allegation against Barclays, the bank could trade in sufficient volume to move the fix price sufficiently to benefit related positions in digital options. When there is a large enough amount of derivatives positions with payoffs tied to a benchmark, someone has the incentive to manipulate that benchmark, and many have the market power to carry out those manipulations.

The problems with the precious metals fixes have led to their redesign: a new silver fix method has been established and will go into effect next month, and the gold fix will be modified, probably along similar lines. The silver fix will replace the old telephone auction that operated via a few members trading on their own account and representing customer orders with a more transparent electronic auction operated by CME and Reuters. This will address some of the problems with the old fix. In particular, it will reduce the information advantage that the fixing dealers had that allowed them to trade profitably on other markets (e.g., gold futures and OTC forwards and options) based on the order flow information they could observe during the auction. Now everyone will be able to observe the auction via a screen, and participants will be less vulnerable to being picked off in other markets. It is unlikely, however, that the new mechanism will mitigate the market power problem. Big trades will move markets in the new auction, and firms with positions that have payoffs that depend on the auction price may have an incentive to make those big trades to advantage those positions.

Along these lines, it is important to note that many liquid and deep futures markets have been plagued by “bang the close” problems. For instance, Amaranth traded large volumes in the settlement period of expiring natural gas futures during three months of 2006 in order to move prices in ways that benefited its OTC swaps positions. The CFTC recently settled with the trading firm Optiver that allegedly banged the close in crude, gasoline, and heating oil in March, 2007. These are all liquid and deep markets, but are still vulnerable to “bullying” (as one Optiver trader characterized it) by large traders.

The incentives to cause an artificial price for any major benchmark will always exist, because one of the main purposes of benchmarks is to provide a mechanism for determining cash flows for derivatives. The benchmark-derivatives market situation resembles an inverted pyramid, with large amounts of cash flows from derivatives trades resting on a relatively small number of spot transactions used to set the benchmark value.

One way to try to ameliorate this problem is to expand the number of transactions at the point of the pyramid by expanding the window of time over which transactions are collected for the purpose of calculating the benchmark value: this has been suggested for the Platts Brent market, and for the FX fix. A couple of remarks.

First, although this would tend to mitigate market power, it may not be sufficient to eliminate the problem: Amaranth manipulated a price that was based on a VWAP over a relatively long 30-minute interval. In contrast, in the Moore case (a manipulation case involving platinum and palladium brought by the CFTC) and Optiver, the windows were only two minutes long.

Second, there are some disadvantages to widening the window. Some market participants prefer a benchmark that reflects a snapshot of the market at a point in time, rather than an average over a period of time. This is why Platts vociferously resists calls to extend the duration of its pricing window. There is a tradeoff in sources of noise. A short window is more affected by the larger sampling error inherent in the smaller number of transactions that occur in a shorter interval, and by the noise resulting from greater susceptibility to manipulation when a benchmark is based on a smaller number of trades. However, an average taken over a time interval is a noisy estimate of the price at any point of time during that interval, due to the random fluctuations in the “true” price driven by information flow. I’ve done some numerical experiments, and either the sampling error/manipulation noise has to be pretty large, or the volatility of the “true” price must be pretty low, for it to be desirable to move to a longer interval.
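This tradeoff lends itself to simulation. Here is a minimal sketch of the kind of numerical experiment described, under assumed toy dynamics (a random-walk “true” price plus i.i.d. sampling/manipulation noise on each observed trade; all parameters are illustrative, not calibrated to any real market):

```python
import random

def benchmark_error(window_trades, sigma_true, sigma_noise,
                    trials=5000, seed=42):
    """RMS error of a window-averaged benchmark relative to the 'true'
    price at the end of the window (the fixing time)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        true_price = 100.0
        observed = []
        for _ in range(window_trades):
            true_price += rng.gauss(0.0, sigma_true)                   # information flow
            observed.append(true_price + rng.gauss(0.0, sigma_noise))  # sampling/manipulation noise
        avg = sum(observed) / window_trades
        total += (avg - true_price) ** 2
    return (total / trials) ** 0.5

# Low "true" volatility: averaging more trades reduces the error...
print(benchmark_error(20, 0.01, 1.0) < benchmark_error(2, 0.01, 1.0))   # True
# ...high "true" volatility: the longer window's staleness dominates.
print(benchmark_error(20, 1.0, 0.01) > benchmark_error(2, 1.0, 0.01))   # True
```

The sketch reproduces the qualitative conclusion: lengthening the window only pays off when per-trade noise is large relative to fundamental volatility.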

Other suggestions include encouraging diversity in benchmarks. The other FSB (the Financial Stability Board) recommends this. Darrell Duffie and Jeremy Stein lay out the case here (which is a lot easier read than the 750+ pages of the longer FSB report).

Color me skeptical. Duffie and Stein recognize that the market has a tendency to concentrate on a single benchmark. It is easier to get into and out of positions in a contract which is similar to what everyone else is trading. This leads to what Duffie and Stein call “the agglomeration effect,” which I would refer to as a “tipping” effect: the market tends to tip to a single benchmark. This is what happened in Libor. Diversity is therefore unlikely in equilibrium, and the benchmark that survives is likely to be susceptible to either manipulation, or the bricks without straw problem.

Of course not all potential benchmarks are equally susceptible. So it would be good if market participants coordinated on the best of the possible alternatives. As Duffie and Stein note, there is no guarantee that this will be the case. This brings to mind the as yet unresolved debate over standard setting generally, in which some argue that the market’s choice of VHS over the allegedly superior Betamax technology, or the dominance of QWERTY over the purportedly better Dvorak keyboard (or Word vs. Word Perfect) demonstrate that the selection of a standard by a market process routinely results in a suboptimal outcome, but where others (notably Stan Liebowitz and Stephen Margolis) argue that these stories of market failure are fairy tales that do not comport with the actual histories. So the relevance of the “bad standard (benchmark) market failure” is very much an open question.

Duffie and Stein suggest that a wise government can make things better:

This is where national policy makers come in. By speaking publicly about the advantages of reform — or, if necessary, by using their power to regulate — they can guide markets in the desired direction. In financial benchmarks as in tap water, markets might not reach the best solution on their own.

Putting aside whether government regulators are indeed so wise in their judgments, there is  the issue of how “better” is measured. Put differently: governments may desire a different direction than market participants.

Take one of the suggestions that Duffie and Stein raise as an alternative to Libor: short term Treasuries. It is almost certainly true that there is more straw in the Treasury markets than in any other rates market. Thus, a Treasury bill-based benchmark is likely to be less susceptible to manipulation than any other market. (Though not immune altogether, as the Pimco episode in June ’05 10 Year T-notes, the squeezes in the long bond in the mid-to-late-80s, the Salomon 2 year squeeze in 92, and the chronic specialness in some Treasury issues prove.)

But that’s not of much help if the non-manipulated benchmark is not representative of the rates that market participants want to hedge. Indeed, when swap markets started in the mid-80s, many contracts used Treasury rates to set the floating leg. But the basis between Treasury rates, and the rates at which banks borrowed and lent, was fairly variable. So a Treasury-based swap contract had more basis risk than Libor-based contracts. This is precisely why the market moved to Libor, and when the tipping process was done, Libor was the dominant benchmark not just for derivatives but floating rate loans, mortgages, etc.

Thus, there may be a trade-off between basis risk and susceptibility to manipulation (or to noise arising from sampling error due to a small number of transactions or averaging over a wide time window). Manipulation can lead to basis risk, but it can be smaller than the basis risk arising from a quality mismatch (e.g., a credit risk mismatch between default risk-free Treasury rates and a defaultable rate that private borrowers pay). I would wager that regulators would prefer a standard that is less subject to manipulation, even if it has more basis risk, because they don’t internalize the costs associated with basis risk. Market participants may have a very different opinion. Therefore, the “desired direction” may depend very much on whom you ask.

Putting all this together, I conclude we live in a fallen world. There is no benchmark Eden. Benchmark problems are likely to be chronic for the foreseeable future. And beyond. Some improvements are definitely possible, but benchmarks will always be subject to abuse. Their very source of utility, that they are a visible price that can be used to determine payoffs on vast sums of other contracts, always provides a temptation to manipulate.

Moving to transactions-based mechanisms eliminates outright lying as a manipulation strategy, but it does not eliminate the potential for market power abuses. The benchmarks that would be least vulnerable to market power abuses are not necessarily the ones that best reflect the exposures that market participants face.

Thus, we cannot depend on benchmark design alone to address manipulation problems. The means, motive, and opportunity to manipulate even transactions-based benchmarks will endure. This means that reducing the frequency of manipulation requires some sort of deterrence mechanism, either through government action (as in the Libor, Optiver, Moore, and Amaranth cases) or private litigation (examples of which include all the aforementioned cases, plus some more, like Brent).  It will not be possible to “solve” the benchmark problems by designing better mechanisms, then riding off into the sunset like the Lone Ranger. Our work here will never be done, Kimo Sabe.*

* Stream of consciousness/biographical detail of the day. The phrase “Kimo Sabe” was immortalized by Jay Silverheels as Tonto in the original Lone Ranger TV series. My GGGGF, Abel Sherman, was slain and scalped by an Indian warrior named Silverheels during the Indian War in Ohio in 1794. Silverheels made the mistake of bragging about his feat to a group of lumbermen, who just happened to include Abel’s son. Silverheels was found dead on a trail in the woods the next day, shot through the heart. Abel (a Revolutionary War vet) was reputedly the last white man slain by Indians in Washington County, OH. His tombstone is on display in the Campus Martius museum in Marietta. The carving on the headstone is very un-PC. It reads:

Here lyes the body of Abel Sherman who fell by the hand of the Savage on the 15th of August 1794, and in the 50th year of  his age.

Here’s a picture of it:


The stream by which Abel was killed is still known as Dead Run, or Dead Man’s Run.

Print Friendly

July 21, 2014

Doing Due Diligence in the Dark

Filed under: Exchanges,HFT,Regulation — The Professor @ 8:39 pm

Scott Patterson, WSJ reporter and the author of Dark Pools, has a piece in today’s Journal about the Barclays LX story. He finds, lo and behold, that several users of the pool had determined that they were getting poor executions:

Trading firms and employees raised concerns about high-speed traders at Barclays PLC’s dark pool months before the New York attorney general alleged in June that the firm lied to clients about the extent of predatory trading activity on the electronic trading venue, according to people familiar with the firms.

Some big trading outfits noticed their orders weren’t getting the best treatment on the dark pool, said people familiar with the trading. The firms began to grow concerned that the poor results resulted from high-frequency trading, the people said.

In response, at least two firms—RBC Capital Markets and T. Rowe Price Group Inc —boosted the minimum number of shares they would trade on the dark pool, letting them dodge high-speed traders, who often trade in small chunks of 100 or 200 shares, the people said.

This relates directly to a point that I made in my post on the Barclays story. Trading is an experience good. Dark pool customers can evaluate the quality of their executions. If a pool is not screening out opportunistic traders, execution costs will be high relative to other venues that do a better job of screening, and users who monitor their execution costs will detect this. Regardless of what a dark pool operator says about what it is doing, the proof of the pudding is in the trading, as it were.
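What does evaluating execution quality look like in practice? One common buy-side diagnostic is a post-trade “markout”: compare each fill to the market midpoint a short interval later. This is a hedged sketch of the idea, not any firm’s actual methodology; the data layout is invented for illustration.

```python
def average_markout_bps(fills):
    """Average signed markout in basis points. Each fill is
    (side, fill_price, mid_price_later) with side +1 = buy, -1 = sell.
    Persistently negative values mean the market moves against you
    after you trade -- a signature of opportunistic counterparties."""
    total = 0.0
    for side, fill_px, mid_later in fills:
        total += side * (mid_later - fill_px) / fill_px * 1e4
    return total / len(fills)

fills = [
    (+1, 10.00, 9.98),    # bought, mid then fell: adverse
    (-1, 10.00, 10.03),   # sold, mid then rose: adverse
    (+1, 10.00, 10.01),   # bought, mid rose: benign
]
print(round(average_markout_bps(fills), 2))   # -13.33
```

A pool whose fills consistently show worse markouts than comparable venues is, by revealed preference, letting toxic flow trade against you, whatever its marketing materials say.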

The Patterson article shows that at least some buy side firms do the necessary analysis, and can detect a pool that does not exclude toxic flows.

This long FT piece relies extensively on quotes from Hirander Misra, one of the founders of Chi-X, to argue that many fund managers have been ignorant of the quality of executions they get on dark pools. The article talked to two anonymous fund managers who say they don’t know how dark pools work.

The stated implication here is that regulation is needed to protect the buy side from unscrupulous pool operators.

A couple of comments. First, not knowing how a pool works doesn’t really matter. Measures of execution quality are what matter, and these can be measured. I don’t know all of the technical details of the operation of my car or the computer I am using, but I can evaluate their performances, and that’s what matters.

Second, this is really a cost-benefit issue. Monitoring of performance is costly. But so is regulation and litigation. Given that market participants have the biggest stake in measuring pool performance properly, and can develop more sophisticated metrics, there are strong arguments in favor of relying on monitoring.  Regulators can, perhaps, see whether a dark pool does what it advertises it will do, but this is often irrelevant because it does not necessarily correspond closely to pool execution costs, which is what really matters.

Interestingly, one of the things that got a major dark pool (Liquidnet) in trouble was that it shared information about the identities of existing clients with prospective clients. This presents interesting issues. Sharing such information could economize on monitoring costs. If a big firm (like a T. Rowe) trades in a pool, this can signal to other potential users that the pool does a good job of screening out the opportunistic. This allows them to free ride off the monitoring efforts of the big firm.

Another illustration of how things are never simple and straightforward when analyzing market structure.

One last point. Some of the commentary I’ve read recently uses the prevalence of HFT volume in a dark pool as a proxy for how much opportunistic trading goes on in the pool. This is a very dangerous shortcut because, as I (and others) have written repeatedly, there are many different kinds of HFT. Some add to liquidity, some consume it, and some may be outright toxic/predatory. Market-making HFT can enhance dark pool liquidity, which is probably why dark pools encourage HFT participation. Indeed, it is hard to understand how a pool could benefit from encouraging the participation of predatory HFT, especially if it lets such firms trade for free. This drives away the paying customers, particularly when the paying customers evaluate the quality of their executions.

Evaluating execution quality and cost could be considered a form of institutional trader due diligence. Firms that do so can protect themselves, and their investor-clients, from opportunistic counterparties. Even though the executions are done in the dark, it is possible to shine a light on the results. The WSJ piece shows that many firms do just that. The question of whether additional regulation is needed boils down to whether the cost and efficacy of these self-help efforts are superior to those of regulation.

Print Friendly

July 15, 2014

Oil Futures Trading In Troubled Waters

Filed under: Commodities,Derivatives,Economics,Energy,Exchanges,HFT,Regulation — The Professor @ 7:16 pm

A recent working paper by Pradeep Yadav, Michel Robe and Vikas Raman tackles a very interesting issue: do electronic market makers (EMMs, typically HFT firms) supply liquidity differently than locals did on the floor during its heyday? The paper has attracted a good deal of attention, including this article in Bloomberg.

The most important finding is that EMMs in crude oil futures do tend to reduce liquidity supply during high volatility/stressed periods, whereas crude futures floor locals did not. They explain this by invoking an argument I made 20 years ago in my research comparing the liquidity of floor-based LIFFE to the electronic DTB: the anonymity of electronic markets makes market makers there more vulnerable to adverse selection. From this, the authors conclude that an obligation to supply liquidity may be desirable.

These empirical conclusions seem supported by the data, although as I describe below the scant description of the methodology and some reservations based on my knowledge of the data make me somewhat circumspect in my evaluation.

But my biggest problem with the paper is that it seems to miss the forest for the trees. The really interesting question is whether electronic markets are more liquid than floor markets, and whether the relative liquidity in electronic and floor markets varies between stressed and non-stressed markets. The paper provides some intriguing results that speak to that question, but then the authors ignore it altogether.

Specifically, Table 1 has data on spreads from the electronic NYMEX crude oil market in 2011, and from the floor NYMEX crude oil market in 2006. The mean and median spreads in the electronic market: .01 percent. Given a roughly $100 price, this corresponds to one tick ($.01) in the crude oil market. The mean and median spreads in the floor market: .35 percent and .25 percent, respectively.

Think about that for a minute. Conservatively, spreads were 25 times higher in the floor market. Even adjusting for the fact that prices in 2011 were almost double those in 2006, we’re talking a 12-fold difference in absolute (rather than percentage) spreads. That is just huge.
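For concreteness, the arithmetic can be checked in a couple of lines (the spread levels are the medians from the paper as quoted above; the prices are rough levels, taking the 2006 price to be about half the 2011 price):

```python
median_spread_floor = 0.0025   # 0.25% of price (floor market, 2006)
median_spread_elec  = 0.0001   # 0.01% of price (electronic market, 2011)

# Ratio in percentage terms, then in dollar terms after halving
# the percentage ratio to reflect the roughly doubled price level.
pct_ratio = median_spread_floor / median_spread_elec
abs_ratio = pct_ratio * (50.0 / 100.0)
print(round(pct_ratio), round(abs_ratio, 1))   # 25 12.5
```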

So even if EMMs are more likely to run away during stressed market conditions, the electronic market wins hands down in the liquidity race on average. Hell, it’s not even a race. Indeed, the difference is so large I have a hard time believing it, which raises questions about the data and methodologies.

This raises another issue with the paper. The paper compares the liquidity supply mechanisms in electronic and floor markets. Specifically, it examines the behavior of market makers in the two different types of markets. But what we are really interested in is the outcome of these mechanisms. Therefore, given the rich data set, the authors should compare measures of liquidity in stressed and non-stressed periods, and make comparisons between the electronic and floor markets. What’s more, they should examine a variety of different liquidity measures. There are multiple measures of spreads, some of which specifically measure adverse selection costs. It would be very illuminating to see those measures across trading mechanisms and market environments. Moreover, depth and price impact are also relevant. Let’s see those comparisons too.

It is quite possible that the ratio of liquidity measures in good and bad times is worse in electronic trading than on the floor, but in any given environment, the electronic market is more liquid. That’s what we really want to know about, but the paper is utterly silent on this. I find that puzzling and rather aggravating, actually.

Insofar as the policy recommendation is concerned, as I’ve been writing since at least 2010, the fact that market makers withdraw supply during periods of market stress does not necessarily imply that imposing obligations to make markets even during stressed periods is efficiency enhancing. Such obligations force market makers to incur losses when the constraints bind. Since entry into market making is relatively free, and the market is likely to be competitive (the paper states that there are 52 active EMMS in the sample), raising costs in some state of the world, and reducing returns to market making in these states, will lead to the exit of market making capacity. This will reduce liquidity during unstressed periods, and could even lead to less liquidity supply in stressed periods: fewer firms offering more liquidity than they would otherwise choose due to an obligation may supply less liquidity in aggregate than a larger number of firms that can each reduce liquidity supply during stressed periods (because they are not obligated to supply a minimum amount of liquidity).

In other words, there is no free lunch. Even assuming that EMMs are more likely to reduce supply during stressed periods than locals, it does not follow that a market making obligation is desirable in electronic environments. The putatively higher cost of supplying liquidity in an electronic environment is a feature of that environment. Requiring EMMs to bear that cost means that they have to recoup it at other times. Higher cost is higher cost, and the piper must be paid. The finding of the paper may be necessary to justify a market maker obligation, but it is clearly not sufficient.

There are some other issues that the authors really need to address. The descriptions of the methodologies in the paper are far too scanty. I don’t believe that I could replicate their analysis based on the description in the paper. As an example, they say “Bid-Ask Spreads are calculated as in the prior literature.” Well, there are many papers, and many ways of calculating spreads. Hell, there are multiple measures of spreads. A more detailed statement of the actual calculation is required in order to know exactly what was done, and to replicate it or to explore alternatives.

Comparisons between electronic and open outcry markets are challenging because the nature of the data are very different. We can observe the order book at every instant of time in an electronic market. We can also sequence everything (quotes, cancellations and trades) with exactitude. (In futures markets, anyways. Due to the lack of clock synchronization across trading venues, this is a problem in a fragmented market like US equities.) These factors mean that it is possible to see whether EMMs take liquidity or supply it: since we can observe the quote, we know that if an EMM sells (buys) at the offer (bid) it is supplying liquidity, but if it buys (sells) at the offer (bid) it is consuming liquidity.
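The supply/consume inference is mechanical once the prevailing quote is observable. A simplified sketch of that classification rule (my construction; it ignores mid-quote prints and hidden orders, which real classification schemes have to handle):

```python
def classify_liquidity(side, price, best_bid, best_ask):
    """Classify a trade as 'supplying' or 'consuming' liquidity, given the
    prevailing best bid/ask at execution. side is 'buy' or 'sell'."""
    if side == 'buy':
        # Buying at the offer lifts a resting order: consuming liquidity.
        # Buying at the bid means your resting bid was hit: supplying.
        return 'consuming' if price >= best_ask else 'supplying'
    else:
        # Selling at the bid hits a resting order: consuming liquidity.
        # Selling at the offer means your resting offer was lifted: supplying.
        return 'consuming' if price <= best_bid else 'supplying'

print(classify_liquidity('buy', 100.01, 100.00, 100.01))   # consuming
print(classify_liquidity('sell', 100.01, 100.00, 100.01))  # supplying
```

No such inference is possible from a Street Book that records neither the prevailing quote nor the order types, which is exactly the problem with the floor data described below.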

Things are not nearly so neat in floor trading data. I have worked quite a bit with exchange Street Books. They convey much less information than the order book and the record of executed trades in electronic markets like Globex. Street Books do not report the prevailing bids and offers, so I don’t see how it is possible to determine definitively whether a local is supplying or consuming liquidity in a particular trade. The mere fact that a local (CTI1) is trading with a customer (CTI4) does not mean the local is supplying liquidity: he could be hitting the bid/lifting the offer of a customer limit order, but since we can’t see order type, we don’t know. Moreover, even to the extent that there are some bids and offers in the time and sales record, they tend to be incomplete (especially during fast markets) and time sequencing is highly problematic. I just don’t see how it is possible to do an apples-to-apples comparison of liquidity supply (and particularly the passivity/aggressiveness of market makers) between floor and electronic markets just due to the differences in data. Nonetheless, the paper purports to do that. Another reason to see more detailed descriptions of methodology and data.

One red flag indicates that the floor data may have some problems: the reported maximum bid-ask spread in the floor sample is 26.48 percent!!! 26.48 percent? Really? The 75th percentile spread is .47 percent. Given a $60 price, that’s almost 30 ticks. Color me skeptical. Another reason why a much more detailed description of methodologies is essential.

Another technical issue is endogeneity. Liquidity affects volatility, but the paper uses volatility as one of its measures of stressed markets in its study of how stress affects liquidity. This creates an endogeneity (circularity, if you will) problem. It would be preferable to use some instrument for stressed market conditions. Instruments are always hard to come up with, and I don’t have one off the top of my head, but Yadav et al. should give some serious thought to identifying/creating such an instrument.

Moreover, the main claim of the paper is that EMMs’ liquidity supply is more sensitive to the toxicity of order flow than locals’ liquidity supply. The authors use order imbalance (more precisely, the absolute value of CTI4 buys minus CTI4 sells), which is one measure of toxicity, but there are others. I would prefer a measure of customer (CTI4) alpha. Toxic (i.e., informed) order flow predicts future price movements, so when customer orders realize positive alphas, it is likely that customers are more informed than usual. It would therefore be interesting to see the sensitivities of liquidity supply in the different trading environments to order flow toxicity as measured by CTI4 alphas.
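For concreteness, here is a minimal sketch (mine, not the authors’) of the two toxicity measures on a toy trade record; the column names and the alpha horizon are hypothetical:

```python
# Two toxicity measures on a toy CTI-coded trade record.
# All names and numbers here are illustrative assumptions, not the paper's data.
import pandas as pd

trades = pd.DataFrame({
    "cti":   [4, 1, 4, 4, 1, 4],          # 1 = local, 4 = customer
    "side":  [+1, -1, -1, +1, +1, -1],    # +1 = buy, -1 = sell
    "qty":   [10, 10, 5, 20, 20, 5],
    "price": [60.00, 60.00, 59.98, 60.02, 60.02, 60.01],
})
future_price = 60.05  # price at some later horizon, for the alpha proxy

cust = trades[trades["cti"] == 4]

# Measure 1: absolute CTI4 order imbalance (buys minus sells, in contracts).
imbalance = abs((cust["side"] * cust["qty"]).sum())

# Measure 2: a CTI4 alpha proxy -- the signed, quantity-weighted return from
# each customer trade price to the later price; positive values suggest the
# customer flow anticipated the price move, i.e., was informed.
alpha = (cust["side"] * (future_price - cust["price"]) * cust["qty"]).sum() \
        / cust["qty"].sum()

print(imbalance, alpha)
```

Regressing market-maker liquidity supply on an alpha measure like this, rather than on raw imbalance, would get closer to the informedness of the flow that market makers actually fear.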

I will note yet again that market maker actions to cut liquidity supply when adverse selection problems are severe are not necessarily a bad thing. Informed trading can be a form of rent seeking, and if EMMs are better able to detect informed trading and withdraw liquidity when informed trading is rampant, this form of rent seeking may be mitigated. Thus, greater sensitivity to toxicity could be a feature, not a bug.

All that said, I consider this paper a laudable effort that asks serious questions, and attempts to answer them in a rigorous way. The results are interesting and plausible, but the sketchy descriptions of the methodologies give me reservations about them. By far the biggest issue, though, is that of the forest and the trees. What is really interesting is whether electronic markets are more or less liquid than floor markets in different market environments. Even if liquidity supply is flightier in electronic markets, they can still outperform floor-based markets in both unstressed and stressed environments. The huge disparity in spreads reported in the paper suggests a vast difference in liquidity on average, which in turn suggests a vast difference in liquidity across market environments, stressed and unstressed. What we really care about is liquidity outcomes, as measured by spreads, depth, price impact, etc. This is the really interesting issue, but one that the paper does not explore.

But that’s the beauty of academic research, right? Milking the same data for multiple papers. So I suggest that Pradeep, Michel and Vikas keep sitting on that milking stool and keep squeezing that . . . data ;-) Or provide the data to the rest of us out there and let us give it a tug.
