Streetwise Professor

October 24, 2014

Someone Didn’t Get the Memo, and I Wouldn’t Want to be That Guy*

Filed under: Commodities,Derivatives,Economics,Exchanges — The Professor @ 9:23 am

Due to the five year gap in 30 year bond issuance, in mid-September the CME revised the deliverable basket for the June 2015 T-bond contract. It deleted the 6.25 of May 2030 because its delivery value would have been so far below the values of the other bonds in the deliverable set. Leaving it in would have made the contract more susceptible to a squeeze, because that bond alone would effectively have constituted the deliverable supply due to the way the contract works.

The CME issued a memo on the subject.

Somebody obviously didn’t get it:

It looks like a Treasury futures trader failed to do his or her homework.

The price of 30-year Treasury futures expiring in June traded for less than 145 for about two hours yesterday before shooting up to more than 150. The 7.3 percent surge in their price yesterday, on the first day these particular contracts were traded, was unprecedented for 30-year Treasury futures, according to data compiled by Bloomberg. Volume amounted to 1,639 contracts with a notional value of $164 million.

What sets these futures apart from others is they’re the first ones where the U.S. government’s decision to stop issuing 30-year bonds from 2001 to 2006 must be accounted for when valuing the derivatives. The size and speed of yesterday’s jump indicates the initial traders of the contracts hadn’t factored in the unusual rules governing these particular products, said Craig Pirrong, a finance professor at the University of Houston.

“That is humongous,” said Pirrong, referring to the 7.3 percent jump. “We’re talking about a move you might see over weeks or a month occur in a day.” Pirrong said he suspected it was an algorithmic trader using an outdated model. “I would not want to be that guy,” the professor said.

Here’s a quick and dirty explanation. Multiple bonds are eligible for delivery. Since they have different coupons and maturities, their prices can differ substantially. For instance, the 3.5 percent of Feb 39 sells for about $110, and the 6.25 of 2030 sells for about $146. If both bonds were deliverable at par, no one would ever deliver the 6.25 of 2030, and it would not contribute in any real way to deliverable supply. Therefore, the CME effectively handicaps the delivery race by assigning conversion factors to each bond. The conversion factor is essentially the bond’s price on the delivery date assuming it yields 6 percent to maturity. If all bonds yield 6 percent, their delivery values would be equal, and the deliverable supply would be the total amount of deliverable bonds outstanding. This would make it almost impossible to squeeze the market.

Since bonds differ in duration, and actual yields differ from 6 percent, the conversion factors narrow but do not eliminate disparities in the delivery values of bonds. One bond will be cheapest-to-deliver. Roughly speaking, the CTD bond will be the one with the lowest ratio of price to conversion factor.
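For concreteness, here's a back-of-the-envelope sketch of the calculation in Python. It uses annual coupons, round years to maturity, and the approximate market prices quoted above, so the outputs are ballpark illustrations, not the exchange's exact conversion factors (which use semiannual compounding and precise dates):

```python
def bond_price(coupon, years, yld, face=100.0):
    """Present value of a bond paying an annual coupon, discounted at a flat yield."""
    pv = sum(coupon * face / (1 + yld) ** t for t in range(1, years + 1))
    return pv + face / (1 + yld) ** years

# Conversion factor ~ price per $1 of face value assuming a 6 percent yield to maturity.
bonds = {
    "3.5% of Feb 2039":  dict(coupon=0.035,  years=24, market=110.0),
    "6.25% of May 2030": dict(coupon=0.0625, years=15, market=146.0),
}

for name, b in bonds.items():
    cf = bond_price(b["coupon"], b["years"], 0.06) / 100.0
    # Roughly, the CTD is the bond with the lowest ratio of market price to conversion factor.
    print(f"{name}: conversion factor ~{cf:.3f}, price/CF ~${b['market'] / cf:.0f}")
```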

That’s where the problem comes in. For the June contract, if the 6.25 of 2030 were eligible for delivery, its converted price would be around $142. Due to the issuance/maturity gap, the converted prices of all the other bonds are substantially higher, ranging between $154 and $159.

This is due to a duration effect. When yields are below 6 percent, and they are now way below, at less than 3 percent, low duration bonds become CTD: the prices of low duration bonds rise less (in percentage terms) for a given decline in yields than the prices of high duration bonds, so they become relatively cheaper. The 6.25 of 2030 has a substantially lower duration than the other bonds in the deliverable basket because of its shorter maturity (by more than 5 years) and higher coupon. So it would have been cheapest to deliver by a huge margin had CME allowed it to remain in the basket. This would have shrunk the deliverable supply to the amount outstanding of that bond, making a squeeze more likely, and more profitable. (And squeezes in Treasuries do occur. They were rife in the mid-to-late-80s, and there was a squeeze of the Ten Year in June of 2005. The 2005 squeeze, which was pretty gross, occurred when there was less than a $1 difference in delivery values between the CTD and the next-cheapest. The squeezer distorted prices by about 15/32s.)
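To put rough numbers on the duration effect, here's a toy comparison using the same simplified annual-coupon pricing; the coupons and maturities are stand-ins for the 6.25 of 2030 and a long, low-coupon bond in the basket, so the exact percentages are illustrative only:

```python
def bond_price(coupon, years, yld, face=100.0):
    pv = sum(coupon * face / (1 + yld) ** t for t in range(1, years + 1))
    return pv + face / (1 + yld) ** years

for name, coupon, years in [("6.25% of 2030 (low duration)",  0.0625, 15),
                            ("3.5% of 2039 (high duration)",  0.035,  24)]:
    p6, p3 = bond_price(coupon, years, 0.06), bond_price(coupon, years, 0.03)
    # The low-duration bond gains far less as yields fall from 6% toward 3%,
    # so it looks cheap relative to conversion factors computed at a 6% yield.
    print(f"{name}: +{100 * (p3 / p6 - 1):.0f}% price gain as yields fall from 6% to 3%")
```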

The futures contract prices the CTD bond. So if someone-or someone’s algo-believed that the 6.25 of 2030 was in the deliverable basket, they would have calculated the no-arb price as being around $142. But that bond isn’t in the basket, so the no-arb value of the contract is above $150. Apparently the guy* who didn’t get the memo merrily offered the June future at $142 in the mistaken belief that was near fair value.

Ruh-roh.

After selling quite a few contracts, the memo non-reader wised up, and the price jumped up to over $150, which reflected the real deliverable basket, not the imaginary one.

This price move was “humongous” given that implied vol is around 6 percent. That’s an annualized number, meaning that the move on a single day was more than a one-sigma annual move. I was being very cautious by saying this magnitude move would be expected to occur over weeks or months. But that’s what happens when the reporter catches me in the gym rather than at my computer.
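The back-of-the-envelope arithmetic, assuming a 6 percent annualized implied vol and 252 trading days:

```python
import math

annual_vol = 0.06                        # ~6% annualized implied vol (assumption)
daily_vol = annual_vol / math.sqrt(252)  # ~0.38% one-day standard deviation
move = 0.073                             # the 7.3% one-day move

print(f"one-day sigma ~{daily_vol:.2%}; the move was ~{move / daily_vol:.0f} daily sigmas")
print(f"and ~{move / annual_vol:.1f}x a one-sigma annual move")
```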

This wasn’t a fat-finger error. This was a fat-head error. It cost somebody a good deal of money, and made some others very happy.

So word up, traders (and programmers): always read the memos from your friendly local exchange.

*Or gal, as Mary Childs pointed out on Twitter.


October 13, 2014

You Might Have Read This Somewhere Before. Like Here.

The FT has a long article by John Dizard raising alarms about the systemic risks posed by CCPs. The solution, in other words, might be the problem.

Where have I read that before?

The article focuses on a couple of regulatory reports that have also raised the alarm:

No, I am referring to reports filed by the wiring and plumbing inspectors of the CCPs. For example, the International Organization for Securities Commissions (a name that could only be made duller by inserting the word “Canada”) issued a report this month on the “Securities Markets Risk Outlook 2014-2015”. I am not going to attempt to achieve the poetic effect of the volume read as a whole, so I will skip ahead to page 85 to the section on margin calls.

Talking (again) about the last crisis, the authors recount: “When the crisis materialised in 2008, deleveraging occurred, leading to a pro-cyclical margin spiral (see figure 99). Margin requirements also have the potential to cause pro-cyclical effects in the cleared markets.” The next page shows figure 99, an intriguing cartoon of a margin spiral, with haircuts leading to more haircuts leading to “liquidate position”, “further downward pressure” and “loss on open positions”. In short, do not read it to the children before bedtime.

This margin issue is exactly what I’ve been on about for six years now. Good that regulators are finally waking up to it, though it’s a little late in the day, isn’t it?
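To illustrate the spiral mechanism the IOSCO authors are describing, here is a toy sketch (my own, with made-up parameters, not anyone's actual margin model):

```python
# A toy sketch of the margin-spiral feedback loop: margin scales with a trailing
# volatility estimate, a price shock raises that estimate, the higher requirement
# forces a partial liquidation, and the forced sale's price impact and the extra
# volatility feed back into the next round. Every parameter is an arbitrary
# assumption chosen only to make the loop visible.

position = 1000.0        # contracts held
price = 100.0
cash = 10000.0           # cash available to meet margin (hypothetical)
vol = 0.01               # trailing daily volatility estimate (as a fraction)
MARGIN_MULT = 5.0        # per-contract margin = MARGIN_MULT * vol * price
IMPACT = 1e-5            # fractional price impact per contract liquidated

price *= 0.97            # the initial shock: a 3 percent price drop
vol = 0.03               # the volatility estimate jumps with the shock

for rnd in range(1, 6):
    margin_req = MARGIN_MULT * vol * price * position
    if margin_req <= cash:
        break
    # Liquidate just enough to bring the requirement back within available cash...
    target = cash / (MARGIN_MULT * vol * price)
    sold = position - target
    position = target
    # ...but the fire sale depresses the price and pushes measured volatility higher.
    price *= (1 - IMPACT * sold)
    vol *= 1.1
    print(f"round {rnd}: sold {sold:.0f} contracts, price {price:.2f}, vol estimate {vol:.3f}")
```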

I chuckle at the children before bedtime line. I often say that I should give my presentations on the systemic risk of CCPs while sitting by a campfire holding a flashlight under my chin.

I don’t chuckle at the fact that other regulators seem rather oblivious to the dangers inherent in what they’ve created:

While supervisory institutions such as the Financial Stability Oversight Council are trying to fit boring old life insurers into their “systemic” regulatory frameworks, they seem to be ignoring the degree to which the much-expanded clearing houses are a threat, not a solution. Much attention has been paid, publicly, to how banks that become insolvent in the future will have their shareholders and creditors bailed in to the losses, their managements dismissed and their corporate forms put into liquidation. But what about the clearing houses? What happens to them when one or more of their participants fail?

I call myself the Clearing Cassandra precisely because I have been prophesying so for years, but the FSOC and others have largely ignored such concerns.

Dizard starts out his piece quoting Dallas Fed President Richard Fisher comparing macroprudential regulation to the Maginot Line. Dizard notes that others have made similar Maginot Line comparisons post-crisis, and says that this is unfair to the Maginot Line because it was never breached: the Germans went around it.

I am one person who has made this comparison specifically in the context of CCPs, most recently at Camp Alphaville in July. But my point was exactly that the creation of impregnable CCPs would result in the diversion of stresses to other parts of the financial system, just like the Maginot line diverted the Germans into the Ardennes, where French defenses were far more brittle. In particular, CCPs are intended to eliminate credit risk, but they do so by creating tremendous demands for liquidity, especially during crisis times. Since liquidity risk is, in my view, far more dangerous than credit risk, this is not obviously a good trade off. The main question becomes: during the next crisis, where will the financial Sedan be?

I take some grim satisfaction that arguments that I have made for years are becoming conventional wisdom, or at least widespread among those who haven’t imbibed the Clearing Kool Aid. Would that this had happened before legislators and regulators around the world embarked on the vastest re-engineering of world financial markets ever attempted, and did so with their eyes wide shut.


October 7, 2014

Manipulation Prosecutions: Going for the Capillaries, Ignoring the Jugular

Filed under: Commodities,Derivatives,Economics,Energy,Exchanges,Politics,Regulation — The Professor @ 7:32 pm

The USDOJ has filed criminal charges against a trader named Michael Coscia for “spoofing” CME and ICE futures markets. Frankendodd made spoofing a crime.

What is spoofing? It’s the futures market equivalent of Lucy and the football. A trader submits buy (sell) orders just below (above) the inside market in the hope that this convinces other market participants that there is strong demand for (supply of) the futures contract. If others are so fooled, they will raise their bids (lower their offers). Right before they do this, the spoofer pulls his orders, just like Lucy pulls the football away from Charlie Brown, and then hits (lifts) the higher (lower) bids (offers). If the pre-spoof prices are “right”, the post-spoof bids (offers) are too high (too low), which means the spoofer sells high and buys low.
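A stripped-down sketch of that sequence of events (with hypothetical prices, sizes, and responder rule):

```python
# The prices, sizes, and the naive responder's rule below are all hypothetical,
# chosen only to make the sequence of events concrete.

best_bid, best_ask = 99.99, 100.01        # the inside market before the spoof

# Step 1: the spoofer layers large buy orders just below the best bid.
spoof_bids = [(99.98, 500), (99.97, 500), (99.96, 500)]

# Step 2: a naive responder reads the heavy visible buy-side depth as genuine
# demand and raises its bid by a tick.
visible_buy_depth = sum(qty for _, qty in spoof_bids)
if visible_buy_depth > 1000:              # hypothetical responder rule
    best_bid += 0.01                      # the bid is now 100.00

# Step 3: the spoofer cancels the layered orders and hits the improved bid.
spoof_bids.clear()
sale_price = best_bid

# If the pre-spoof price was "right," the spoofer has sold a tick too high.
print(f"spoofer sells at {sale_price:.2f}, one tick above the pre-spoof bid of 99.99")
```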

Is this inefficient? Yeah, I guess. Is it a big deal? Color me skeptical, especially since the activity is self-correcting. The strategy works if those at the inside market, who these days are likely to be HFT firms, consider the away from the market spoofing orders to be informative. But they aren’t. The HFT firms at the inside market who respond to the spoof will lose money. They will soon figure this out, and won’t respond to the spoofs any more: they will deem away-from-the-market orders as uninformative. Problem solved.

But the CFTC (and now DOJ, apparently) are obsessed with this, and other games for ticks. They pursue these activities with Javert-like mania.

What makes this maddening to me is that while obsessing over ticks gained by spoofs or other HFT strategies, regulators have totally overlooked corners that have distorted prices by many, many ticks.

I know of two market operations in the last ten years plausibly involving major corners that have arguably imposed mid-nine figure losses on futures market participants, and in one of the cases, possibly ten-figure losses. Yes, we are talking hundreds of millions and perhaps more than a billion. To put things in context, Coscia is alleged to have made a whopping $1.6 million. That is, two or three orders of magnitude less than the losses involved in these corners.

And what have CFTC and DOJ done in these cases? Exactly bupkus. Zip. Nada. Squat.

Why is that? Part of the explanation is that previous CFTC decisions in the 1980s were economically incoherent, and have posed substantial obstacles to winning a verdict: I wrote about this almost 20 years ago, in a Washington & Lee Law Review article. But I doubt that is the entire story, especially since one of the cases is post-Frankendodd, and hence one of the legal obstacles that the CFTC complains about (relating to proving intent) has been eliminated.

The other part of the story is too big to jail. Both of the entities involved are very major players in their respective markets. Very major. One has been very much in the news lately.

In other words, the CFTC is likely intimidated by-and arguably captured by-those it is intended to police because they are very major players.

The only recent exception I can think of-and by recent, I mean within the last 10 years-is the DOJ’s prosecution of BP for manipulating the propane market. But BP was already in the DOJ’s sights because of the Texas City explosion. Somebody dropped the dime on BP for propane, and DOJ used that to turn up the heat on BP. BP eventually agreed to a deferred prosecution agreement, in which it paid a $100 million fine to the government, and paid $53 million into a restitution fund to compensate any private litigants.

The Commodity Exchange Act specifically proscribes corners. Corners occur. But the CFTC never goes after corners, even if they cost market participants hundreds of millions of dollars. Probably because corners that cost market participants nine or ten figures can only be carried out by firms that can hire very expensive lawyers and who have multiple congressmen and senators on speed dial.

Instead, the regulators go after much smaller fry so they can crow about how tough they are on wrongdoers. They go after shoplifters, and let axe murderers walk free. Going for the capillaries, ignoring the jugular.

All this said, I am not a fan of criminalizing manipulation. Monetary fines-or damages in private litigation-commensurate to the harm imposed will have the appropriate deterrent effect.

The timidity of regulators in going after manipulators is precisely why a private right of action in manipulation cases is extremely important. (Full disclosure: I have served as an expert in such cases.)

One last comment about criminal charges in manipulation cases. The DOJ prosecuted the individual traders in the propane corner. Judge Miller in the Houston Division of the  Southern District of Texas threw out the cases, on the grounds that the Commodity Exchange Act’s anti-manipulation provisions are unconstitutionally vague. Now this is only a district court decision, and the anti-spoofing case will be brought under new provisions of the CEA adopted as the result of Dodd-Frank. Nonetheless, I think it is highly likely that Coscia will raise the same defense (as well as some others). It will be interesting to see how this plays out.

But regardless of how it plays out, regulators’ obsession with HFT games stands in stark contrast with their conspicuous silence on major corner cases. Given that corners can cause major dislocations in markets, and completely undermine the purposes of futures markets-risk transfer and price discovery-this imbalance speaks very ill of the priorities-and the gumption (I cleaned that up)-of those charged with policing US futures markets.


September 10, 2014

SEFs: The Damn Dogs Won’t Eat It!

Filed under: Derivatives,Economics,Exchanges,Politics,Regulation — The Professor @ 8:37 pm

There’s an old joke about a pet food manufacturer that mounts an all out marketing campaign for its new brand of dog food. It pulls out all the stops. Celebrity endorsements. Super Bowl Ad. You name it. But sales tank. The CEO calls the head of marketing onto the carpet and demands an explanation for the appalling sales. The marketing guy  answers: “It’s those damn dogs. They just won’t eat the stuff.”

That joke came to mind when reading about the CFTC’s frustration at the failure of SEFs to get traction. Most market  participants avoid using central limit order books (CLOBs), and prefer to trade by voice or Requests for Quotes (RFQs):

“The biggest surprise for me is the lack of interest from the buyside for [central limit order books or CLOB],” Michael O’Brien, director of global trading at Eaton Vance, said at the International Swaps and Derivatives Association conference in New York. “The best way to break up the dual market structure and boost transparency is through using a CLOB and I’m surprised at how slow progress has been.”

About two dozen Sefs have been established in the past year, but already some of these venues are struggling to register a presence. Instead, incumbent market players who have always dominated the swaps market are winning under the new regulatory regime, with the bulk of trading being done through Bloomberg, Tradeweb and interdealer brokers including Icap, BGC and Tradition.

“It’s still very early,” Mr Massad told the FT. “The fact that we’re getting a decent volume of trading is encouraging but we are also looking at various issues to see how we can facilitate more trading and transparency.”

Regulators are less concerned about having a specific numbers of Sefs since the market is still sorting out which firms can serve their clients the best under the new regulatory system. What officials are watching closely is the continued use of RFQ systems rather than the transparent central order booking structure.

Not to say I told you so, but I told you so. I knew the dogs, and I knew they wouldn’t like the food.

This is why I labeled the SEF mandate as The Worst of Dodd Frank. It was a solution in search of a non-existent problem. It took a one-size-fits-all approach, predicated on the view that centralized order driven markets are the best way to execute all transactions. It obsessed on pre-trade and post-trade price transparency, and totally overlooked the importance of counterparty transparency.

There is a diversity of trading mechanisms in virtually every financial market. Some types of trades and traders are economically executed in anonymous, centralized auction markets with pre- and post-trade price transparency. Other types of trades and traders-namely, big wholesale trades involving those trading to hedge or to rebalance portfolios, rather than to take advantage of information advantages-are most efficiently negotiated and executed face-to-face, with little (or delayed) post-trade price disclosure. This is why upstairs block markets always existed in stocks, and why dark pools exist now. It is one reason why OTC derivatives markets operated side-by-side with futures markets offering similar products.

As I noted at the time, sophisticated buy siders in derivatives markets had the opportunity to trade in futures markets but chose to trade OTC. Moreover, the buy side was very resistant to the SEF mandate despite the fact that they were the supposed beneficiaries of a more transparent (in some dimensions!) and more competitive (allegedly) trading mechanism. The Frankendodd crowd argued that SEFs would break a cabal of dealers that exploited their customers and profited from the opacity of the market.

But the customers weren’t buying it. So you had to believe that either they knew what they were talking about, or were the victims of Stockholm Syndrome leaping to the defense of the dealers that held them captive.

My mantra was a diversity of mechanisms for a diversity of trades and traders.  Frankendodd attempts to create a monoculture and impose a standardized market structure for all participants. It says to the buy side: here’s your dinner, and you’ll like it, dammit! It’s good for you!

But the buy side knows what it likes, and is pushing away the bowl.


July 25, 2014

Benchmark Blues

Pricing benchmarks have been one of the casualties of the financial crisis. Not because the benchmarks-like Libor, Platts’ Brent window, ISDA Fix, the Reuters FX window or the gold fix-contributed in any material way to the crisis. Instead, the post-crisis scrutiny of the financial sector turned over a lot of rocks, and among the vermin crawling underneath were abuses of benchmarks.

Every major benchmark has fallen under deep suspicion, and has been the subject of regulatory action or class action lawsuits. Generalizations are difficult because every benchmark has its own problems. It is sort of like what Tolstoy said about unhappy families: every flawed benchmark is flawed in its own way. Some, like Libor, are vulnerable to abuse because they are constructed from the estimates/reports of interested parties. Others, like the precious metals fixes, are problematic due to a lack of transparency and limited participation. Declining production and large parcel sizes bedevil Brent.

But some basic conclusions can be drawn.

First-and this should have been apparent in the immediate aftermath of the natural gas price reporting scandals of the early-2000s-benchmarks based on the reports of self-interested parties, rather than actual transactions, are fundamentally flawed. In my energy derivatives class I tell the story of AEP, which the government discovered kept a file called “Bogus IFERC.xls” (IFERC being an abbreviation for Inside Ferc, the main price reporting publication for gas and electricity) that included thousands of fake transactions that the utility reported to Platts.

Second, and somewhat depressingly, although benchmarks based on actual transactions are preferable to those based on reports, in many markets the number of transactions is small. Even if transactors do not attempt to manipulate, the limited number of transactions tends to inject some noise into the benchmark value. What’s more, benchmarks based on a small number of transactions can be influenced by a single trade or a small number of trades, thereby creating the potential for manipulation.

I refer to this as the bricks without straw problem. Just like the Jews in Egypt were confounded by Pharaoh’s command to make bricks without straw, modern market participants are stymied in their attempts to create benchmarks without trades. This is a major problem in some big markets, notably Libor (where there are few interbank unsecured loans) and Brent (where large parcel sizes and declining Brent production mean that there are relatively few trades: Platts has attempted to address this problem by expanding the eligible cargoes to include Ekofisk, Oseberg, and Forties, and by making some baroque adjustments based on CFD and spread trades and monthly forward trades). This problem is not amenable to an easy fix.

Third, and perhaps even more depressingly, even transaction-based benchmarks derived from markets with a decent amount of trading activity are vulnerable to manipulation, and the incentive to manipulate is strong. Some changes can be made to mitigate these problems, but they can’t be eliminated through benchmark design alone. Some deterrence mechanism is necessary.

The precious metals fixes provide a good example of this. The silver and gold fixes have historically been based on transactions prices from an auction that Walras would recognize. But participation was limited, and some participants had the market power and the incentive to use it, and have evidently pushed prices to benefit related positions. For instance, in the recent allegation against Barclays, the bank could trade in sufficient volume to move the fix price sufficiently to benefit related positions in digital options. When there is a large enough amount of derivatives positions with payoffs tied to a benchmark, someone has the incentive to manipulate that benchmark, and many have the market power to carry out those manipulations.
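A stylized bit of arithmetic shows why the incentive exists. The numbers below are invented for illustration and are not the facts of the Barclays case; the point is only that a fixed digital payoff can dwarf the trading loss required to move the fix:

```python
digital_payoff = 1_000_000     # $ received if the fix prints below the barrier (assumption)
required_push = 2.00           # $/oz the fix must be moved to breach the barrier (assumption)
impact_per_oz = 0.0001         # $/oz of price impact per ounce sold into the fix (assumption)

oz_to_sell = required_push / impact_per_oz            # 20,000 oz
# Selling into a price you are pushing down: the average sale is ~half the push below fair value.
trading_loss = 0.5 * required_push * oz_to_sell       # ~$20,000

print(f"sell ~{oz_to_sell:,.0f} oz, trading loss ~${trading_loss:,.0f}, "
      f"net gain ~${digital_payoff - trading_loss:,.0f}")
```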

The problems with the precious metals fixes have led to their redesign: a new silver fix method has been established and will go into effect next month, and the gold fix will be modified, probably along similar lines. The silver fix will replace the old telephone auction that operated via a few members trading on their own account and representing customer orders with a more transparent electronic auction operated by CME and Reuters. This will address some of the problems with the old fix. In particular, it will reduce the information advantage that the fixing dealers had that allowed them to trade profitably on other markets (e.g., gold futures and OTC forwards and options) based on the order flow information they could observe during the auction. Now everyone will be able to observe the auction via a screen, and will be less vulnerable to being picked off in other markets. It is unlikely, however, that the new mechanism will mitigate the market power problem. Big trades will move markets in the new auction, and firms with positions that have payoffs that depend on the auction price may have an incentive to make those big trades to advantage those positions.

Along these lines, it is important to note that many liquid and deep futures markets have been plagued by “bang the close” problems. For instance, Amaranth traded large volumes in the settlement period of expiring natural gas futures during three months of 2006 in order to move prices in ways that benefited its OTC swaps positions. The CFTC recently settled with the trading firm Optiver that allegedly banged the close in crude, gasoline, and heating oil in March, 2007. These are all liquid and deep markets, but are still vulnerable to “bullying” (as one Optiver trader characterized it) by large traders.

The incentives to cause an artificial price for any major benchmark will always exist, because one of the main purposes of benchmarks is to provide a mechanism for determining cash flows for derivatives. The benchmark-derivatives market situation resembles an inverted pyramid, with large amounts of cash flows from derivatives trades resting on a relatively small number of spot transactions used to set the benchmark value.

One way to try to ameliorate this problem is to expand the number of transactions at the point of the pyramid by expanding the window of time over which transactions are collected for the purpose of calculating the benchmark value: this has been suggested for the Platts Brent market, and for the FX fix. A couple of remarks. First, although this would tend to mitigate market power, it may not be sufficient to eliminate the problem: Amaranth manipulated a price that was based on a VWAP over a relatively long 30 minute interval. In contrast, in the Moore case (a manipulation case involving platinum and palladium brought by the CFTC) and Optiver, the windows were only two minutes long. Second, there are some disadvantages of widening the window. Some market participants prefer a benchmark that reflects a snapshot of the market at a point in time, rather than an average over a period of time. This is why Platts vociferously resists calls to extend the duration of its pricing window. There is a tradeoff in sources of noise. A short window is more affected by the larger sampling error inherent in the smaller number of transactions that occur in a shorter interval, and by the noise resulting from greater susceptibility to manipulation when a benchmark is based on a smaller number of trades. However, an average taken over a time interval is a noisy estimate of the price at any point of time during that interval due to the random fluctuations in the “true” price driven by information flow. I’ve done some numerical experiments, and either the sampling error/manipulation noise has to be pretty large, or the volatility of the “true” price must be pretty low for it to be desirable to move to a longer interval.
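For the curious, here is a toy version of that kind of numerical experiment. It is my own sketch with arbitrary parameters, not the simulations referred to above: the "true" price follows a random walk, each transaction price adds sampling/manipulation noise, and the benchmark is the average over the window:

```python
import random, statistics

def benchmark_error(window, trade_noise, true_vol, n_steps=390, n_paths=1000):
    """Std dev of (window-average benchmark) minus (true end-of-window price)."""
    errors = []
    for _ in range(n_paths):
        price, path = 100.0, []
        for _ in range(n_steps):
            price += random.gauss(0, true_vol)
            path.append(price)
        # One noisy transaction price per minute in the window, then averaged.
        obs = [p + random.gauss(0, trade_noise) for p in path[-window:]]
        errors.append(statistics.mean(obs) - path[-1])
    return statistics.pstdev(errors)

# Large trade-level noise / quiet true price: the longer window wins.
# Small trade-level noise / volatile true price: the short window wins.
for trade_noise, true_vol in [(0.10, 0.005), (0.02, 0.03)]:
    for window in (2, 30):
        err = benchmark_error(window, trade_noise, true_vol)
        print(f"noise={trade_noise:.2f}, vol={true_vol:.3f}, {window:>2}-min window: error std ~{err:.3f}")
```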

Other suggestions include encouraging diversity in benchmarks. The other FSB-the Financial Stability Board-recommends this. Darrell Duffie and Jeremy Stein lay out the case here (which is a much easier read than the 750+ pages of the longer FSB report).

Color me skeptical. Duffie and Stein recognize that the market has a tendency to concentrate on a single benchmark. It is easier to get into and out of positions in a contract which is similar to what everyone else is trading. This leads to what Duffie and Stein call “the agglomeration effect,” which I would refer to as a “tipping” effect: the market tends to tip to a single benchmark. This is what happened in Libor. Diversity is therefore unlikely in equilibrium, and the benchmark that survives is likely to be susceptible to either manipulation, or the bricks without straw problem.

Of course not all potential benchmarks are equally susceptible. So it would be good if market participants coordinated on the best of the possible alternatives. As Duffie and Stein note, there is no guarantee that this will be the case. This brings to mind the as yet unresolved debate over standard setting generally, in which some argue that the market’s choice of VHS over the allegedly superior Betamax technology, or the dominance of QWERTY over the purportedly better Dvorak keyboard (or Word vs. Word Perfect), demonstrates that the selection of a standard by a market process routinely results in a suboptimal outcome, but where others (notably Stan Liebowitz and Stephen Margolis) argue that these stories of market failure are fairy tales that do not comport with the actual histories. So the relevance of the “bad standard (benchmark) market failure” is very much an open question.

Darrell and Jeremy suggest that a wise government can make things better:

This is where national policy makers come in. By speaking publicly about the advantages of reform — or, if necessary, by using their power to regulate — they can guide markets in the desired direction. In financial benchmarks as in tap water, markets might not reach the best solution on their own.

Putting aside whether government regulators are indeed so wise in their judgments, there is  the issue of how “better” is measured. Put differently: governments may desire a different direction than market participants.

Take one of the suggestions that Duffie and Stein raise as an alternative to Libor: short term Treasuries. It is almost certainly true that there is more straw in the Treasury markets than in any other rates market. Thus, a Treasury bill-based benchmark is likely to be less susceptible to manipulation than any other market. (Though not immune altogether, as the Pimco episode in June ’05 10 Year T-notes, the squeezes in the long bond in the mid-to-late-80s, the Salomon 2 year squeeze in 92, and the chronic specialness in some Treasury issues prove.)

But that’s not of much help if the non-manipulated benchmark is not representative of the rates that market participants want to hedge. Indeed, when swap markets started in the mid-80s, many contracts used Treasury rates to set the floating leg. But the basis between Treasury rates, and the rates at which banks borrowed and lent, was fairly variable. So a Treasury-based swap contract had more basis risk than Libor-based contracts. This is precisely why the market moved to Libor, and when the tipping process was done, Libor was the dominant benchmark not just for derivatives but floating rate loans, mortgages, etc.

Thus, there may be a trade-off between basis risk and susceptibility to manipulation (or to noise arising from sampling error due to a small number of transactions or averaging over a wide time window). Manipulation can lead to basis risk, but it can be smaller than the basis risk arising from a quality mismatch (e.g., a credit risk mismatch between default risk-free Treasury rates and a defaultable rate that private borrowers pay). I would wager that regulators would prefer a standard that is less subject to manipulation, even if it has more basis risk, because they don’t internalize the costs associated with basis risk. Market participants may have a very different opinion. Therefore, the “desired direction” may depend very much on whom you ask.

Putting all this together, I conclude we live in a fallen world. There is no benchmark Eden. Benchmark problems are likely to be chronic for the foreseeable future. And beyond. Some improvements are definitely possible, but benchmarks will always be subject to abuse. Their very source of utility-that they are a visible price that can be used to determine payoffs on vast sums of other contracts-always provides a temptation to manipulate.

Moving to transactions-based mechanisms eliminates outright lying as a manipulation strategy, but it does not eliminate the potential for market power abuses. The benchmarks that would be least vulnerable to market power abuses are not necessarily the ones that best reflect the exposures that market participants face.

Thus, we cannot depend on benchmark design alone to address manipulation problems. The means, motive, and opportunity to manipulate even transactions-based benchmarks will endure. This means that reducing the frequency of manipulation requires some sort of deterrence mechanism, either through government action (as in the Libor, Optiver, Moore, and Amaranth cases) or private litigation (examples of which include all the aforementioned cases, plus some more, like Brent).  It will not be possible to “solve” the benchmark problems by designing better mechanisms, then riding off into the sunset like the Lone Ranger. Our work here will never be done, Kimo Sabe.*

* Stream of consciousness/biographical detail of the day. The phrase “Kimo Sabe” was immortalized by Jay Silverheels-Tonto in the original Lone Ranger TV series. My GGGGF, Abel Sherman, was slain and scalped by an Indian warrior named Silverheels during the Indian War in Ohio in 1794. Silverheels made the mistake of bragging about his feat to a group of lumbermen, who just happened to include Abel’s son. Silverheels was found dead on a trail in the woods the next day, shot through the heart. Abel (a Revolutionary War vet) was reputedly the last white man slain by Indians in Washington County, OH. His tombstone is on display in the Campus Martius museum in Marietta. The carving on the headstone is very un-PC. It reads:

Here lyes the body of Abel Sherman who fell by the hand of the Savage on the 15th of August 1794, and in the 50th year of  his age.

Here’s a picture of it:

[Photo: Abel Sherman’s tombstone]

The stream by which Abel was killed is still known as Dead Run, or Dead Man’s Run.


July 21, 2014

Doing Due Diligence in the Dark

Filed under: Exchanges,HFT,Regulation — The Professor @ 8:39 pm

Scott Patterson, WSJ reporter and the author of Dark Pools, has a piece in today’s Journal about the Barclays LX story. He finds, lo and behold, that several users of the pool had determined that they were getting poor executions:

Trading firms and employees raised concerns about high-speed traders at Barclays PLC’s dark pool months before the New York attorney general alleged in June that the firm lied to clients about the extent of predatory trading activity on the electronic trading venue, according to people familiar with the firms.

Some big trading outfits noticed their orders weren’t getting the best treatment on the dark pool, said people familiar with the trading. The firms began to grow concerned that the poor results resulted from high-frequency trading, the people said.

In response, at least two firms—RBC Capital Markets and T. Rowe Price Group Inc —boosted the minimum number of shares they would trade on the dark pool, letting them dodge high-speed traders, who often trade in small chunks of 100 or 200 shares, the people said.

This relates directly to a point that I made in my post on the Barclays story. Trading is an experience good. Dark pool customers can evaluate the quality of their executions. If a pool is not screening out opportunistic traders, execution costs will be high relative to other venues that do a better job of screening, and users who monitor their execution costs will detect this. Regardless of what a dark pool operator says about what it is doing, the proof of the pudding is in the trading, as it were.
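What does that evaluation look like in practice? A minimal sketch of the sort of post-trade metrics a desk might compute from its own fills; the field names and numbers are hypothetical, and real transaction-cost analysis is far more elaborate:

```python
fills = [
    # (side, fill_price, mid_at_fill, mid_5s_later) -- hypothetical sample data
    ("buy",  20.012, 20.010, 20.016),
    ("sell", 20.005, 20.007, 20.001),
    ("buy",  20.020, 20.018, 20.030),
]

def effective_spread(side, px, mid):
    """Twice the signed distance from the midpoint: what the fill cost relative to mid."""
    sign = 1 if side == "buy" else -1
    return 2 * sign * (px - mid)

def markout(side, px, later_mid):
    """Signed post-trade drift: systematically negative values mean your fills are
    being followed by adverse price moves, i.e., you are trading against informed flow."""
    sign = 1 if side == "buy" else -1
    return sign * (later_mid - px)

n = len(fills)
avg_spread = sum(effective_spread(s, p, m) for s, p, m, _ in fills) / n
avg_markout = sum(markout(s, p, lm) for s, p, _, lm in fills) / n
print(f"avg effective spread: {avg_spread:.4f}, avg 5-second markout: {avg_markout:.4f}")
```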

The Patterson article shows that at least some buy side firms do the necessary analysis, and can detect a pool that does not exclude toxic flows.

This long FT piece relies extensively on quotes from Hirander Misra, one of the founders of Chi-X, to argue that many fund managers have been ignorant of the quality of executions they get on dark pools. The article talked to two anonymous fund managers who say they don’t know how dark pools work.

The stated implication here is that regulation is needed to protect the buy side from unscrupulous pool operators.

A couple of comments. First, not knowing how a pool works doesn’t really matter. Measures of execution quality are what matter, and these can be measured. I don’t know all of the technical details of the operation of my car or the computer I am using, but I can evaluate their performances, and that’s what matters.

Second, this is really a cost-benefit issue. Monitoring of performance is costly. But so is regulation and litigation. Given that market participants have the biggest stake in measuring pool performance properly, and can develop more sophisticated metrics, there are strong arguments in favor of relying on monitoring.  Regulators can, perhaps, see whether a dark pool does what it advertises it will do, but this is often irrelevant because it does not necessarily correspond closely to pool execution costs, which is what really matters.

Interestingly, one of the things that got a major dark pool (Liquidnet) in trouble was that it shared information about the identities of existing clients with prospective clients. This presents interesting issues. Sharing such information could economize on monitoring costs. If a big firm (like a T. Rowe) trades in a pool, this can signal to other potential users that the pool does a good job of screening out the opportunistic. This allows them to free ride off the monitoring efforts of the big firm, which economizes on monitoring costs.

Another illustration of how things are never simple and straightforward when analyzing market structure.

One last point. Some of the commentary I’ve read recently uses the prevalence of HFT volume in a dark pool as a proxy for how much opportunistic trading goes on in the pool. This is a very dangerous shortcut, because as I (and others) have written repeatedly, there is all different kinds of HFT. Some adds to liquidity, some consumes it, and some may be outright toxic/predatory. Market-making HFT can enhance dark pool liquidity, which is probably why dark pools encourage HFT participation. Indeed, it is hard to understand how a pool could benefit from encouraging the participation of predatory HFT, especially if it lets such firms trade for free. This drives away the paying customers, particularly when the paying customers evaluate the quality of their executions.

Evaluating execution quality and cost could be considered a form of institutional trader due diligence. Firms that do so can protect themselves-and their investor-clients-from opportunistic counterparties. Even though the executions are done in the dark, it is possible to shine a light on the results. The WSJ piece shows that many firms do just that. The question of whether additional regulation is needed boils down to the question of whether the cost and efficacy of these self-help efforts is superior to that of regulation.


July 15, 2014

Oil Futures Trading In Troubled Waters

Filed under: Commodities,Derivatives,Economics,Energy,Exchanges,HFT,Regulation — The Professor @ 7:16 pm

A recent working paper by Pradeep Yadav, Michel Robe and Vikas Raman tackles a very interesting issue: do electronic market makers (EMMs, typically HFT firms) supply liquidity differently than locals did on the floor during its heyday? The paper has attracted a good deal of attention, including this article in Bloomberg.

The most important finding is that EMMs in crude oil futures do tend to reduce liquidity supply during high volatility/stressed periods, whereas crude futures floor locals did not. They explain this by invoking an argument I made 20 years ago in my research comparing the liquidity of floor-based LIFFE to the electronic DTB: the anonymity of electronic markets makes market makers there more vulnerable to adverse selection. From this, the authors conclude that an obligation to supply liquidity may be desirable.

These empirical conclusions seem supported by the data, although as I describe below the scant description of the methodology and some reservations based on my knowledge of the data make me somewhat circumspect in my evaluation.

But my biggest problem with the paper is that it seems to miss the forest for the trees. The really interesting question is whether electronic markets are more liquid than floor markets, and whether the relative liquidity in electronic and floor markets varies between stressed and non-stressed markets. The paper provides some intriguing results that speak to that question, but then the authors ignore it altogether.

Specifically, Table 1 has data on spreads from the electronic NYMEX crude oil market in 2011, and from the floor-based NYMEX crude oil market in 2006. The mean and median spreads in the electronic market: .01 percent. Given a roughly $100 price, this corresponds to one tick ($.01) in the crude oil market. The mean and median spreads in the floor market: .35 percent and .25 percent, respectively.

Think about that for a minute. Conservatively, spreads were 25 times higher in the floor market. Even adjusting for the fact that prices in 2011 were almost double those in 2006, we're talking about a 12-fold difference in absolute (rather than percentage) spreads. That is just huge.
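The arithmetic, using the paper's percentage spreads and taking the 2006 price to be roughly half the 2011 level:

```python
pct_2011, px_2011 = 0.0001, 100.0   # electronic market: 0.01% spread at ~$100 crude
pct_2006, px_2006 = 0.0025, 50.0    # floor market: 0.25% median spread, price ~half the 2011 level

abs_2011 = pct_2011 * px_2011       # ~$0.01, i.e. one tick
abs_2006 = pct_2006 * px_2006       # ~$0.125
print(f"absolute spreads: ${abs_2011:.3f} vs ${abs_2006:.3f} (~{abs_2006 / abs_2011:.0f}x wider on the floor)")
```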

So even if EMMs are more likely to run away during stressed market conditions, the electronic market wins hands down in the liquidity race on average. Hell, it’s not even a race. Indeed, the difference is so large I have a hard time believing it, which raises questions about the data and methodologies.

This raises another issue with the paper. The paper compares the liquidity supply mechanisms in electronic and floor markets. Specifically, it examines the behavior of market makers in the two different types of markets. What we are really interested in is the outcome of these mechanisms. Therefore, given the rich data set, the authors should compare measures of liquidity in stressed and non-stressed periods, and make comparisons between the electronic and floor markets. What's more, they should examine a variety of different liquidity measures. There are multiple measures of spreads, some of which specifically measure adverse selection costs. It would be very illuminating to see those measures across trading mechanisms and market environments. Moreover, depth and price impact are also relevant. Let's see those comparisons too.

It is quite possible that the ratio of liquidity measures in good and bad times is worse in electronic trading than on the floor, but in any given environment, the electronic market is more liquid. That’s what we really want to know about, but the paper is utterly silent on this. I find that puzzling and rather aggravating, actually.

Insofar as the policy recommendation is concerned, as I've been writing since at least 2010, the fact that market makers withdraw supply during periods of market stress does not necessarily imply that imposing obligations to make markets even during stressed periods is efficiency enhancing. Such obligations force market makers to incur losses when the constraints bind. Since entry into market making is relatively free, and the market is likely to be competitive (the paper states that there are 52 active EMMs in the sample), raising costs in some states of the world, and reducing returns to market making in those states, will lead to the exit of market making capacity. This will reduce liquidity during unstressed periods, and could even lead to less liquidity supply in stressed periods: a smaller number of firms, each obligated to offer more liquidity than it would otherwise choose, may supply less liquidity in aggregate than a larger number of unobligated firms, each of which is free to cut back during stressed periods.

In other words, there is no free lunch. Even assuming that EMMs are more likely to reduce supply during stressed periods than locals, it does not follow that a market making obligation is desirable in electronic environments. The putatively higher cost of supplying liquidity in an electronic environment is a feature of that environment. Requiring EMMs to bear that cost means that they have to recoup it at other times. Higher cost is higher cost, and the piper must be paid. The finding of the paper may be necessary to justify a market maker obligation, but it is clearly not sufficient.

There are some other issues that the authors really need to address. The descriptions of the methodologies in the paper are far too scanty. I don’t believe that I could replicate their analysis based on the description in the paper. As an example, they say “Bid-Ask Spreads are calculated as in the prior literature.” Well, there are many papers, and many ways of calculating spreads. Hell, there are multiple measures of spreads. A more detailed statement of the actual calculation is required in order to know exactly what was done, and to replicate it or to explore alternatives.

Comparisons between electronic and open outcry markets are challenging because the nature of the data are very different. We can observe the order book at every instant of time in an electronic market. We can also sequence everything-quotes, cancellations and trades-with exactitude. (In futures markets, anyways. Due to the lack of clock synchronization across trading venues, this is a problem in a fragmented market like US equities.) These factors mean that it is possible to see whether EMMs take liquidity or supply it: since we can observe the quote, we know that if an EMM sells (buys) at the offer (bid) it is supplying liquidity, but if it buys (sells) at the offer (bid) it is consuming liquidity.
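In code, the classification is nearly trivial once the standing quote at the moment of the trade is observable (the field names below are hypothetical):

```python
def liquidity_role(trade_price, best_bid, best_ask, emm_side):
    """Classify whether the EMM supplied or consumed liquidity on a given trade."""
    if trade_price >= best_ask:                    # trade at (or through) the standing offer
        return "supplied" if emm_side == "sell" else "consumed"
    if trade_price <= best_bid:                    # trade at (or through) the standing bid
        return "supplied" if emm_side == "buy" else "consumed"
    return "indeterminate"                         # executed inside the spread

# The EMM sells at the standing offer: it was the resting, passive side.
print(liquidity_role(100.01, 100.00, 100.01, emm_side="sell"))   # -> supplied
# The EMM buys at the offer: it lifted someone else's quote.
print(liquidity_role(100.01, 100.00, 100.01, emm_side="buy"))    # -> consumed
```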

Things are not nearly so neat in floor trading data. I have worked quite a bit with exchange Street Books. They convey much less information than the order book and the record of executed trades in electronic markets like Globex. Street Books do not report the prevailing bids and offers, so I don’t see how it is possible to determine definitively whether a local is supplying or consuming liquidity in a particular trade. The mere fact that a local (CTI1) is trading with a customer (CTI4) does not mean the local is supplying liquidity: he could be hitting the bid/lifting the offer of a customer limit order, but since we can’t see order type, we don’t know. Moreover, even to the extent that there are some bids and offers in the time and sales record, they tend to be incomplete (especially during fast markets) and time sequencing is highly problematic. I just don’t see how it is possible to do an apples-to-apples comparison of liquidity supply (and particularly the passivity/aggressiveness of market makers) between floor and electronic markets just due to the differences in data. Nonetheless, the paper purports to do that. Another reason to see more detailed descriptions of methodology and data.

One red flag indicates that the floor data may have some problems: the reported maximum bid-ask spread in the floor sample is 26.48 percent!!! 26.48 percent? Really? The 75th percentile spread is .47 percent. Given a $60 price, that's almost 30 ticks. Color me skeptical. Another reason why a much more detailed description of methodologies is essential.

Another technical issue is endogeneity. Liquidity affects volatility, but the paper uses volatility as one of its measures of stressed markets in its study of how stress affects liquidity. This creates an endogeneity (circularity, if you will) problem. It would be preferable to use some instrument for stressed market conditions. Instruments are always hard to come up with, and I don't have one off the top of my head, but Yadav et al should give some serious thought to identifying/creating such an instrument.

Moreover, the main claim of the paper is that EMMs' liquidity supply is more sensitive to the toxicity of order flow than locals' liquidity supply. The authors use order imbalance (CTI4 buys minus CTI4 sells, or the absolute value thereof more precisely), which is one measure of toxicity, but there are others. I would prefer a measure of customer (CTI4) alpha. Toxic (i.e., informed) order flow predicts future price movements, and hence when customer orders realize high alphas, it is likely that customers are more informed than usual. It would therefore be interesting to see the sensitivities of liquidity supply in the different trading environments to order flow toxicity as measured by CTI4 alphas.

I will note yet again that market maker actions to cut liquidity supply when adverse selection problems are severe is not necessarily a bad thing. Informed trading can be a form of rent seeking, and if EMMs are better able to detect informed trading and withdraw liquidity when informed trading is rampant, this form of rent seeking may be mitigated. Thus, greater sensitivity to toxicity could be a feature, not a bug.

All that said, I consider this paper a laudable effort that asks serious questions, and attempts to answer them in a rigorous way. The results are interesting and plausible, but the sketchy descriptions of the methodologies gives me reservations about these results. But by far the biggest issue is that of the forest and trees. What is really interesting is whether electronic markets are more or less liquid in different market environments than floor markets. Even if liquidity supply is flightier in electronic markets, they can still outperform floor based markets in both unstressed and stressed environments. The huge disparity in spreads reported in the paper suggests a vast difference in liquidity on average, which suggests a vast difference in liquidity in all different market environments, stressed and unstressed. What we really care about is liquidity outcomes, as measured by spreads, depth, price impact, etc. This is the really interesting issue, but one that the paper does not explore.

But that’s the beauty of academic research, right? Milking the same data for multiple papers. So I suggest that Pradeep, Michel and Vikas keep sitting on that milking stool and keep squeezing that . . . data ;-) Or provide the data to the rest of us out there and let us give it a tug.


July 11, 2014

25 Years Ago Today Ferruzzi Created the Streetwise Professor

Filed under: Clearing,Commodities,Derivatives,Economics,Exchanges,HFT,History,Regulation — The Professor @ 9:03 am

Today is the 25th anniversary of the most important event in my professional life. On 11 July, 1989, the Chicago Board of Trade issued an Emergency Order requiring all firms with positions in July 1989 soybean futures in excess of the speculative limit to reduce those positions to the limit over five business days in a pro rata fashion (i.e., 20 percent per day, or faster). Only one firm was impacted by the order, Italian conglomerate Ferruzzi, SA.

Ferruzzi was in the midst of an attempt to corner the market, as it had done in May, 1989. The EO resulted in a sharp drop in soybean futures prices and a jump in the basis: for instance, by the time the contract went off the board on 20 July, the basis at NOLA had gone from zero to about 50 cents, by far the largest jump in that relationship in the historical record.

The EO set off a flurry of legal action. Ferruzzi tried to obtain an injunction against the CBT. Subsequently, farmers (some of whom had dumped truckloads of beans at the door of the CBT) sued the exchange. Moreover, a class action against Ferruzzi was also filed. These cases took years to wend their ways through the legal system. The farmer litigation (in the form of Sanner v. CBT) wasn’t decided (in favor of the CBT) until the fall of 2002. The case against Ferruzzi lasted somewhat less time, but still didn’t settle until 2006.

I was involved as an expert in both cases. Why?

Well, pretty much everything in my professional career post-1990 is connected to the Ferruzzi corner and CBT EO, in a knee-bone-connected-to-the-thigh-bone kind of way.

The CBT took a lot of heat for the EO. My senior colleague, the late Roger Kormendi, convinced the exchange to fund an independent analysis of its grain and oilseed markets to attempt to identify changes that could prevent a recurrence of the episode. Roger came into my office at Michigan, and told me about the funding. Knowing that I had worked in the futures markets before, he asked me to participate in the study. I said that I had only worked in financial futures, but I could learn about commodities, so I signed on: it sounded interesting, my current research was at something of a standstill, and I am always up for learning something new. I ended up doing about 90 percent of the work and getting 20 percent of the money :-P but it was well worth it, because of the dividends it paid in the subsequent quarter century. (Putting it that way makes me feel old. But this all happened when I was a small child. Really!)

The report I (mainly) wrote for the CBT turned into a book, Grain Futures Contracts: An Economic Appraisal. (Available on Amazon! Cheap! Buy two! I see exactly $0.00 of your generous purchases.) Moreover, I saw the connection between manipulation and industrial organization economics (which was my specialization in grad school): market power is a key concept in both. So I wrote several papers on market power manipulation, which turned into a book. (Also available on Amazon! And on Kindle: for some strange reason, it was one of the first books published on Kindle.)

The issue of manipulation led me to try to understand how it could best be prevented or deterred. This led me to research self-regulation, because self-regulation was often advanced as the best way to tackle manipulation. This research (and the anthropological field work I did working on the CBT study) made me aware that exchange governance played a crucial role, and that exchange  governance was intimately related to the fact that exchanges are non-profit firms. So of course I had to understand why exchanges were non-profits (which seemed weird given that those who trade on them are about as profit-driven as you can get), and why they were governed in the byzantine, committee-dominated way they were. Moreover, many advocates of self-regulation argued that competition forced exchanges to adopt efficient rules. Observing that exchanges in fact tended to be monopolies, I decided I needed to understand the economics of competition between execution venues in exchange markets. This caused me to write my papers on market macrostructure, which is still an active area of investigation: I am writing a book on that subject. This in turn produced many of the conclusions that I have drawn about HFT, RegNMS, etc.

Moreover, given that I concluded that self-regulation was in fact a poor way to address manipulation (because I found exchanges had poor incentives to do so), I examined whether government regulation or private legal action could do better. This resulted in my work on the efficiency of ex post deterrence of manipulation. My conclusions about the efficiency of ex post deterrence rested on my findings that manipulated prices could be distinguished reliably from competitive prices. This required me to understand the determinants of competitive prices, which led to my research on the dynamics of storable commodity prices that culminated in my 2011 book. (Now available in paperback on Amazon! Kindle too.)

In other words, pretty much everything in my CV traces back to Ferruzzi. Even the clearing-related research, which also has roots in the 1987 Crash, is due to Ferruzzi: I wouldn’t have been researching any derivatives-related topics otherwise.

My consulting work, and in particular my expert witness work, stems from Ferruzzi. The lead counsel in the class action against Ferruzzi came across Grain Futures Contracts in the CBT bookstore (yes, they had such a thing back in the day), and thought that I could help him as an expert. After some hesitation (attorneys being very risk averse, and hence reluctant to hire someone without testimonial experience) he hired me. The testimony went well, and that was the launching pad for my expert work.

I also did work helping to redesign the corn and soybean contracts at the CBT, and the canola contract in Winnipeg: these redesigned contracts (based on shipping receipts) are the ones traded today. Again, this work traces its lineage to Ferruzzi.

Hell, this was even my introduction to the conspiratorial craziness that often swirls around commodity markets. Check out this wild piece, which links Ferruzzi (“the Pope’s soybean company”) to Marc Rich, the Bushes, Hillary Clinton, Vince Foster, and several federal judges. You cannot make up this stuff. Well, you can, I guess, as a quick read will soon convince you.

I have other, even stranger connections to Hillary and Vince Foster, which in a more indirect way also trace back to Ferruzzi. But that's a story for another day.

There’s even a Russian connection. One of Ferruzzi’s BS cover stories for amassing a huge position was that it needed the beans to supply big export sales to the USSR. These sales were in fact fictitious.

Ferruzzi was a rather outlandish company that eventually collapsed in 1994. Like many Italian companies, it was leveraged out the wazoo. Moreover, it had become enmeshed in the Italian corruption/mob investigations of the early 1990s, and its chairman, Raul Gardini, committed suicide in the midst of the scandal.

The traders who carried out the corners were located in stylish Paris, but they were real commodity cowboys of the old school. Learning about that was educational too.

To put things in a nutshell: some crazy Italians, and the English and American traders who worked for them, get the credit (or the blame) for creating the Streetwise Professor. Without them, God only knows what the hell I would have done for the last 25 years. But because of them, I raced down the rabbit hole of commodity markets. And man, have I seen some strange and interesting things on that trip. Hopefully I will see some more, and if I do, I'll share them with you right here.


July 8, 2014

The Securities Market Structure Regulation Book Club

Filed under: Derivatives,Economics,Exchanges,Politics,Regulation — The Professor @ 4:30 pm

There was another hearing on HFT on Capitol Hill today, in the Senate. The best way to summarize it was that it reminded me of an evening at the local bookstore, with authors reading selections from their books.

Two examples suffice. Citadel’s Ken Griffin (whom I called out for talking his book on Frankendodd years ago) heavily criticized dark pools, and called for much heavier regulation of them. But he sang the praises of purchased order flow, and warned against any regulation of it.

So, go out on a limb and bet that (a) Citadel does not operate a dark pool, and (b) Citadel is one of the biggest purchasers of order flow, and you’ll be a winner!

The intellectually respectable case against dark pools and payment for order flow is the same. Both “cream skim” uninformed orders from the exchanges, leaving the exchange order flow more informed (i.e., more toxic), thereby reducing exchange liquidity by increasing adverse selection costs. I’m not saying that I agree with this case, but I do recognize that it is at least grounded in economics, and that an intellectually consistent critic of dark pools would also criticize purchased order flow.
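To make that logic concrete, here is a minimal sketch of the adverse-selection argument in the spirit of a Glosten-Milgrom setup. It is my own stylization, not anything presented at the hearing; the asset values, the 10 percent informed share, and the 50 percent internalization rate are purely illustrative assumptions.

# A stylized Glosten-Milgrom-style calculation of the cream-skimming argument.
# Assumption: the asset is worth v_high or v_low with equal probability, some
# fraction of orders is informed, and a competitive market maker sets quotes
# so that each trade breaks even on average.

def gm_spread(informed_share, v_high=101.0, v_low=99.0):
    # Zero-profit ask and bid when `informed_share` of incoming orders is informed.
    # In this two-value setup the spread collapses to informed_share * (v_high - v_low).
    mid = 0.5 * (v_high + v_low)
    ask = informed_share * v_high + (1 - informed_share) * mid
    bid = informed_share * v_low + (1 - informed_share) * mid
    return ask - bid

alpha = 0.10     # assumed share of informed orders in the total flow
skimmed = 0.50   # assumed share of *uninformed* flow internalized off-exchange

# Informed share of the flow left on the exchange once uninformed orders are skimmed off
alpha_exchange = alpha / (alpha + (1 - alpha) * (1 - skimmed))

print(f"Exchange spread with all flow on exchange: {gm_spread(alpha):.3f}")
print(f"Exchange spread after cream skimming:      {gm_spread(alpha_exchange):.3f}")

On those assumed numbers the exchange spread nearly doubles, which is the sense in which both dark pools and purchased order flow raise adverse selection costs for whoever is left trading on the lit market.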

But some people have books to sell.

The other example is Jeffrey Sprecher of ICE, which owns and operates the NYSE. Sprecher lamented the fragmentation of the equity markets, and praised the lack of fragmentation of futures markets. But he went further. He said that futures markets were competitive and not fragmented.

Tell me another one.

Yes, there is limited head-to-head competition in some futures contracts, such as WTI and Brent. But these are the exceptions, not the rule. Futures exchanges do not compete head to head in any other major contract. Execution in the equity market is far more competitive than in the futures market. Multiple equities exchanges compete vigorously, and the socialization of order flow due to RegNMS makes that competition possible. This is why the equities exchange business is low margin, and not very profitable. Futures exchanges own their order flow, and since liquidity attracts liquidity, one exchange tends to dominate trading in a particular instrument. So yes, futures markets are not fragmented, but no, they are not competitive. These things go together, regardless of what Sprecher says. He wants to go back to the day when the NYSE was the dominant exchange and its members earned huge rents. That requires undoing a lot of what is in RegNMS.
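For the liquidity-attracts-liquidity point, here is a deliberately crude toy simulation, again my own illustration rather than anything from the hearing: per-trade costs fall with a venue's share of volume, traders drift toward the cheaper venue, and trading tips to a single exchange. The cost function and every parameter below are assumptions chosen only to show the tipping dynamic.

# A toy tipping model of why proprietary order flow plus liquidity network
# effects tend to leave one futures exchange dominant in a given contract.
# Assumption: per-trade cost falls with a venue's volume share, and a small
# fraction of traders switches to the cheaper venue each period.

def trading_cost(volume_share, fixed=0.10, impact=1.0):
    # Assumed cost function: a fixed fee plus an impact term that shrinks as
    # the venue's share of volume grows (deeper book, tighter spreads).
    return fixed + impact / (0.01 + volume_share)

share_a = 0.55                      # venue A starts with a slight edge
for period in range(40):
    share_b = 1.0 - share_a
    # traders migrate toward whichever venue is currently cheaper to trade on
    if trading_cost(share_a) < trading_cost(share_b):
        share_a = min(1.0, share_a + 0.05)
    else:
        share_a = max(0.0, share_a - 0.05)

print(f"Venue A share after tipping: {share_a:.2f}")   # ends at (or near) 1.00

When order flow is socialized, as RegNMS does for equities by forcing flow to whichever venue posts the best price, this feedback loop is broken, which is why equities trading fragments and competes while futures trading concentrates.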

Those were some of the gems from the witness side of the table. From the questioner side, we were treated to another display of Elizabeth Warren’s arrogant ignorance and idiocy. The scary thought is that the left views her as the next Obama who will deny Hillary and vault to the presidency. God save us.

Overall the hearing demonstrated what I’ve been saying for years. Market structure, and the regulations that drive market structure, have huge distributive effects. Everybody says that they are in favor of efficient markets, but I’m sure you’ll be shocked to learn that their definition of what is efficient happens to correspond with what benefits their firms. The nature of securities/derivatives trading creates rents. The battle over market structure is a classic rent seeking struggle. In rent seeking struggles, everybody reads out of their books. Everybody.


July 1, 2014

What Gary Gensler, the Igor of Frankendodd, Hath Wrought

I’ve spent quite a bit of time in Europe lately, and this gives a rather interesting perspective on US derivatives regulatory policy. (I’m in London now for Camp Alphaville.)

Specifically, on the efforts of Frankendodd's Igor, Gary Gensler, to make US regulation extraterritorial (read: imperialist).

Things came to a head when the director of the CFTC's Clearing and Risk division, Ananda K. Radhakrishnan, said that ICE and LCH, both of which clear US-traded futures contracts out of the UK, could avoid cross-border issues arising from inconsistencies between EU and US regulation (relating mainly to collateral segregation rules) by moving to the US:

Striking a marked contrast with European regulators calling for a collaborative cross-border approach to regulation, a senior CFTC official said he was “tired” of providing exemptions, referring in particular to discrepancies between the US Dodd-Frank framework and the European Market Infrastructure Regulation on clearing futures and the protection of related client collateral.

“To me, the first response cannot be: ‘CFTC, you’ve got to provide an exemption’,” said Ananda Radhakrishnan, the director of the clearing and risk division at the CFTC.

Radhakrishnan singled out LCH.Clearnet and the InterContinental Exchange as two firms affected by the inconsistent regulatory frameworks on listed derivatives as a result of clearing US business through European-based derivatives clearing organisations (DCOs).

“ICE and LCH have a choice. They both have clearing organisations in the United States. If they move the clearing of these futures contracts… back to a US only DCO I believe this conflict doesn’t exist,” said Radhakrishnan.

“These two entities can engage in some self-help. If they do that, neither [regulator] will have to provide an exemption.”

It was not just what he said, but how he said it. The "I'm tired" rhetoric, and his general mien, were quite grating to Europeans.

The issue is whether the US will accept EU clearing rules as equivalent, and whether the EU will reciprocate. Things are pressing, because there is a December deadline for the EU to recognize US CCPs as equivalent. If this doesn’t happen, European banks that use a US CCP (e.g., Barclays holding a Eurodollar futures position cleared through the CME) will face a substantially increased capital charge on the cleared positions.
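To see why the capital charge bites, here is a back-of-the-envelope sketch. Under Basel-style rules a bank's trade exposure to a qualifying CCP carries a 2 percent risk weight, while exposure to a non-qualifying CCP is risk-weighted like an ordinary counterparty exposure. The $100 million exposure and the 20 percent non-qualifying risk weight below are my own illustrative assumptions, not figures from the EU or US rulebooks.

# Back-of-the-envelope capital comparison for a European bank clearing through
# a US CCP. The exposure size and the non-qualifying risk weight are
# illustrative assumptions.

exposure = 100_000_000      # assumed trade exposure to the CCP, in dollars
capital_ratio = 0.08        # standard 8% minimum capital requirement

rw_qualifying = 0.02        # 2% risk weight for a qualifying CCP (QCCP)
rw_non_qualifying = 0.20    # assumed weight if the CCP is not recognized as equivalent

capital_qccp = exposure * rw_qualifying * capital_ratio
capital_non_qccp = exposure * rw_non_qualifying * capital_ratio

print(f"Capital held against a qualifying CCP:     ${capital_qccp:,.0f}")
print(f"Capital held against a non-qualifying CCP: ${capital_non_qccp:,.0f}")

On those assumed numbers the required capital jumps by a factor of ten, which is why a Barclays clearing Eurodollar futures through the CME cares very much whether the CFTC and the EU reach an equivalence deal before the December deadline.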

Right now there is a huge game of chicken going on between the EU and the US. In response to what Europe views as US obduracy, the Europeans approved five Asian/Australasian CCPs as operating under rules equivalent to Europe's, allowing European banks to clear through them without incurring the punitive capital charges. To emphasize the point, the EU's head of financial services, Michel Barnier, said the US could get the same treatment if it deferred to EU rules (something which Radhakrishnan basically said he was tired of talking about):

“If the CFTC also gives effective equivalence to third country CCPs, deferring to strong and rigorous rules in jurisdictions such as the EU, we will be able to adopt equivalence decisions very soon,” Barnier said.

Read this as a giant one finger salute from the EU to the CFTC.

So we have a Mexican standoff, and the clock is ticking. If the EU and the US don't resolve matters, the world derivatives markets will become even more fragmented. This will make them less competitive, which is cruelly ironic given that one of Gensler's claims was that his regulatory agenda would make the markets more competitive. This was predictably wrong, and some predicted this unintended perverse outcome.

Another part of Gensler's agenda was to extend US regulatory reach to entities operating overseas whose failure could threaten US financial institutions. One of his major criteria for identifying such entities was whether they are guaranteed by a US institution. Those who are so guaranteed are considered "US persons," and hence subject to the entire panoply of Frankendodd requirements, including notably the SEF mandate. The SEF mandate is loathed by European corporates, so this would further fragment the swaps market. (And as I have said often before, since end users are the alleged beneficiaries of the SEF mandate, as Gary oft' told us, it is passing strange that they are hell-bent on escaping it.)

European affiliates of US banks that carried guarantees from their US parents have responded by terminating the guarantees. Problem solved, right? The dreaded guarantees that could spread contagion from Europe to the US are gone, after all.

But US regulators and legislators view this as a means of evading Frankendodd. Which illustrates the insanity of it all. The SEF mandate has nothing to do with systemic risk or contagion. Since the ostensible purpose of the DFA was to reduce systemic risk, it was totally unnecessary to include the SEF mandate. But in its wisdom, the US Congress did, and Igor pursued this mandate with relish.

The attempts to dictate the mode of trade execution, even by entities that cannot directly spread contagion to the US via guarantees, epitomize the overreach of the US. Any coherent systemic risk rationale is totally absent. The mode of execution is of no systemic importance. The elimination of guarantees eliminates the ability of failing foreign affiliates to impact US financial institutions directly. If anything, the US should be happy, because some of the dread interconnections that Igor Gensler inveighed against have been severed.

But the only logic that matters here is that of control. And the US and the Europeans are fighting over control. The ultimate outcome will be a more fragmented, less competitive, and likely less robust financial system.

This is just one of the things that Gensler hath wrought. I could go on. And in the future I will.


