Streetwise Professor

July 25, 2014

Benchmark Blues

Pricing benchmarks have been one of the casualties of the financial crisis. Not because the benchmarks-like Libor, Platts’ Brent window, ISDA Fix, the Reuters FX window or the gold fix-contributed in any material way to the crisis. Instead, the post-crisis scrutiny of the financial sector turned over a lot of rocks, and among the vermin crawling underneath were abuses of benchmarks.

Every major benchmark has fallen under deep suspicion, and has been the subject of regulatory action or class action lawsuits. Generalizations are difficult because every benchmark has its own problems. It is sort of like what Tolstoy said about unhappy families: every flawed benchmark is flawed in its own way. Some, like Libor, are vulnerable to abuse because they are constructed from the estimates/reports of interested parties. Others, like the precious metals fixes, are problematic due to a lack of transparency and limited participation. Declining production and large parcel sizes bedevil Brent.

But some basic conclusions can be drawn.

First-and this should have been apparent in the immediate aftermath of the natural gas price reporting scandals of the early-2000s-benchmarks based on the reports of self-interested parties, rather than actual transactions, are fundamentally flawed. In my energy derivatives class I tell the story of AEP, which the government discovered kept a file called “Bogus IFERC.xls” (IFERC being an abbreviation for Inside Ferc, the main price reporting publication for gas and electricity) that included thousands of fake transactions that the utility reported to Platts.

Second, and somewhat depressingly, although benchmarks based on actual transactions are preferable to those based on reports, in many markets the number of transactions is small. Even if transactors do not attempt to manipulate, the limited number of transactions tends to inject some noise into the benchmark value. What’s more, benchmarks based on a small number of transactions can be influenced by a single trade or a small number of trades, thereby creating the potential for manipulation.

I refer to this as the bricks without straw problem. Just like the Jews in Egypt were confounded by Pharaoh’s command to make bricks without straw, modern market participants are stymied in their attempts to create benchmarks without trades. This is a major problem in some big markets, notably Libor (where there are few interbank unsecured loans) and Brent (where large parcel sizes and declining Brent production mean that there are relatively few trades: Platts has attempted to address this problem by expanding the eligible cargoes to include Ekofisk, Oseberg, and Forties, and by making some baroque adjustments based on CFD and spread trades and monthly forward trades). This problem is not amenable to an easy fix.

Third, and perhaps even more depressingly, even transaction-based benchmarks derived from markets with a decent amount of trading activity are vulnerable to manipulation, and the incentive to manipulate is strong. Some changes can be made to mitigate these problems, but they can’t be eliminated through benchmark design alone. Some deterrence mechanism is necessary.

The precious metals fixes provide a good example of this. The silver and gold fixes have historically been based on transactions prices from an auction that Walras would recognize. But participation was limited, and some participants had the market power and the incentive to use it, and have evidently pushed prices to benefit related positions. For instance, in the recent allegation against Barclays, the bank was able to trade in sufficient volume to move the fix price enough to benefit related positions in digital options. When there is a large enough amount of derivatives positions with payoffs tied to a benchmark, someone has the incentive to manipulate that benchmark, and many have the market power to carry out those manipulations.

The problems with the precious metals fixes have led to their redesign: a new silver fix method has been established and will go into effect next month, and the gold fix will be modified, probably along similar lines. The silver fix will replace the old telephone auction that operated via a few members trading on their own account and representing customer orders with a more transparent electronic auction operated by CME and Reuters. This will address some of the problems with the old fix. In particular, it will reduce the information advantage that the fixing dealers had that allowed them to trade profitably on other markets (e.g., gold futures and OTC forwards and options) based on the order flow information they could observe during the auction. Now everyone will be able to observe the auction via a screen, and will be less vulnerable to being picked off in other markets. It is unlikely, however, that the new mechanism will mitigate the market power problem. Big trades will move markets in the new auction, and firms with positions that have payoffs that depend on the auction price may have an incentive to make those big trades to advantage those positions.
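To fix ideas, here is a bare-bones Python sketch of how a fixing auction of this general type can work: the operator announces a price, participants submit buy and sell volumes at that price, and the price is adjusted until the imbalance falls within a tolerance. The adjustment rule, the tolerance, and the toy demand and supply curves are my own illustrative assumptions, not the actual CME/Reuters methodology.

def run_fixing_auction(demand, supply, start_price, tol=0.5, step=0.01, max_rounds=100):
    # demand(p), supply(p): aggregate volumes participants would buy/sell at price p
    price = start_price
    for _ in range(max_rounds):
        imbalance = demand(price) - supply(price)
        if abs(imbalance) <= tol:                    # buy and sell interest roughly matched
            return price, imbalance
        price += step if imbalance > 0 else -step    # excess demand: raise the price
    return price, imbalance

# Toy linear demand and supply curves, purely for illustration.
fix, resid = run_fixing_auction(lambda p: 100 - 4 * p, lambda p: 2 * p - 20, start_price=19.0)
print(f"fix price ~ {fix:.2f}, residual imbalance {resid:.2f}")

The mechanics also make the market power point plain: a participant who submits a big enough order moves the clearing price mechanically, transparency or no.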

Along these lines, it is important to note that many liquid and deep futures markets have been plagued by “bang the close” problems. For instance, Amaranth traded large volumes in the settlement period of expiring natural gas futures during three months of 2006 in order to move prices in ways that benefited its OTC swaps positions. The CFTC recently settled with the trading firm Optiver that allegedly banged the close in crude, gasoline, and heating oil in March, 2007. These are all liquid and deep markets, but are still vulnerable to “bullying” (as one Optiver trader characterized it) by large traders.

The incentives to cause an artificial price for any major benchmark will always exist, because one of the main purposes of benchmarks is to provide a mechanism for determining cash flows for derivatives. The benchmark-derivatives market situation resembles an inverted pyramid, with large amounts of cash flows from derivatives trades resting on a relatively small number of spot transactions used to set the benchmark value.

One way to try to ameliorate this problem is to expand the number of transactions at the point of the pyramid by expanding the window of time over which transactions are collected for the purpose of calculating the benchmark value: this has been suggested for the Platts Brent market, and for the FX fix. A couple of remarks. First, although this would tend to mitigate market power, it may not be sufficient to eliminate the problem: Amaranth manipulated a price that was based on a VWAP over a relatively long 30-minute interval. In contrast, in the Moore case (a manipulation case involving platinum and palladium brought by the CFTC) and Optiver, the windows were only two minutes long. Second, there are some disadvantages of widening the window. Some market participants prefer a benchmark that reflects a snapshot of the market at a point in time, rather than an average over a period of time. This is why Platts vociferously resists calls to extend the duration of its pricing window. There is a tradeoff in sources of noise. A short window is more affected by the larger sampling error inherent in the smaller number of transactions that occurs in a shorter interval, and by the noise resulting from greater susceptibility to manipulation when a benchmark is based on a smaller number of trades. However, an average taken over a time interval is a noisy estimate of the price at any point of time during that interval due to the random fluctuations in the “true” price driven by information flow. I’ve done some numerical experiments, and either the sampling error/manipulation noise has to be pretty large, or the volatility of the “true” price must be pretty low for it to be desirable to move to a longer interval.
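For the curious, here is a stripped-down Python version of the kind of numerical experiment I have in mind. It simulates a random-walk “true” price, forms a benchmark from noisy trades collected over windows of different lengths, and compares mean squared errors against the end-of-interval true price. Every parameter (volatility, per-trade noise, trade arrival rate) is invented for illustration and is not calibrated to any particular market.

import numpy as np

rng = np.random.default_rng(0)

def benchmark_mse(window, total_minutes=120, sigma_per_min=0.03,
                  noise_per_trade=0.20, trades_per_min=2, n_sims=5000):
    # MSE of a windowed-average benchmark as an estimate of the true end-of-interval price.
    # The true price follows a random walk; each trade observes it with idiosyncratic noise
    # (a stand-in for sampling error and manipulation pressure).
    errs = np.empty(n_sims)
    for i in range(n_sims):
        steps = rng.normal(0.0, sigma_per_min, total_minutes)
        path = 100.0 + np.cumsum(steps)
        true_final = path[-1]
        window_prices = path[-window:]                       # prices during the benchmark window
        trades = (np.repeat(window_prices, trades_per_min)
                  + rng.normal(0.0, noise_per_trade, window * trades_per_min))
        errs[i] = trades.mean() - true_final
    return np.mean(errs ** 2)

for w in (2, 5, 15, 30, 60):
    print(f"{w:>3}-minute window: MSE = {benchmark_mse(w):.4f}")

Lengthening the window averages away trade-level noise but averages over increasingly stale prices; which effect dominates depends on the ratio of the two noise sources, which is the point made above.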

Other suggestions include encouraging diversity in benchmarks. The other FSB-the Financial Stability Board-recommends this. Darrell Duffie and Jeremy Stein lay out the case here (which is a much easier read than the 750+ pages of the longer FSB report).

Color me skeptical. Duffie and Stein recognize that the market has a tendency to concentrate on a single benchmark. It is easier to get into and out of positions in a contract which is similar to what everyone else is trading. This leads to what Duffie and Stein call “the agglomeration effect,” which I would refer to as a “tipping” effect: the market tends to tip to a single benchmark. This is what happened in Libor. Diversity is therefore unlikely in equilibrium, and the benchmark that survives is likely to be susceptible to either manipulation, or the bricks without straw problem.

Of course not all potential benchmarks are equally susceptible. So it would be good if market participants coordinated on the best of the possible alternatives. As Duffie and Stein note, there is no guarantee that this will be the case. This brings to mind the as yet unresolved debate over standard setting generally, in which some argue that the market’s choice of VHS over the allegedly superior Betamax technology, or the dominance of QWERTY over the purportedly better Dvorak keyboard (or Word vs. Word Perfect), demonstrates that the selection of a standard by a market process routinely results in a suboptimal outcome, but where others (notably Stan Liebowitz and Stephen Margolis) argue that these stories of market failure are fairy tales that do not comport with the actual histories. So the relevance of the “bad standard (benchmark) market failure” is very much an open question.

Darrell and Jeremy suggest that a wise government can make things better:

This is where national policy makers come in. By speaking publicly about the advantages of reform — or, if necessary, by using their power to regulate — they can guide markets in the desired direction. In financial benchmarks as in tap water, markets might not reach the best solution on their own.

Putting aside whether government regulators are indeed so wise in their judgments, there is  the issue of how “better” is measured. Put differently: governments may desire a different direction than market participants.

Take one of the suggestions that Duffie and Stein raise as an alternative to Libor: short term Treasuries. It is almost certainly true that there is more straw in the Treasury markets than in any other rates market. Thus, a Treasury bill-based benchmark is likely to be less susceptible to manipulation than a benchmark based on any other rates market. (Though not immune altogether, as the Pimco episode in June ’05 10 Year T-notes, the squeezes in the long bond in the mid-to-late-80s, the Salomon 2 year squeeze in ’91, and the chronic specialness in some Treasury issues prove.)

But that’s not of much help if the non-manipulated benchmark is not representative of the rates that market participants want to hedge. Indeed, when swap markets started in the mid-80s, many contracts used Treasury rates to set the floating leg. But the basis between Treasury rates and the rates at which banks borrowed and lent was fairly variable. So a Treasury-based swap contract had more basis risk than Libor-based contracts. This is precisely why the market moved to Libor, and when the tipping process was done, Libor was the dominant benchmark not just for derivatives but also for floating rate loans, mortgages, etc.

Thus, there may be a trade-off between basis risk and susceptibility to manipulation (or to noise arising from sampling error due to a small number of transactions or averaging over a wide time window). Manipulation can lead to basis risk, but that basis risk can be smaller than the basis risk arising from a quality mismatch (e.g., a credit risk mismatch between default risk-free Treasury rates and a defaultable rate that private borrowers pay). I would wager that regulators would prefer a standard that is less subject to manipulation, even if it has more basis risk, because they don’t internalize the costs associated with basis risk. Market participants may have a very different opinion. Therefore, the “desired direction” may depend very much on whom you ask.
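The tradeoff is easy to illustrate with a toy simulation (every number below is invented): hedge a bank-funding exposure with either a benchmark that tracks that exposure closely but is occasionally manipulated, or a clean, manipulation-proof benchmark with a volatile quality basis.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# The exposure: a floating rate the hedger actually pays (think bank funding cost).
funding_rate = 0.03 + rng.normal(0.0, 0.0040, n)

# Benchmark 1: tracks funding closely but suffers occasional artificial moves.
manip = rng.binomial(1, 0.05, n) * rng.normal(0.0, 0.0030, n)
matched_bench = funding_rate + rng.normal(0.0, 0.0005, n) + manip

# Benchmark 2: hard to manipulate (Treasury-like) but with a volatile credit/quality basis.
clean_bench = funding_rate - 0.005 + rng.normal(0.0, 0.0025, n)

for name, bench in [("matched-but-manipulable", matched_bench),
                    ("clean-but-mismatched", clean_bench)]:
    basis = funding_rate - bench
    print(f"{name:>24}: basis std = {basis.std():.5f}")

With these made-up parameters the occasional manipulation produces a smaller basis than the persistent quality mismatch does, which is exactly the comparison that regulators and market participants weigh differently.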

Putting all this together, I conclude we live in a fallen world. There is no benchmark Eden. Benchmark problems are likely to be chronic for the foreseeable future. And beyond. Some improvements are definitely possible, but benchmarks will always be subject to abuse. Their very source of utility-that they are a visible price that can be used to determine payoffs on vast sums of other contracts-always provides a temptation to manipulate.

Moving to transactions-based mechanisms eliminates outright lying as a manipulation strategy, but it does not eliminate the potential for market power abuses. The benchmarks that would be least vulnerable to market power abuses are not necessarily the ones that best reflect the exposures that market participants face.

Thus, we cannot depend on benchmark design alone to address manipulation problems. The means, motive, and opportunity to manipulate even transactions-based benchmarks will endure. This means that reducing the frequency of manipulation requires some sort of deterrence mechanism, either through government action (as in the Libor, Optiver, Moore, and Amaranth cases) or private litigation (examples of which include all the aforementioned cases, plus some more, like Brent).  It will not be possible to “solve” the benchmark problems by designing better mechanisms, then riding off into the sunset like the Lone Ranger. Our work here will never be done, Kimo Sabe.*

* Stream of consciousness/biographical detail of the day. The phrase “Kimo Sabe” was immortalized by Jay Silverheels-Tonto in the original Lone Ranger TV series. My GGGGF, Abel Sherman, was slain and scalped by an Indian warrior named Silverheels during the Indian War in Ohio in 1794. Silverheels made the mistake of bragging about his feat to a group of lumbermen, who just happened to include Abel’s son. Silverheels was found dead on a trail in the woods the next day, shot through the heart. Abel (a Revolutionary War vet) was reputedly the last white man slain by Indians in Washington County, OH. His tombstone is on display in the Campus Martius museum in Marietta. The carving on the headstone is very un-PC. It reads:

Here lyes the body of Abel Sherman who fell by the hand of the Savage on the 15th of August 1794, and in the 50th year of  his age.

Here’s a picture of it:


The stream by which Abel was killed is still known as Dead Run, or Dead Man’s Run.


July 1, 2014

What Gary Gensler, the Igor of Frankendodd, Hath Wrought

I’ve spent quite a bit of time in Europe lately, and this gives a rather interesting perspective on US derivatives regulatory policy. (I’m in London now for Camp Alphaville.)

Specifically, on the efforts of Frankendodd’s Igor, Gary Gensler, to make US regulation extraterritorial (read: imperialist).

Things came to a head when the head of the CFTC’s Clearing and Risk  division, Ananda K. Radhakrishnan, said that ICE and LCH, both of which clear US-traded futures contracts out of the UK, could avoid cross-border issues arising from inconsistencies between EU and US regulation (relating mainly to collateral segregation rules) by moving to the US:

Striking a marked contrast with European regulators calling for a collaborative cross-border approach to regulation, a senior CFTC official said he was “tired” of providing exemptions, referring in particular to discrepancies between the US Dodd-Frank framework and the European Market Infrastructure Regulation on clearing futures and the protection of related client collateral.

“To me, the first response cannot be: ‘CFTC, you’ve got to provide an exemption’,” said Ananda Radhakrishnan, the director of the clearing and risk division at the CFTC.

Radhakrishnan singled out LCH.Clearnet and the InterContinental Exchange as two firms affected by the inconsistent regulatory frameworks on listed derivatives as a result of clearing US business through European-based derivatives clearing organisations (DCOs).

“ICE and LCH have a choice. They both have clearing organisations in the United States. If they move the clearing of these futures contracts… back to a US only DCO I believe this conflict doesn’t exist,” said Radhakrishnan.

“These two entities can engage in some self-help. If they do that, neither [regulator] will have to provide an exemption.”

It was not just what he said, but how he said it. The “I’m tired” rhetoric, and his general mien, were quite grating to Europeans.

The issue is whether the US will accept EU clearing rules as equivalent, and whether the EU will reciprocate. Things are pressing, because there is a December deadline for the EU to recognize US CCPs as equivalent. If this doesn’t happen, European banks that use a US CCP (e.g., Barclays holding a Eurodollar futures position cleared through the CME) will face a substantially increased capital charge on the cleared positions.

Right now there is a huge game of chicken going on between the EU and the US. In response to what Europe views as US obduracy, the Europeans approved five Asian/Australasian CCPs as operating under rules equivalent to Europe’s, allowing European banks to clear through them without incurring the punitive capital charges. To emphasize the point, the EU’s head of financial services, Michel Barnier, said the US could get the same treatment if it deferred to EU rules (something which Radhakrishnan basically said he was tired of talking about):

“If the CFTC also gives effective equivalence to third country CCPs, deferring to strong and rigorous rules in jurisdictions such as the EU, we will be able to adopt equivalence decisions very soon,” Barnier said.

Read this as a giant one finger salute from the EU to the CFTC.

So we have a Mexican standoff, and the clock is ticking. If the EU and the US don’t resolve matters, the world derivatives markets will become even more fragmented. This will make them less competitive, which is cruelly ironic given that one of Gensler’s claims was that his regulatory agenda would make the markets more competitive. This was predictably wrong-and some predicted this unintended perverse outcome.

Another part of Gensler’s agenda was to extend US regulatory reach to entities operating overseas whose failure could threaten US financial institutions. One of his major criteria for identifying such entities was whether they are guaranteed by a US institution. Those who are so guaranteed are considered “US persons,” and hence subject to the entire panoply of Frankendodd requirements, including notably the SEF mandate. The SEF mandate is loathed by European corporates, so this would further fragment the swaps market. (And as I have said often before, since end users are the alleged beneficiaries of the SEF mandate-Gary oft’ told us so!-it is passing strange that they are hell-bent on escaping it.)

European affiliates of US banks that had guarantees from their US parents have responded by terminating the guarantees. Problem solved, right? The dreaded guarantees that could spread contagion from Europe to the US are gone, after all.

But US regulators and legislators view this as a means of evading Frankendodd. Which illustrates the insanity of it all. The SEF mandate has nothing to do with systemic risk or contagion. Since the ostensible purpose of the DFA was to reduce systemic risk, it was totally unnecessary to include the SEF mandate. But in its wisdom, the US Congress did, and Igor pursued this mandate with relish.

The attempts to dictate the mode of trade execution even by entities that cannot directly spread contagion to the US via guarantees epitomize the overreach of the US. Any coherent systemic risk rationale is totally absent. The mode of execution is of no systemic importance. The elimination of guarantees eliminates the ability of failing foreign affiliates to directly impact US financial institutions. If anything, the US should be happy, because some of the dread interconnections that Igor Gensler inveighed against have been severed.

But the only logic that matters here is that of control. And the US and the Europeans are fighting over control. The ultimate outcome will be a more fragmented, less competitive, and likely less robust financial system.

This is just one of the things that Gensler hath wrought. I could go on. And in the future I will.


June 18, 2014

The Klearing Kool Aid Hangover

Back in Houston after a long trip to Turkey, France, Switzerland, and the Netherlands speaking about various commodity and clearing related issues, plus some R&R. Last stop on the tour was Chicago, where the Chicago Fed put on a great event on Law and Finance. Clearing was at the center of the discussion. Trying to be as objective as possible, I think I can say that my critiques of clearing have had an influence on how scholars and practitioners (both groups being well-represented in Chicago) view clearing, and clearing mandates in particular. There is a deep skepticism, and a growing awareness that CCPs are not the systemic risk safeguard that most had believed in the period surrounding the adoption of Frankendodd. Ruben Lee’s lunch talk summarized the skeptical view well, and recognized my role in making the skeptic’s case. His remarks were echoed by others at the workshop. If only this had penetrated the skulls of legislators and regulators when it could have made a major difference.

And the hits keep on coming. Since about April 2010 in particular, the focus of my criticism of clearing mandates has been on the destabilizing effects of rigid marking-to-market and variation margin by CCPs. I emphasized this in several SWP posts, and also in my forthcoming article (in the Journal of Financial Market Infrastructures, a Risk publication) titled “A Bill of Goods.” So it was gratifying to read today that two scholars at the LSE, Ron Anderson and Karin Joeveer, used my analysis as the springboard for a more formal analysis of the issue.

The Anderson-Joeveer paper investigates collateral generally. It concludes that the liquidity implications of increased need for initial margin resulting from clearing mandates are not as concerning as the liquidity implications of greater variation margin flows that will result from a dramatic expansion of clearing.

Some of their conclusions are worth quoting in detail:

In addition, our analysis shows that moving toward central clearing with product specialized CCPs can greatly increase the numbers of margin movements which will place greater demands on a participant’s operational capacity and liquidity. This can be interpreted as tipping the balance of benefits and costs in favor of retaining bilateral OTC markets for a wider range of products and participants. Alternatively, assuming a full commitment to centralized clearing, it points out the importance of achieving consolidation and effective integration across infrastructures for a wider range of financial products. [Emphasis added.]

Furthermore:

A system relying principally on centralized clearing to mitigate counter-party risks creates increased demand for liquidity to service frequent margin calls. This can be met by opening up larger liquidity facilities, but indirectly this requires more collateral. To economize on the use of collateral, agents will try to limit liquidity usage, but this implies increased frequency of margin calls. This increases operational risks faced by CCPs which, given the concentration of risk in CCPs, raises the possibility that an idiosyncratic event could spill over into a system-wide event.

We have emphasized that collateral is only one of the tools used to control and manage credit risk. The notion that greater reliance on collateral will eliminate credit risk is illusory. Changing patterns in the use of collateral may not eliminate risk, but it will have implications for who will bear risks and on the costs of shifting risks. [Emphasis added.]

The G-20 stampede to impose clearing focused obsessively on counterparty credit risk, and ignored liquidity issues altogether. The effects of clearing on counterparty risk are vastly overstated (because the risk is mainly shifted, rather than reduced) and the liquidity effects have first-order systemic implications. Moving to a system which could increase margin flows by a factor of 10 (as estimated by Anderson-Joeveer), and which does so by increasing the tightness of the coupling of the system, is extremely worrisome. There will be large increases in the demand for liquidity in stressed market conditions that cause liquidity to dry up. Failures to get this liquidity in a timely fashion can cause the entire tightly-coupled system to break down.

As Ruben pointed out in his talk, the clearing stampede was based on superficial analysis and intended to achieve a political objective, namely, the desire to be seen as doing something. Pretty much everyone in DC and Brussels drank the Klearing Kool Aid, and now we are suffering the consequences.

Samuel Johnson said “Marry in haste, repent at leisure.” The same thing can be said of legislation and regulation.


June 6, 2014

A Model Solution?

Filed under: Economics,Financial crisis,Regulation — The Professor @ 5:55 am

Europe and the US have diverged in significant ways on post-crisis financial regulation. This has had myriad consequences, including fragmentation of liquidity (especially in the swaps market), which in turn leads to less competition as it is harder for US banks to compete for the business of European clients and vice versa; greater cost; and greater complexity.

One area where there was some prospect of a unified global framework was capital rules for banks, where it was hoped that the US would adopt Basel III. But evidently the Fed is resisting that, and will move forward with its own stress test-based framework, rather than the Basel III framework which permits banks to utilize their own models to evaluate risk and hence capital.

Not that Basel III is perfect, by any means, but this approach creates the problems mentioned above, plus some more in the bargain. One is mentioned in the article: a one-model-fits-all approach creates a monoculture problem that is vulnerable to catastrophic failure. I wrote about this problem quite a bit in 2009.

There is another problem as well. Although the stress test has some merits as a way of periodically evaluating capital adequacy, it  is not immediately clear how banks can evaluate the capital implications of particular transactions on a day-to-day basis in this system. If you can’t do that, you can’t price deals right or make capital allocation decisions. This is especially true if the details of the Fed model are kept secret, as will almost certainly be the case.

If you have a Basel III compliant capital model you can calculate the impact on capital due to an incremental change in a portfolio, and can hence price deals rationally and structure portfolios to achieve capital efficiency. This will be harder to do in a black-box stress test model. Not impossible but not easy either.
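To make the contrast concrete, here is a minimal Python sketch of the incremental-capital calculation, using a simple parametric VaR as a stand-in for a bank’s internal model. It is emphatically not Basel III itself, and the numbers are invented.

import numpy as np

def capital(cov, positions, multiplier=3.0, z=2.33):
    # Stand-in for an internal-model capital calculation: a 99% parametric VaR
    # scaled by a regulatory multiplier. Purely illustrative.
    var = z * np.sqrt(positions @ cov @ positions)
    return multiplier * var

# Toy two-factor book: existing positions plus a candidate trade.
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
book = np.array([10.0, -4.0])
trade = np.array([2.0, 5.0])

marginal = capital(cov, book + trade) - capital(cov, book)
print(f"incremental capital for the deal: {marginal:.3f}")
# With a transparent model this marginal cost can be priced into the deal;
# with a secret supervisory stress test, the bank can only guess at it.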


May 19, 2014

Deja Dit: Clearing My Spindle on . . . Clearing

Filed under: Clearing,Financial crisis,Politics,Regulation — The Professor @ 7:47 pm

Several clearing related stories, each of which gives me a sense of deja vu. Or deja dit, to be more accurate.

The Bank of England just released a paper warning about the potential pro-cyclicality of CCP initial margin methodologies. I have expressed concern about this for some time.

BofE expresses concern that pro-cyclicality threatens to turn a measure intended to reduce credit risk into a source of liquidity risk. This is another Clearing Cassandra theme. (Speaking of Cassandra, I will be returning to the old stomping grounds of Troy next week. And I don’t mean a city in upstate NY.)
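The mechanism is easy to see with a toy example: an initial margin model that scales with recent volatility, here a RiskMetrics-style EWMA, which is a stand-in for, not a replica of, any CCP’s actual methodology.

import numpy as np

rng = np.random.default_rng(2)

# A calm market followed by a stress episode with tripled volatility.
returns = np.concatenate([rng.normal(0, 0.01, 200), rng.normal(0, 0.03, 50)])

def ewma_vol(returns, lam=0.94, seed_vol=0.01):
    # RiskMetrics-style exponentially weighted volatility, a common ingredient of IM models.
    var = seed_vol ** 2
    vols = []
    for r in returns:
        var = lam * var + (1 - lam) * r ** 2
        vols.append(np.sqrt(var))
    return np.array(vols)

position_notional = 100_000_000
im = 2.33 * ewma_vol(returns) * position_notional   # margin sized to a 99% one-day move

print(f"IM before the stress episode: {im[199]:,.0f}")
print(f"IM at the end of the stress episode: {im[-1]:,.0f}")

Margin roughly triples right after the volatility spike; that is, it is called precisely when funding it is hardest. That is the pro-cyclicality problem in miniature.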

BofE recommends that CCPs make public their margin methodologies, something that sends the clearinghouses into paroxysms of rage. But it makes sense to do that. And not just to reveal to the marketplace the potential liquidity demands that these methodologies can create, thereby allowing them to prepare accordingly. But to permit clearing participants to estimate their exposure to CCPs.

Clearing member exposure to CCPs depends on the likelihood that initial margins are sufficient to cover losses. Estimation of this exposure requires CMs to be able to evaluate margin calculations under a variety of market scenarios. If CCPs keep their methodologies secret, this is not possible. Discriminating choice among CCPs also requires market participants to understand margin costs and exposure under different scenarios. Such choice is not possible if CCPs keep secret their calculations.
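Here, sketched in Python, is the calculation a clearing member would like to be able to run, with a placeholder margin function standing in for the CCP’s model; that placeholder is precisely the thing a CM cannot write down if the methodology is kept secret.

import numpy as np

def ccp_initial_margin(position, price, vol, z=2.33, horizon_days=2):
    # Placeholder for the CCP's (undisclosed) margin model: a simple volatility-scaled
    # charge covering a 99% move over the liquidation horizon. Illustrative only.
    return abs(position) * price * vol * z * np.sqrt(horizon_days)

def exposure_under_scenarios(position, price, vol, shocks):
    # A surviving member's exposure is the shortfall of IM relative to the loss a
    # defaulting member's portfolio suffers under each scenario.
    im = ccp_initial_margin(position, price, vol)
    losses = -position * price * np.array(shocks)       # mark-to-market loss on the position
    return np.maximum(losses - im, 0.0)

shocks = [-0.02, -0.05, -0.10, -0.20]                   # scenario returns
print(exposure_under_scenarios(position=1_000_000, price=50.0, vol=0.015, shocks=shocks))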

CCPs are the beneficiaries of clearing mandates. Due to margin spirals and other feedback effects, margin calculations have external effects. There is therefore a strong efficiency case favoring disclosure to mitigate the externality, and any commercial/competitive inconvenience CCPs suffer as a result is more than compensated for by the fact that government mandates force huge quantities of business their way.

Another story that has come to my attention is that RBS is cutting back its rate clearing business, in large part due to the substantial capital commitment required, and the operational overhead.

This is another long-time SWP theme. The regulatory burdens of being a clearing member create scale economies that will result-and are resulting-in substantial consolidation of the clearing business. Thus, the systemic risks associated with clearing arise not only because of the concentration of risk in CCPs, but also because of the concentration of risks in a dwindling number of clearing firms that participate in multiple CCPs. Concentration of risks in a small number of CMs is, in my view, actually more systemically worrisome than concentration of risks in a small number of CCPs. Indeed, it is precisely the concentration of risks in CMs that makes failure of a systemically important CCP more likely.

Recall the good old days, when Gensler fought to reduce the minimum capital requirement for CMs to $25 million in order to spur competition in the supply of clearing services? Good times, good times. Little did he recognize that the other myriad burdens of Frankendodd and Emir would inevitably lead to consolidation, making the minimum capital requirement irrelevant.

But this was only one of Gensler’s delusions (or was it lies?) about clearing. I was therefore pleased, and admittedly somewhat shocked, to see his (interim) replacement, Mark Wetjen, (implicitly) call bull on Gensler’s Panglossian propaganda on clearing:

He made an interesting and refreshingly blunt departure from the superseded Gensler script, by referring to Clearing Houses as potential sources of systemic risk.

“A clearinghouse’s failure to adhere to rigorous risk management practices established by the Commission’s regulations, now more than ever, could have significant economic consequences.”

His predecessor’s evangelical belief in CCPs as universal risk-mitigants, refused to countenance the heresy that central clearing may at best merely transfer credit risk, and may actually result in concentration of and increase in systemic risk. Fundamentalism should have no place in regulation, especially the more fundamental reforms; Wetjen’s implied recognition that a central pillar of the Dodd-Frank reforms is open for objective discussion, represents an important and consequential change in the Agency’s culture and governance.

That last part is the opinion of Nick Railton-Edwards (a somewhat Pythonesque handle, eh?), who wrote the piece, rather than Wetjen. (He sounds like a like-minded, not to say right-minded, bloke.) But it is a realistic characterization of the implications of Wetjen’s remarks. I would add that this evangelism is exactly what I hammered Gensler for repeatedly in 2009-2013: Indeed, I repeatedly used the term evangelist to refer to Gensler and his allies. (And that hammering is why he banned me from the CFTC building-something that I have on unimpeachable authority.)

The sad thing about all this is that all of these things were foreseeable before legislators and regulators* went all in on clearing as The Solution. Certain Cassandras did foresee it. But now this is where we are, and some adults like the BofE and Wetjen are trying to mitigate the dangers that this rash and thoughtless plunge created.

Would that this had occurred at the front end of the process rather than the back. Better late than never, perhaps. But better early than late.

*Timmah! was Gensler’s partner in crime on this. Geithner has just released his memoirs, and is flogging the book. I will give him another flogging in due course. For old times’ sake.


March 29, 2014

Margin Sharing: Dealer Legerdemain, or, That’s Capital, Not Collateral.

Concerns about the burdens of posting margins on OTC derivatives, especially posting by clients who tend to have directional positions, have led banks to propose “margin sharing.”  This is actually something of a scam.  I can understand the belief that margin requirements resulting from Frankendodd and Emir are burdensome, and need to be palliated, but margin sharing is being touted in an intellectually dishonest way.

The basic idea is that under DFA and Emir, both parties have to post margin.  Let’s say A and B trade, and both have to post $50mm in initial margins.  The level of margins is chosen so that the “defaulter (or loser) pays”: that is, under almost all circumstances, the losses on a defaulted position will be less than $50mm, and the defaulter’s collateral is sufficient to cover the loss.  Since either party may default, each needs to post the $50mm margin to cover losses in the event it turns out to be the loser.

But the advocates of margin sharing say this is wasteful, because only one party will default.  So the $50mm posted by the firm that doesn’t end up defaulting is superfluous.  Instead, just have the parties post $25mm each, leaving $50mm in total, which according to the advocates of margin sharing, is what is needed to cover the cost of default.  Problem solved!

But notice the sleight of hand here.  Under the loser pays model, all the $50mm comes out of the defaulter’s margin: the defaulter pays,  the non-defaulter receives all that it is owed, and makes no contribution from its own funds.  Under the margin sharing model, the defaulter may pay only a fraction of the loss, and the non-defaulter may use some of its $25mm contribution to make up the difference.   Both defaulter and non-defaulter pay.
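The arithmetic, using the numbers above (all figures in $mm), looks like this. A toy Python comparison:

def loser_pays(loss, defaulter_margin=50.0):
    # Both sides post full IM; the defaulter's own margin absorbs the loss.
    from_defaulter = min(loss, defaulter_margin)
    shortfall_to_survivor = max(loss - defaulter_margin, 0.0)
    return from_defaulter, shortfall_to_survivor

def margin_sharing(loss, each_posts=25.0):
    # Each side posts half; losses beyond the defaulter's 25 eat into the survivor's
    # own contribution, i.e., the survivor is partly self-insuring with what is
    # really capital, not collateral.
    from_defaulter = min(loss, each_posts)
    from_survivor = min(max(loss - each_posts, 0.0), each_posts)
    uncovered = max(loss - 2 * each_posts, 0.0)
    return from_defaulter, from_survivor, uncovered

for loss in (20.0, 40.0, 60.0):
    print(f"loss {loss:>4}: loser-pays {loser_pays(loss)}  margin-sharing {margin_sharing(loss)}")

Under loser pays, a $40mm loss is covered entirely out of the defaulter’s margin; under margin sharing, $15mm of it comes out of the survivor’s own pocket.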

This is fundamentally different from the loser pays model.  In essence, the shared margin is a combination of collateral and capital.  Collateral is meant to cover a defaulter’s market losses.  Capital permits the non-defaulter to absorb a counterparty credit loss.  Margin sharing essentially results in the holding of segregated capital dedicated to a particular counterparty.

I am not a fan of defaulter pays.  Or to put it more exactly, I am not a fan of mandated defaulter pays.  But it is better to confront the problems with the defaulter pays model head on, rather than try to circumvent it with financial doubletalk.

Counterparty credit issues are all about the mix between defaulter pays and non-defaulter pays.  Between collateral and capital.  DFA and Emir mandate a corner solution: defaulter pays.  It is highly debatable (but lamentably under-debated) whether this corner solution is best.  But it is better to have an open discussion of this issue, with a detailed comparison of the costs and benefits of the alternatives.  The margin sharing proposal blurs the distinctions, and therefore obfuscates rather than clarifies.

Call a spade a spade. Argue that there is a better mix of collateral and capital.  Argue that segregated counterparty-specific capital is appropriate.  Or not: the counterparty-specific, segregated nature of the capital in margin sharing seems for all the world to be a backhanded, sneaky way to undermine defaulter pays and move away from the corner solution.  Maybe counterparty-specific, segregated capital isn’t best: but maybe just a requirement based on a  firm’s aggregate counterparty exposures, and which doesn’t silo capital for each counterparty, is better.

Even if the end mix of capital and collateral that would result from margin sharing is better than the mandated solution, such ends achieved by sneaky means lead to trouble down the road.  It opens the door for further sneaky, ad hoc, and hence poorly understood, adjustments to the system down the line.  This increases the potential for rent seeking, and for the abuse of regulator discretion, because there is less accountability when policies are changed by stealth.  (Obamacare, anyone?)  Moreover, a series of ad hoc fixes to individual problems tends to lead to an incoherent system that needs reform down the road-and which creates its own systemic risks.  (Again: Obamacare, anyone?)  Furthermore, the information produced in an honest debate is a public good that can improve future policy.

In other words, a rethink on capital vs. collateral is a capital idea.  Let’s have that rethink openly and honestly, rather than pretending that things like margin sharing are consistent with the laws and regulations that mandate margins, when in fact they are fundamentally different.


March 11, 2014

CCP Insurance for Armageddon Time

Matt Leising has an interesting story in Bloomberg about a consortium of insurance companies that will offer an insurance policy to clearinghouses that will address one of the most troublesome issues CCPs face: what to do when the waterfall runs dry.  That is, who bears any remaining losses after the defaulters’ margins, defaulters’ default fund contributions, CCP capital, and non-defaulters’ default fund contributions (including any top-up obligation) are all exhausted.

Proposals include variation margin haircuts, and initial margin haircuts.  Variation margin haircuts would essentially reduce the amount that those owed money on defaulted contracts would receive, thereby mutualizing default losses among “winners.”  Initial margin haircuts would share the losses among both winners and losers.

Given that the “winners” include many hedgers who would have suffered losses on other positions, I’ve always found variation margin haircutting problematic: it would reduce payoffs precisely in those states of the world in which the marginal utility of those payoffs is particularly high.  But that has been the industry’s preferred approach to this problem, though it has definitely not been universally popular, to say the least.  Distributive battles are never popularity contests.

This is where the insurance concept steps in.  The insurers will cover up to $6-$10 billion in losses (across multiple CCPs) once all other elements of the default waterfall-including non-defaulters’ default fund contributions and CCP equity-are exhausted.  This will sharply limit, and eliminate in all but the most horrific scenarios, the necessity of mutualizing losses among non-clearing members via variation or initial margin haircutting.
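Schematically, with made-up layer sizes (in $mm) and the insurance layer slotted in where the proposal puts it:

def apply_waterfall(loss, layers):
    # Run a default loss through an ordered list of (name, size) layers and report
    # how much each absorbs and what, if anything, is left over.
    remaining = loss
    used = []
    for name, size in layers:
        hit = min(remaining, size)
        used.append((name, hit))
        remaining -= hit
    return used, remaining

# Layer sizes are illustrative, not any actual CCP's figures.
waterfall = [
    ("defaulter's initial margin", 500),
    ("defaulter's default fund contribution", 100),
    ("CCP capital", 150),
    ("non-defaulters' default fund (incl. top-ups)", 1500),
    ("insurance layer", 6000),
]

used, residual = apply_waterfall(9000, waterfall)
for name, hit in used:
    print(f"{name:<45} absorbs {hit:>5}")
print(f"left over for VM/IM haircutting: {residual}")   # nonzero only in the most extreme scenarios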

Of course this sounds great in concept.  But one thing not discussed in the article is price.  How expensive will the coverage be?  Will CCPs find it sufficiently affordable to buy, or will they decide to haircut margins in some way instead because that is cheaper?

As I say in Matt’s article, although this proposal addresses one big headache regarding CCPs in extremis, it does not address another major concern: the wrong way risk inherent in CCPs.  Losses are likely to hit the default fund in crisis scenarios, which is precisely when the CCP member firms (banks mainly) are least able to take the hit.

It would have been truly interesting if insurers had been willing to share losses with CCP members.  That would have mitigated the wrong way risk problem.  But the insurers were evidently not willing to do that.   This is likely because they are concerned about the moral hazard problems.  Members would have less incentive to mitigate risk if some of that risk is offloaded onto insurers who don’t influence CCP risk management and margining the way member firms do.

In sum, the insurers are taking on the risk in the extreme tail.  This of course raises the question of whether they are able to bear such risk, as it is likely to crystallize precisely during Armageddon Time. The consortium attempts to allay those concerns by pointing out that they have no derivatives positions (translation: We are not AIG!!!). But there is still reason to ponder whether these companies will be solvent during the wrenching conditions that will exist when potentially multiple CCPs blow through their entire waterfalls.

Right now this is just a proposal and only the bare outlines have been disclosed.  It will be fascinating to see whether the concept actually sells, or whether CCPs will figure it is cheaper to offload the risk in the extreme tail on their customers rather than on insurance companies in exchange for a premium.

I’m also curious: will Buffett participate?  He’s the tail risk provider of last resort, and his (hypocritical) anti-derivatives rhetoric aside, this seems like it’s right down his alley.


March 4, 2014

Derivatives Priorities in Bankruptcy: A Hobson’s Choice?

And now for something completely different . . . finance.  (More Russia/Ukraine later.)

The Bank of England wants to put a stay on derivatives contracts entered into by an insolvent bank, thereby negating some of the priorities in bankruptcy accorded to derivatives counterparties:

The U.K. central bank wants lenders and the International Swaps and Derivatives Association Inc., an industry group, to agree to temporarily halt claims on banks that become insolvent and need intervention, Andrew Gracie, executive director of the BOE’s special resolution unit, said in an interview.

“The entry of a bank into resolution should not in itself be an event of default which allows counterparties to start accelerating contracts and triggering cross-defaults,” Gracie said. “You would get what you saw in Lehmans — huge amounts of uncertainty and an uncontrolled cascade of closeouts and cross defaults in the market.”

The priority status of derivatives trades is problematic at best: although it increases the fraction of the claims that derivatives counterparties receive from a bankrupt bank, this effect is primarily redistributive.  Other creditors receive less.  On the plus side, in the absence of priorities, counterparties could be locked into contracts entered into as hedges that are of uncertain value and which may not pay off for some time.  This complicates the task of replacing the hedge entered into with the bankrupt bank.   On balance, given the redistributive nature of priorities, and the fact that some of those who lose due to the fact that derivatives are privileged may be systemically important or may run, there is something to be said for this change.

But the redistributive nature of priorities makes me skeptical that this will really have that much effect on whether a bank gets into trouble in the first place.  In particular, since runs and liquidity crises are what really threatens the stability of banks, the change of priorities likely will mainly just affect who has the incentive to run on a troubled institution, without affecting all that much the overall probability of a run.

Under the current set of priorities, derivatives counterparties have an incentive to stick longer with a troubled bank, because in the event it becomes insolvent they have a priority claim.  But this makes other claimants on a failing bank more anxious to run, because they know that if the bank does fail derivatives counterparties will get a lion’s share of the remaining assets.  Reducing the advantages that the derivatives counterparties have makes them more likely to run and pull value from the failing firm, whereas other claimants are less likely to run than under the current regime.  (Duffie’s book on the failure of an OTC derivatives dealer shows how derivatives counterparties can effectively run.)

In other words, in terms of affecting the vulnerability of a bank to a destabilizing run, the choice of priorities is something of a Hobson’s choice.  It affects mainly who has an incentive to run, rather than the likelihood of a run over all.

The BoE’s initiative seems to be symptomatic of something I’ve criticized quite a bit over the past several years: the tendency to view derivatives in isolation.  Triggering of cross-defaults and accelerating contracts is a problem because they can hasten the collapse of a shaky bank.  So fix that, and banks become more stable, right? But maybe not, because it changes the behavior and decisions of others who can also bring down a financial institution. This is why I am skeptical that these sorts of changes will affect the stability of banks much one way or the other.  They might affect where a fire breaks out, but not the likelihood of a fire overall.


January 28, 2014

Were the Biggest Banks Playing Brer Rabbit on the Clearing Mandate, and Was Gensler Brer Fox?

Filed under: Clearing,Derivatives,Economics,Exchanges,Financial crisis,Politics,Regulation — The Professor @ 10:25 pm

One interesting part of the Cœuré speech was his warning that the clearing business was coming to be dominated by a few large banks, that are members of multiple CCPs:

Moreover, it appears that for many banks, indirect access is their preferred way to get access to clearing services so as to comply with the clearing obligation. Client clearing seems thus to be dominated by a few large global intermediaries. A factor contributing to this concentration may be higher compliance burdens, where only the very largest of firms are capable of taking on cross-border activity. This concentration creates a higher degree of dependency on this small group of firms.

There are also concerns about client access to this limited number of firms offering client clearing services. For example, there is some evidence of clearing firms “cherry picking” clients, while other end-users are commercially unattractive customers and hence unable to access centrally cleared markets.

These are all developments that I believe the international regulatory community may wish to carefully monitor and act on as and when needed.

And wouldn’t you know.  He supports a longstanding SWP theme: That Frankendodd and EMIR and Basel create a huge regulatory burden that is essentially a fixed cost.  This increase in fixed costs raises scale economies, and this inevitably leads to an increase in concentration-and arguably a reduction in competition-in the provision of clearing services.
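The economics here is nothing more than average cost with a fixed component. A toy calculation with invented numbers:

# A fixed annual compliance/regulatory overhead plus a constant marginal cost per
# cleared trade. The numbers are invented purely for illustration.
FIXED_COMPLIANCE_COST = 50_000_000     # $/year, the same for every clearing firm
MARGINAL_COST_PER_TRADE = 2.0          # $/trade

for trades_per_year in (100_000, 1_000_000, 10_000_000):
    avg_cost = FIXED_COMPLIANCE_COST / trades_per_year + MARGINAL_COST_PER_TRADE
    print(f"{trades_per_year:>10,} trades/yr -> average cost ${avg_cost:,.2f} per trade")

With these numbers the smallest clearer’s average cost per trade is roughly seventy times the largest’s; that gap, not any CCP capital threshold, is what drives consolidation.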

It now seems rather quaint that there was a debate over whether CCPs should be required to lower the minimum capital threshold for membership to $50 million.  That’s not the barrier to entry/participation.  It’s the regulatory overhead.

It’s actually an old story.  I remember a Maloney and McCormick paper from the 80s-hell, maybe even the late 70s-about the effects of the regulation of particulates in textile factories (if I recall).  The cost of complying with the regulation was essentially fixed, so the law favored big firms: it raised the costs of their smaller rivals, led to their exit, and resulted in higher prices from which the big firms profited.  Similarly, I recall that several papers by the late Peter Pashigian (a member of my PhD committee) found that environmental regulations favored large firms.

The Cœuré speech suggests this may be happening here: note the part about client access to a “limited number of clearing firms.”

And it’s not just pipsqueaks that are exiting the clearing business.  The largest custodian bank-BNY Mellon-is closing up shop:

More banks are expected to follow BNY Mellon’s lead and pull out of client clearing, as flows have concentrated among half a dozen major players following the roll-out of mandatory clearing in the US last year.

The decision of the world’s largest custodian bank to shutter its US clearing unit was the first real indication of how much institutions are struggling with spiralling costs and complexity associated with clearing clients’ swaps trades – a business once viewed as the cash cow of the new regulatory regime.

You might recall that BNY Mellon was one of the firms that complained loudest about the high capital requirements of becoming a member of ICE Trust and LCH.  Again: it’s not the CCP capital requirements that are the issue.  It’s the other substantial cost of providing client clearing services, and regulatory/compliance costs are a big part of that.

Ah yes, another Gensler argument down in flames.  Remember how he constantly told us-lectured us, actually-that Frankendodd would dramatically increase competition in derivatives?  That it would break the dealer hammerlock on the OTC market?

Remember how I called bull?

Whose call looks better now?  Sometimes I wonder if JP Morgan, Goldman, Barclays, etc., weren’t playing the role of Brer Rabbit, and Gensler was playing Brer Fox. For he done trown dem into dat brier patch, sure ’nuff.

Though it must be said that this was not Gensler’s biggest contribution to reducing competition in derivatives markets in the name of increasing competition.  His insane extraterritoriality decisions have fragmented the OTC derivatives markets, with Europeans reluctant to trade with Americans.  The fragmentation of the markets reduces counterparty choice in both Europe and the US, thereby limiting competition.

This is not just a matter of competition.  There are systemic issues involved as well, and these also make a mockery of the Frankendodd evangelists.  They assured the world that Frankendodd and clearing mandates would reduce reliance on a few large, highly interconnected intermediaries in the derivatives markets. That is proving to be another lie, on the order of “if you like your health plan, you can keep your health plan.”  The old system relied on a baker’s dozen or so large, highly interconnected dealers.  The new system will rely on probably a handful or two large, highly interconnected clearing firms.

The most important elements in the clearing system are a small number of major banks that are clearing members at several global CCPs.  The failure or financial distress of any one of these would wreak havoc in the derivatives markets and the clearing mechanism, just as the failure of a major dealer firm would shake the bilateral OTC markets to the core.

Just think about one issue: portability.  If there are only a small number of huge clearing firms, is it really feasible to port the clients of one of them to the few remaining CMs, especially during times of market stress when these might not have the capital to take on a large number of new clients?

What happens then?

I don’t want to think about it: there’s only so much I can handle.

But Cœuré assures us the regulators are on top of it.  Or at least they are thinking about getting on top of it: “the international regulatory community may wish to carefully monitor and act on as and when needed.” “May wish to act as needed.” Sure. Take your time! What’s the hurry? What’s the worry?

I won’t dwell on the  irony of those who advocated the measures that got us into this situation pulling their chins and telling us this might be a matter of concern, especially since they were deaf to warnings made back when they could have avoided leading us down the path that led us to this oh-so-predictable destination.


January 26, 2014

Disconnected About Interconnections: Regulators Still Don’t Get the Systemic Risks in Central Clearing

A board member of the ECB, Benoît Cœuré, gave a speech that discussed “the new risks associated with central clearing.” It is evident that Cœuré is a proponent of central clearing, though it is annoying to see him identify multilateral netting as the main benefit (<holds head in hands>).  But it is good to see yet again that central bankers are aware that central clearing does create new risks, and that regulators must be proactive in addressing them.

The problem is that he overlooks the most important risks.  Reading between the lines, like most regulators, Cœuré focuses on the solvency risks of CCPs, and about policy tools that can limit the probability of CCP insolvency and mitigate the adverse impacts of such an insolvency.

But as I’ve written repeatedly in the past, it’s not the insolvency risk per se that should keep people up at night.  Indeed, the measures taken to address the solvency risk can actually exacerbate the real risk a dramatic expansion of central clearing creates for the financial system: liquidity risk.

Liquidity crises are what threaten to bring down financial systems.  For most financial institutions, there is a connection between liquidity risk and solvency: banks become illiquid because (in a world of imperfect information) people believe they might become insolvent.  Maturity mismatches plus imperfect information plus possibility of insolvency combine to create liquidity crises.

CCPs don’t have the maturity mismatches, and they aren’t leveraged.  They cannot experience liquidity crises in the same way banks can.  The direct liquidity risk of CCPs is related to their ability to turn collateral into cash in the event of a member default.

But as I’ve said over and over, clearing affects the needs for liquidity by market participants.  Central clearing can be a source of, or accelerant of, liquidity crises.  Big price moves lead to big margin calls lead to spikes in liquidity demand. These are most likely to occur during periods of financial stress, and can greatly exacerbate that stress.  Moreover, failure of a CCP is most likely to occur due to the inability of traders to fund margin calls due to the shortage of liquidity.   This old article by Andrew Brimmer discusses two episodes I’ve analyzed on several occasions-the Hunts in silver and Black Monday-and shows how it is liquidity/credit/funding of margin calls for CCPs that can create stresses in the financial system.
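The mechanics are brutally simple: variation margin is the day’s mark-to-market move, payable in cash on a tight schedule, so the cash that must flow through the CCP scales one-for-one with the size of the price move. A toy Python example, with invented positions and price moves:

# Member positions net to zero across the CCP (every long faces a short).
positions = {                 # member -> net futures position, in contracts
    "member_A":  20_000,
    "member_B": -15_000,
    "member_C":  -5_000,
}
CONTRACT_SIZE = 1_000         # units per contract

def variation_margin_calls(positions, price_move):
    # Cash each member must pay the CCP today (positive = pays, negative = is owed).
    return {m: -pos * CONTRACT_SIZE * price_move for m, pos in positions.items()}

for move in (0.5, 2.0, 8.0):  # a normal day, a bad day, a crisis day ($ per unit)
    calls = variation_margin_calls(positions, move)
    must_pay = sum(c for c in calls.values() if c > 0)
    print(f"price move {move:>4}: cash that must move through the CCP today = ${must_pay:,.0f}")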

This is where the systemic risk of clearing arises.  But the subject is totally absent from Cœuré’s speech.  Which is worrisome.

There is also the fallacy of composition problem.  The measures that Cœuré advocates to make CCPs stronger do NOT necessarily make the system stronger.  Strengthening CCPs can actually exacerbate the liquidity problems that clearing causes during a crisis.  The CCP may survive, due to these measures, but the stresses communicated to the rest of the system (and the stress has to go somewhere) can cause other institutions to fail.

This is what scares the bejeezus out of me.  Regulators don’t seem to get the fallacy of composition, and aren’t focused on the liquidity implications of greatly expanded central clearing.

These fears are heightened by reading this DTCC report about collateral and collateral management.

It contains this heading that should make every central banker and financial regulator soil his armor:

Margin Call activity to increase By up to 1000%

Then there’s this:

Operational Capabilities and Settlement Exceptions Management: The potential ten-fold increase in margin call volumes, and the resulting complexity due to market changes, could overwhelm the current operational processes and system infrastructures within banks, buy-side firms and their administrators. As a result, firms will need to invest in technology and also reengineer the settlement, exceptions management and dispute resolution processes in place today. According to a 2011 Deloitte paper, investments in operations required to build and sustain advanced collateral capabilities is estimated at upwards of $50 million annually for top-tier banks.

Be afraid.  Be very, very, very afraid.

The dramatic increase in the scope of clearing substantially increases the operational complexity of the system.  More importantly, it increases the system’s operational rigidity, because cash has to flow quickly, and according to a very precise schedule.  From client to FCM to CCP to FCM to client.  Any failures in that chain can bring down the entire system.
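Schematically, the chain looks like this: one missed payment stalls everything downstream. The names and the failure point are hypothetical; the point is the tight coupling.

# The margin-flow chain described above: client -> FCM -> CCP -> FCM -> client.
# Each leg can only be paid once the upstream leg has arrived.
chain = ["client_1", "FCM_A", "CCP", "FCM_B", "client_2"]
failed = {"FCM_A"}            # suppose one node misses its payment deadline

def settle(chain, failed):
    completed = []
    for payer, payee in zip(chain, chain[1:]):
        if payer in failed:
            print(f"{payer} fails to pay {payee}: every downstream leg is blocked")
            break
        completed.append((payer, payee))
        print(f"{payer} pays {payee} on time")
    return completed

settle(chain, failed)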

I say again.  Systemic risk in financial systems is largely due to the fact that these systems are tightly coupled.  Clearing increases tight coupling.  This almost certainly increases systemic risk.

More players have to move more money in more jurisdictions as a result of clearing mandates.  As the DTCC report makes plain, this is a new responsibility for many of these players, and they do not have the capability or experience or systems.  Greater operational complexity involving more parties, many of whom are relatively inexperienced, creates grave risks in a tightly coupled financial system.

The irony of all this is that the evangelists of clearing, including notably Timmy! and GiGi in the US, argued that central clearing would reduce the interconnectedness of the financial markets.  Wrong. Wrong. Wrong. Wrong.

It reconfigures the interconnections.  The entire collateral management system the DTCC document describes is a dense web of interconnections.  And to reiterate: under central clearing (and the mandate to margin and mark-to-market uncleared derivatives) these connections (couplings) are tighter than in the old system.  Both old and new systems are highly interconnected.  The connections in the new system are tighter, and are more vulnerable to failure as a result.

I’ll tell you what makes me have to go change my armor: the regulators seem oblivious to this.  To the extent they are focused on collateral, they are focused on initial margin. No! It is variation margin calls during periods of large market movements that will threaten the stability of the system. Now there will be more such calls–1000 pct more, according to DTCC–and more participants are involved, meaning that there are more links and nodes.  The tightly coupled nature of the system means that the breakdown of a few links can bring down the entire thing.

In other words, there seems to be a disconnect on interconnections, most specifically on how clearing has not reduced interconnections but reshaped them, and how the new system’s interconnections are much more rigid, tightly coupled, and time-sensitive.

Not to pick on Cœuré: his speech is just one example of that disconnect.  The thing is that most speeches by regulators and central bankers exhibit the same disconnect.  Target fixation on making CCPs invulnerable does not address the main systemic risk that an expansion of clearing creates.  That systemic risk involves the financial/funding and operational risks of meeting large margin calls in a stressed environment on a precise time schedule.

It’s about liquidity, liquidity, liquidity.  Clearing transforms credit/solvency risk into liquidity risk.  The operational aspects of clearing-the need to move cash and collateral around in large amounts on a tight time schedule-affect the demand for liquidity, and also create points of failure that can cause the liquidity mechanism to seize up, threatening the entire system.

This is what should be the focus, but I’m seeing precious little evidence that it is.  Someday we’ll pay the price.


