Streetwise Professor

October 17, 2017

Financial Regulators Are Finally Grasping the Titanic’s Captain’s Mistake. That’s Something, Anyways

Filed under: Clearing,Commodities,Derivatives,Economics,Financial crisis,Regulation — The Professor @ 7:11 pm

A couple of big clearing stories this week.

First, Gary Cohn, Director of the National Economic Council (and ex-Goldmanite–if there is such a thing as “ex”, sorta like the Cheka), proclaimed that CCPs pose a systemic risk, and the move to clearing post-crisis has been overdone: “Like every great modern invention, it has its limits, and I think we have expanded the limits of clearing probably farther beyond their useful existence.” Now, Cohn’s remarks are somewhat Trump-like in their clarity (or lack thereof), but they seem to focus on one type of liquidity issue: “we get less transparency, we get less liquid assets in the clearinghouse, it does start to resonate to me to be a new systemic problem in the system,” and “It’s the things we can’t liquidate that scare me.”

So one interpretation of Cohn’s statement is that he is worried that as CCPs expand, perforce they end up expanding what they accept as collateral. During a crisis in particular, these dodgier assets become very difficult to sell to cover the obligations of a defaulter, putting the CCP at risk of failure.

Another interpretation of “less liquid assets” and “things we can’t liquidate” is that these expressions refer to the instruments being cleared. A default that leaves a CCP holding an unmatched book of illiquid derivatives in a stressed market will make it difficult for the CCP to restore a matched book, putting it at greater risk of failure.

These are both serious issues, and I’m glad to see them being aired (finally!) at the upper echelons of policymakers. Of course, these do not exhaust the sources of systemic risk in CCPs. We are nearing the 30th anniversary of the 1987 Crash, which revealed to me in a very vivid, experiential way the havoc that frequent variation margining can wreak when prices move a lot. This is the most important liquidity risk inherent in central clearing–and in the mandatory variation margining of uncleared derivatives.

So although Cohn did not address all the systemic risk issues raised by mandatory clearing, it’s past time that somebody important raised the subject in a very public and dramatic way.

Commenter Highgamma asked me whether this was from my lips to Cohn’s ear. Well, since I’ve been sounding the alarm for over nine years (with my first post-crisis post on the subject appearing 3 days after Lehman), all I can say is that sound travels very slowly in DC–or common sense does, anyways.

The other big clearing story is that the CFTC gave all three major clearinghouses passing grades on their just-completed liquidity stress tests: “All of the clearing houses demonstrated the ability to generate sufficient liquidity to fulfill settlement obligations on time.” This relates to the first interpretation of Cohn’s remarks, namely, that in the event that a CCP had to liquidate defaulters’ (plural) collateral in order to pay out daily settlements to those with gains, it would be able to do so.

I admit to being something of a stress test skeptic, especially when it comes to liquidity. Liquidity is a non-linear thing. There are a lot of dependencies that are hard to model. In a stress test, you look at some extreme scenarios, but those scenarios represent a small number of draws from a radically uncertain set of possibilities (some of which you probably can’t even imagine). The things that actually happen are usually way different than what you game out. And given the non-linearities and dependencies, I am skeptical that you can be confident in how liquidity will play out in the scenarios you choose.

Further, as I noted above, this problem is only one of the liquidity concerns raised by clearing, and not necessarily the biggest one. But the fact that the CFTC is taking at least some liquidity issues seriously is a good thing.

The Gensler-era CFTC, and most of the US and European post-crisis financial regulators, imagined that the good ship CCP was unsinkable, and accordingly steered a reckless course heedless of any warning. You know, sort of like the captain of the Titanic did–and that is a recipe for disaster. Fortunately, now there is a growing recognition in policy-making circles that there are indeed financial icebergs out there that could sink clearinghouses–and take much of the financial system down with them. That is definitely an advance. There is still a long way to go, and methinks that policymakers are still too sanguine about CCPs, and still too blasé about the risks that lurk beneath the surface. But it’s something.


October 12, 2017

Trump Treasury Channels SWP

SWP doesn’t work for the Trump Treasury Department, and is in fact neuralgic to the idea of working for any government agency. Yet the Treasury’s recent report on financial regulatory reform is very congenial to my thinking, on derivatives related issues anyways. (I haven’t delved into the other portions.)

A few of the greatest hits.

Position limits. The Report expresses skepticism about the existence of “excessive speculation.” Therefore, it recommends limiting the role of position limits to reducing manipulation during the delivery period. Along those lines, it recommends spot-month-only limits, because that is “where the risk of manipulation is greatest.” It also says that limits should be designed so as not to unduly burden hedgers. I made both of these points in my 2011 comment letter on position limits, and in the paper submitted in conjunction with ISDA’s comment letter in 2014. They are also reflected in the report on the deliberations of the Energy and Environmental Markets Advisory Committee that I penned (to accurately represent the consensus of the Committee) in 2016–much to Lizzie Warren’s chagrin.

The one problematic recommendation is that spot month position limits be based on “holistic” definitions of deliverable supply–e.g., the world gold market. This could have extremely mischievous effects in manipulation litigation: such expansive and economically illogical notions of deliverable supplies in CFTC decisions like Cox & Frey make it difficult to prosecute corners and squeezes.

CFTC-SEC Merger. I have ridiculed this idea for literally decades–starting when I was yet but a babe in arms 😉 It is a hardy perennial in DC, which I have called a solution in search of a problem. (I think I used the same language in regards to position limits–this is apparently a common thing in DC.) The Treasury thinks little of the idea either, and recommends against it.

SEFs. I called the SEF mandate “the worst of Frankendodd” immediately upon the passage of the law in July 2010. The Treasury Report identifies many of the flaws I did, and recommends a much less restrictive requirement than GiGi imposed in the CFTC SEF rules. I also called out the Made Available For Trade rule as the dumbest part of the worst of Frankendodd, and Treasury recommends eliminating these flaws as well. Finally, four years ago I blogged about the insanity of the dueling footnotes, and Treasury recommends “clarifying or eliminating” footnote 88, which threatened to greatly expand the scope of the SEF mandate.

CCPs. Although it does not address the main concern I have about the clearing mandate, Treasury does note that many issues regarding systemic risks relating to CCPs remain unresolved. I’ve been on about this since before DFA was passed, warning that the supposed solution to systemic risk originating in derivatives markets created its own risks.

Uncleared swap margin. I’ve written that uncleared swap margin rules were too rigid and posed risks. I have specifically written about the 10-day margining period rule as being too crude and poorly calibrated to risk: Treasury agrees. Similarly, it argues for easing affiliate margin rules, reducing the rigidity of the timing of margin payments (which will ease liquidity burdens), and narrowing the overbroad application of the rule to entities that do not pose systemic risks.

De minimis threshold for swap dealers. I’m on the record saying that using notional amounts to set the de minimis threshold for swap dealer registration made no sense, given the wide variation in riskiness of different swaps of the same notional value. I am also on the record that the $8 billion threshold sweeps in firms that do not pose systemic risks, and that a reduced threshold of $3 billion would be even more ridiculously overinclusive. Treasury largely agrees.
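To put a number on the notional-vs-risk point, here is a toy calculation–hypothetical figures and a crude DV01-style approximation, not a real pricing model–showing two swaps with identical notionals but wildly different interest rate risk:

```python
# Illustrative only: why notional value is a poor proxy for risk.
# Hypothetical figures; a crude DV01-style approximation, not a real pricing model.

def approx_dv01(notional, duration_years):
    """Rough dollar sensitivity to a 1 basis point rate move:
    notional * duration * 0.0001."""
    return notional * duration_years * 0.0001

# Two swaps with identical $100mm notionals...
short_swap = approx_dv01(100_000_000, 0.5)    # 6-month swap
long_swap = approx_dv01(100_000_000, 30.0)    # 30-year swap

print(f"6-month swap DV01: ${short_swap:,.0f}")    # $5,000 per bp
print(f"30-year swap DV01: ${long_swap:,.0f}")     # $300,000 per bp
print(f"Risk ratio: {long_swap / short_swap:.0f}x on the same notional")
```

Two dealers with identical notional books can differ in riskiness by an order of magnitude or more, which is precisely why a notional-based registration threshold is a blunt instrument.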

The impact of capital rules on clearing. One concern I’ve raised is that various capital rules, in particular those that include initial margin amounts in determining liquidity ratios for banks, and hence their capital requirements, make no economic sense, and unnecessarily drive up the costs banks/FCMs incur to clear for clients. This is contrary to the purpose of clearing mandates, and moreover, has contributed to increased concentration among FCMs, which is in itself a systemic risk. Treasury recommends “the deduction of initial margin for centrally cleared derivatives from the SLR denominator.” Hear, hear.

I could go into more detail, but these are the biggies. All of these recommendations are very sensible, and with the one exception noted above, in the Title VII-related section I see no nonsensical recommendations. This is actually a very thoughtful piece of work that, if followed, will undo some of the most gratuitously burdensome parts of Frankendodd, and the Gensler CFTC’s embodiment (or attempts to embody) of those parts in rules.

But, of course, on the Lizzie Warren left and in the chin pulling mainstream media, the report is viewed as a call to gut essential regulations. Gutting stupid is actually a good idea, and that’s what this report proposes. Alas, Lizzie et al are incapable of even conceiving that regulations could possibly be stupid.

Hamstrung by inane Russia investigations and a recalcitrant (and largely gutless and incompetent) Republican House and Senate, the Trump administration has accomplished basically zero on the legislative front. Its only real achievement so far is to start–and just to start–the rationalization and in some cases termination (with extreme prejudice) of Obama-era regulation. If implemented, the recommendations in the Treasury Report (at least insofar as Title VII of DFA is concerned) would represent a real achievement. (As would rollbacks or elimination of the Clean Power Plan, Net Neutrality, and other 2009-2016 inanity.)

But of course this will require painstaking efforts by regulatory agencies, and will have to be accomplished in the face of an unrelentingly hostile media and the lawfare efforts of the regulatory class. But at least the administration has laid out a cogent plan of action, and is getting people in place who are dedicated to putting that plan into action (e.g., Chris Giancarlo at CFTC). So let’s get on with it.

 

 

 


July 6, 2017

SWP Acid Flashback, CCP Edition

Filed under: Clearing,Derivatives,Economics,Financial crisis,Regulation — The Professor @ 6:09 pm

Sometimes reading current news about clearing specifically and post-crisis regulation generally triggers acid flashbacks to old blog posts. Like this one (from 2010!):

[Gensler’s] latest gurgling appears on the oped page of today’s WSJ.  It starts with a non-sequitur, and careens downhill from there.  Gensler tells a story about his role in the LTCM situation, and then claims that to prevent a recurrence, or a repeat of AIG, it is necessary to reduce the “cancerous interconnections” (Jeremiah Recycled Bad Metaphor Alert!) in the financial system by, you guessed it, mandatory clearing.

Look.  This is very basic.  Do I have to repeat it?  CLEARING DOES NOT ELIMINATE INTERCONNECTIONS AMONG FINANCIAL INSTITUTIONS.  At most, it reconfigures the topology of the network of interconnections.  Anyone who argues otherwise is not competent to weigh in on the subject, let alone to have regulatory responsibility over a vastly expanded clearing system.  At most you can argue that the interconnections in a cleared system are better in some ways than the interconnections in the current OTC structure.  But Gensler doesn’t do that.   He just makes unsupported assertion after unsupported assertion.

So what triggered this flashback? This recent FSB (no! not Putin!)/BIS/IOSCO report on . . . wait for it . . . interdependencies in clearing. As summarized by Reuters:

The Financial Stability Board, the Committee on Payments and Market Infrastructures, the International Organization of Securities Commissioners and the Basel Committee on Banking Supervision, also raised new concerns around the interdependency of CCPs, which have become crucial financial infrastructures as a result of post-crisis reforms that forced much of the US$483trn over-the-counter derivatives market into central clearing.

In a study of 26 CCPs across 15 jurisdictions, the committees found that many clearinghouses maintain relationships with the same financial entities.

Concentration is high with 88% of financial resources, including initial margin and default funds, sitting in just 10 CCPs. Of the 307 clearing members included in the analysis, the largest 20 accounted for 75% of financial resources provided to CCPs.

More than 80% of the CCPs surveyed were exposed to at least 10 global systemically important financial institutions, the study showed.

In an analysis of the contagion effect of clearing member defaults, the study found that more than half of surveyed CCPs would suffer a default of at least two clearing members as a result of two clearing member defaults at another CCP.

“This suggests a high degree of interconnectedness among the central clearing system’s largest and most significant clearing members,” the committees said in their analysis.

To reiterate: as I said in 2010 (and the blog post echoed remarks that I made at ISDA’s General Meeting in San Francisco shortly before I wrote the post), clearing just reconfigures the topology of the network. It does not eliminate “cancerous interconnections”. It merely re-jiggers the connections.

Look at some of the network charts in the FSB/BIS/IOSCO report. They are pretty much indistinguishable from the sccaaarrry charts of interdependencies in OTC derivatives that were bruited about to scare the chillin into supporting clearing and collateral mandates.

The concentration of clearing members is particularly concerning. The report does not mention it, but this concentration creates other major headaches, such as the difficulties of porting positions if a big clearing member (or two) defaults. And the difficulties this concentration would produce in trying to auction off or hedge the positions of the big clearing firms.

Further, the report understates the degree of interconnections, and in fact ignores some of the most dangerous ones. It looks only at direct connections, but the indirect connections are probably more . . . what’s the word I’m looking for? . . . cancerous–yeah, that’s it. CCPs are deeply embedded in the liquidity supply and credit network, which connects all major (and most minor) players in the market. Market shocks that cause big price changes in turn cause big variation margin calls that reverberate throughout the entire financial system. Given the tight coupling of the liquidity system generally, and the particularly tight coupling of the margining mechanism specifically, this form of interconnection–not considered in the report–is most laden with systemic ramifications. As I’ve said ad nauseam: the connections that are intended to prevent CCPs from failing are exactly the ones that pose the greatest threat to the entire system.
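To make the margin mechanics concrete, here is a stylized sketch–entirely hypothetical positions and prices–of how a single large price move becomes same-day cash demands via variation margining. The payments net to zero across the CCP, but the losers must find the cash immediately, which is exactly the liquidity strain at issue:

```python
# Stylized sketch (hypothetical positions and prices) of how one price shock
# turns into same-day cash demands via variation margining.

positions = {          # contracts held by each hypothetical clearing member
    "CM_A":  5_000,    # long
    "CM_B": -3_000,    # short
    "CM_C": -2_000,    # short
}
contract_size = 50     # dollars per point, futures-style
price_move = -80.0     # a large one-day drop, crash-scenario scale

for cm, qty in positions.items():
    vm = qty * contract_size * price_move   # negative = must pay cash today
    side = "pays" if vm < 0 else "receives"
    print(f"{cm} {side} ${abs(vm):,.0f} in variation margin")

# The flows net to zero at the CCP, but the gross demand for immediate
# liquidity is large--and it arrives precisely when liquidity is scarce.
```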

To flash back to another of my past writings: this recent report, when compared to what Gensler said in 2010 (and others, notably Timmy!, were singing from the same hymnal), shows that clearing and collateral mandates were a bill of goods. These mandates were sold on the basis of lies large and small. And the biggest lie–and I said so at the time–was that clearing would reduce the interconnectivity of the financial system. So the FSB/BIS/IOSCO have called bullshit on Gary Gensler. Unfortunately, seven years too late.

 


July 1, 2017

All Flaws Great and Small, Frankendodd Edition

On Wednesday I had the privilege to deliver the keynote at the FOW Trading Chicago event. My theme was the fundamental flaws in Frankendodd–you’re shocked, I’m sure.

What I attempted to do was to categorize the errors. I identified four basic types.

Unintended consequences contrary to the objectives of DFA. This could also be called “counter-intended consequences”–not just unintended, but the precise opposite of the stated intent. The biggest example is, well, related to bigness. If you wanted to summarize a primary objective of DFA, it would be “to reduce the too big to fail problem.” Well, the very nature of DFA means that in some ways it exacerbates TBTF. Most notably, the resulting regulatory burdens actually favor scale, because they impose largely fixed costs. I didn’t mention this in my talk, but a related effect is that increasing regulation leads to greater influence activities by the regulated, and for a variety of reasons this tends to favor the big over the medium and small.

Perhaps the most telling example of the perverse effects of DFA is that it has dramatically increased concentration among FCMs. This exacerbates a variety of sources of systemic risk, including concentration risk at CCPs; difficulties in managing defaulted positions and porting the positions of the customers of troubled FCMs; and greater interconnections across CCPs. Concentration also fundamentally undermines the ability of CCPs to mutualize default risk. It can also create wrong-way risks as the big FCMs are in some cases also sources of liquidity support to CCPs.

I could go on.

Creation of new risks due to misdiagnoses of old risks. The most telling example here is the clearing and collateral mandates, which were predicated on the view that too much credit was extended via OTC derivatives transactions. Collateral and netting were expected to reduce this credit risk.

This is a category error. For one thing, it embodies a fallacy of composition: reducing credit in one piece of an interconnected financial system that possesses numerous ways to create credit exposures does not necessarily reduce credit risk in the system as a whole. For another, even to the extent that reducing credit extended via derivatives transactions reduces overall credit exposures in the financial system, it does so by creating another risk–liquidity risk. This risk is in my view more pernicious for many reasons. One reason is that it is inherently wrong-way in nature: the mandates increase demands for liquidity precisely during those periods in which liquidity supply typically contracts. Another is that it increases the tightness of coupling in the financial system. Tight coupling increases the risk of catastrophic failure, and makes the system more vulnerable to a variety of different disruptions (e.g., operational risks such as the temporary failure of a part of the payments system).

As the Clearing Cassandra I warned about this early and often, to little avail–and indeed, often to derision and scorn. Belatedly regulators are coming to an understanding of the importance of this issue. Fed governor Jerome Powell recently emphasized this issue in a speech, and recommended CCPs engage in liquidity stress testing. In a scathing report, the CFTC Inspector General criticized the agency’s cost-benefit analysis of its margin rules for non-cleared swaps, based largely on its failure to consider liquidity effects. (The IG report generously cited my work several times.)

But these are at best palliatives. The fundamental problem is inherent in the super-sizing of clearing and margining, and that problem is here to stay.

Imposition of “solutions” to non-existent problems. The best examples of this are the SEF mandate and position limits. The mode of execution of OTC swaps was not a source of systemic risk, and was not problematic even for reasons unrelated to systemic risk. Mandating a change to the freely-chosen modes of transaction execution has imposed compliance costs, and has also resulted in a fragmented swaps market: those who can escape the mandate (e.g., European banks trading € swaps) have done so, leading to bifurcation of the market for € swaps, which (a) reduces competition (another counter-intended consequence), and (b) reduces liquidity (also counter-intended).

The non-existence of a problem that position limits could solve is best illustrated by the pathetically flimsy justification for the rule set out in the CFTC’s proposal: the main example the CFTC mentioned is the Hunt silver episode. As I said during my talk, this is ancient history: when do we get to the Trojan War? If anything, the Hunts are the exception that proves the rule. The CFTC also pointed to Amaranth, but (a) failed to show that Amaranth’s activities caused “unreasonable and unwarranted price fluctuations,” and (b) did not demonstrate (unlike the Hunt case) that Amaranth’s financial distress posed any threat to the broader market or any systemic risk.

It is sickly amusing that the CFTC touts that based on historical data, the proposed limits would constrain few, if any, market participants. In other words, an entire industry must bear the burden of complying with a rule that the CFTC itself says would seldom be binding. Makes total sense, and surely passes a rigorous cost-benefit test! Constraining positions is unlikely to affect materially the likelihood of “unreasonable and unwarranted price fluctuations”. Regardless, positions are not likely to be constrained. Meaning that the probability that the regulation reduces such price fluctuations is close to zero, if not exactly equal to zero. Yet there would be an onerous and ongoing cost to compliance. Not to mention that when the regulation would in fact bind, it would potentially constrain efficient risk transfer.

The “comma and footnote” problem. Such a long and dense piece of legislation, and the long and detailed regulations that it has spawned, inevitably contain problems that can lead to protracted disputes, and/or unpleasant surprises. The comma I refer to is in the position limit language of the DFA itself: as noted in the court decision that stymied the original CFTC position limit rule, the placement of the comma affects whether the language in the statute requires the CFTC to impose limits, or merely gives it the discretionary authority to do so in the event that it makes an explicit finding that the limits are required to reduce unwarranted and unreasonable price fluctuations. The footnotes I am thinking of were in the SEF rule: footnote 88 dramatically increased the scope of the rule, while footnote 513 circumscribed it.

And new issues of this sort crop up regularly, almost 7 years after the passage of Dodd-Frank. Recently Risk highlighted the fact that in its proposal for capital requirements on swap dealers, the CFTC (inadvertently?) potentially made it far more costly for companies like BP and Shell to become swap dealers. Specifically, whereas the Fed defines a financial company as one in which more than 85 percent of its activities are financial in nature, the CFTC proposes that a company can take advantage of more favorable capital requirements if its financial activities are less than 15 percent of its overall activities. Meaning, for example, a company with 80 percent financial activity would not count as a financial company under Fed rules, but would under the proposed CFTC rule. This basically makes it impossible for predominantly commodity companies like BP and Shell to take advantage of preferential capital treatment specifically included for them and their ilk in DFA. To the extent that these firms decide to incur costs (higher capital costs, or the cost of reorganizing their businesses to escape the rule’s bite) and become swap dealers nonetheless, that cost will not generate any benefit. To the extent that they decide that it is not worth the cost, the swaps market will be more concentrated and less competitive (more counter-intended effects).

The position limits proposed regs provide a further example of this devil-in-the-details problem. The idea of a hedging carveout is eminently sensible, but the specifics of the CFTC’s hedging exemptions were unduly restrictive.

I could probably add more categories to the list. Different taxonomies are possible. But I think the foregoing is a useful way of thinking about the fundamental flaws in Frankendodd.

I’ll close with something that could make you feel better–or worse! For all the flaws in Frankendodd, MiFID II and EMIR make it look like a model of legislative and regulatory wisdom. The Europeans have managed to make errors in all of these categories–only more of them, and more egregious ones. For instance, as bad as the US position limit proposal is, it pales in comparison to the position limit regulations that the Europeans are poised to inflict on their firms and their markets.

 


May 30, 2017

Clearing Fragmentation Follies: We’re From the European Commission, and We’re Here to Help You

Filed under: Clearing,Derivatives,Economics,Financial Crisis II,Politics,Regulation — The Professor @ 6:33 am

Earlier this month came news that the European Commission was preparing legislation that would require clearing of Euro derivatives to take place in the Eurozone, rather than in the UK, which presently dominates. This has been an obsession with the Euros since before Brexit: Brexit has only intensified the efforts, and provided a convenient rationalization for doing so.

The stated rationale is that the EU (and the ECB) need regulatory control over clearing of Euro-denominated derivatives because a problem at the CCP that clears them could have destabilizing effects on the Eurozone, and could necessitate the ECB providing liquidity support to the CCP in the event of trouble. If they are going to support it in extremis, they are going to need to have oversight, they claim.

Several things to note here. First, it is possible to have a regulatory line of sight without having jurisdiction. Note that the USD clearing business at LCH is substantially larger than the € clearing business there, yet the Fed, the Treasury, and Congress are fine with that, and are not insisting that all USD clearing be done stateside. They realize that there are other considerations (which I discuss more below): to simplify, they realize that London has become a dominant clearing center for good economic reasons, and that the economies of scale and scope in clearing mean that concentration of clearing produces some efficiencies. Further, they realize that it is possible to have sufficient information to ensure that the foreign-domiciled CCP is acting prudently and not taking undue risks.

Canada is another example. A few years ago I wrote a white paper (under the aegis of the Canadian Market Infrastructure Committee) that argued that it would be efficient for Canada to permit clearing of C$ derivatives in London, rather than to require the establishment and use of a Canadian CCP. The Bank of Canada and the Canadian government agreed, and did not mandate the creation of a maple leaf CCP.

Second, if the Europeans think that by moving € clearing away from LCH that they will be immune from any problems there, they are sadly mistaken. The clearing firms that dominate in LCH will also be dominant in any Europe-domiciled € CCP, and a problem at LCH will be shared with the Euro CCP, either because the problem arises because of a problem at a firm that is a clearing member of both, or because an issue at LCH not originally arising from a CM problem will adversely affect all its CMs, and hence be communicated to other CCPs.  Consider, for example, the self-preserving way that LCH acted in the immediate aftermath of Brexit: this put liquidity demands on all its clearing members. With fragmented clearing, these strains would have been communicated to a Eurozone CCP.

When risks are independent, diversification and redundancy tend to reduce the risk of catastrophic failure: when risks are not independent, they can either fail to reduce the risk substantially, or actually increase it. For instance, if the failure of CCP 1 likely causes the failure of CCP 2, having two CCPs actually increases the probability of a catastrophe (holding fixed the failure probability of each individual CCP). CCP risks are not independent, but highly dependent. This means that fragmentation could well increase the problem of a clearing crisis, and is unlikely to reduce it.
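A back-of-the-envelope calculation, with assumed and purely illustrative probabilities, makes the dependence point concrete: when contagion between CCPs is likely, two CCPs can face a higher probability of catastrophe than one.

```python
# Illustrative arithmetic (hypothetical probabilities) for the dependence point:
# with likely contagion, two CCPs can be riskier than one.

p_fail = 0.01         # assumed failure probability of any single CCP
p_contagion = 0.9     # assumed P(CCP 2 fails | CCP 1 fails) -- highly dependent

# Single CCP: catastrophe = that CCP failing.
p_one_ccp = p_fail

# Two CCPs, where either one failing counts as a catastrophe:
# P(at least one fails) = P(1 fails) + P(2 fails) - P(both fail)
p_both = p_fail * p_contagion
p_two_ccps = p_fail + p_fail - p_both

print(f"P(catastrophe), one CCP:  {p_one_ccp:.4f}")   # 0.0100
print(f"P(catastrophe), two CCPs: {p_two_ccps:.4f}")  # 0.0110
```

With near-certain contagion, fragmentation buys almost no diversification benefit while adding a second point of failure, so the probability of a catastrophe goes up, not down.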

This raises another issue: dealing with a crisis will be more complicated, the more fragmented is clearing. Two self-preserving CCPs have an incentive to take actions that may well hurt the other. Relatedly, managing the positions of a defaulted CM will be more complicated because this requires coordination across self-interested CCPs. Due to the breaking of netting sets, liquidity strains during a crisis are likely to be greater in a crisis with multiple CCPs (and here is where the self-preservation instincts of the two CCPs are likely to present the biggest problems).

Thus, (a) it is quite likely that fragmentation of clearing does not reduce, and may increase, the probability of a systemic shock involving CCPs, and (b) conditional on some systemic event, fragmented CCPs will respond less effectively than a single one.

The foregoing relates to how CCP fragmentation will affect markets during a systemic event. Fragmentation also affects the day-to-day economics of clearing. The breaking of netting sets resulting from the splitting off of € clearing will increase collateral requirements. Perverse regulations, such as Basel III’s insistence on treating customer collateral as a CM asset against which capital must be held per the leverage requirement, will cause this collateral increase to raise substantially the cost of providing clearing services.
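A toy example–hypothetical exposures, with margin crudely proxied as proportional to absolute net exposure–illustrates how breaking a netting set raises total margin:

```python
# Toy example (hypothetical exposures) of how splitting off EUR clearing
# breaks netting sets and raises total margin. Margin is crudely proxied
# as proportional to the absolute net exposure.

usd_exposure = +100   # dealer's net position vs a counterparty, USD swaps
eur_exposure = -80    # offsetting net position, EUR swaps

margin_rate = 0.05    # assumed margin per unit of net exposure

# One CCP: the positions offset within a single netting set.
combined = abs(usd_exposure + eur_exposure) * margin_rate

# Two CCPs: each leg is margined separately, with no cross-netting.
fragmented = (abs(usd_exposure) + abs(eur_exposure)) * margin_rate

print(f"Margin, single netting set: {combined:.1f}")
print(f"Margin, fragmented:         {fragmented:.1f}")
```

Here the same economic risk requires several times the collateral once the offsetting positions sit at different CCPs–a pure deadweight cost of fragmentation.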

Fragmentation will also result in costly duplication of activities, both across CCPs, and across CMs. For instance, it will entail duplicative oversight of CMs that clear both at LCH and the Eurozone CCP, and CMs that are members of both will have to staff separate interfaces with each. There will also be duplicative investments in IT (and the greater the number of potential IT points of failure, the greater the likelihood of at least one failure, which is almost certain to have deleterious consequences for CMs and the other CCP). Fragmentation will also interfere with information flows, and make it likely that each CCP has less information than an integrated CCP would have.

This article raises another real concern: a Eurozone clearer is more likely to be subject to political pressure than the LCH. It notes that the Continentals were upset about the LCH raising haircuts on Eurozone sovereigns during the PIIGS crisis. In some future crisis (and there is likely to be one) the political pressure to avoid such moves will be intense, even in the face of a real deterioration of the creditworthiness of one or more EU states. Further to a point made above, political pressures in the EU and the UK could exacerbate the self-preserving actions that could lead to a failure to achieve efficient cooperation in a crisis, and indeed could lead to a catastrophic coordination failure.

In sum, it’s hard to find an upside to the forced repatriation of € clearing from LCH to some Eurozone entity. Both in wartime (i.e., a crisis) and in peacetime, there are strong economies of scale and scope in clearing. A forced breakup will sacrifice these economies. Indeed, since breaking up CCPs is unlikely to reduce the probability of a clearing-related crisis, but will make the crisis worse when it does occur, it is particularly perverse to dress this up as a way of protecting the stability of the financial system.

I also consider it sickly ironic that the Euros say, well, if we are expected to provide a liquidity backstop to a big financial entity, we need to have regulatory control. Um, just who was supplying all that dollar liquidity via swap lines to desperate European banks during the 2008-2009 crisis? Without the Fed, European banks would have failed to obtain the dollar funding they needed to survive. By the logic of the EC in demanding control of € clearing, the Fed should require that the US have regulatory authority over all banks borrowing and lending USD.

Can you imagine the squealing in Brussels and every European capital in response to any such demand?

Speaking of European capitals, there is another irony. One thing that may derail the EC’s clearing grab is a disagreement over who should have primary regulatory responsibility over a Eurozone CCP. The ECB and ESMA think the job should be theirs; Germany, France, and Italy say nope, this should be the job of national central banks (e.g., the Bundesbank) or national financial regulators (e.g., BaFin).

So, hilariously, what may prevent (or at least delay) the fragmentation of clearing is a lack of political unity in the EU.  This is as good an illustration as any of the fundamental tensions within the EU. Everybody wants a superstate. As long as they are in control.

Ronald Reagan famously said that the nine scariest words in the English language are: “I’m from the government and I’m here to help.” I can top that: “I’m from the EC, and I’m here to help.” When it comes to demanding control of clearing, the EC’s “help” will be about as welcome as a hole in the head.


April 4, 2017

The Unintended Consequences of Blockchain Are Not Unpredictable: Respond Now Rather Than Repent Later*

Filed under: Clearing,Commodities,Derivatives,Economics,Regulation — The Professor @ 3:39 pm

In the past week the WSJ and the FT have run articles about a new bank-led initiative to move commodity trading onto a blockchain. In many ways, this makes great sense. By its nature, the process of recording and trading commodity trades and shipments (a) collectively involves large numbers of spatially dispersed counterparties, (b) has myriad terms, and (c) can give rise to costly disputes. As a result of these factors, the process is currently very labor intensive, fraught with operational risk (e.g., inadvertent errors), and vulnerable to fraud (cf. the Qingdao metals warehouse scandal of 2014). In theory, blockchain can reduce costs, errors, and fraud. Thus, it is understandable that traders and banks are quite keen on its potential to reduce costs and perhaps even revolutionize the trading business.

But before you get too excited, a remark by my friend Christophe Salmon at Trafigura carries deep implications that should give you pause, and lead you to consider the likely consequences of widespread adoption of blockchain:

Christophe Salmon, Trafigura’s chief financial officer, said there would need to be widespread adoption by major oil traders and refiners to make blockchain in commodity trading viable in the long term.

This seemingly commonsense and innocuous remark is actually laden with implications of unintended consequences that should be recognized and considered now, before the blockchain train gets too far down the track.

In essence, Christophe’s remark means that to be viable blockchain has to scale. If it doesn’t scale, it won’t reduce cost. But if it does scale, a blockchain for a particular application is likely to be a natural monopoly, or at most a natural duopoly. (Issues of scope economies are also potentially relevant, but I’ll defer discussion of that for now.)

Indeed, if there are no technical impediments to scaling (which in itself is an open question–note the block size debate in Bitcoin), the “widespread adoption” feature that Christophe identifies as essential means that network effects create scale economies that are likely to result in the dominance of a single platform. Traders will want to record their business on the blockchain that their counterparties use. Since many trade with many, this creates a centripetal force that will tend to draw everyone to a single blockchain.

I can hear you say: “Well, if there is a public blockchain, that happens automatically because everyone has access to it.” But the nature of a public blockchain means that it faces extreme obstacles that make it wildly impractical for commercial adoption on the scale being considered, not just in commodity markets, but in virtually every aspect of the financial markets. Commercial blockchains will be centrally governed, limited-access, private systems rather than a radically decentralized, open-access commons.

The “forking problem” alone is a difficulty. As demonstrated by Bitcoin in 2013 and Ethereum in 2016, public blockchains based on open source are vulnerable to “forking,” whereby uncoordinated changes in the software (inevitable in an open source system that lacks central governance and coordination) result in the simultaneous existence of multiple, parallel blockchains. Such forking would destroy the network economy/scale effects that make the idea of a single database attractive to commercial participants.

Prevention of forking requires central governance to coordinate changes in the code–something that offends the anarcho-libertarian spirits who view blockchain as a totally decentralized mechanism.

Other aspects of the pure version of an open, public blockchain make it inappropriate for most financial and commercial applications. For instance, public blockchain is touted because it does not require trust in the reputation of large entities such as clearing networks or exchanges. But the ability to operate without trust does not come for free.

Trust and reputation are indeed costly: as Becker and Stigler first noted decades ago, and others have formalized since, reputation is a bonding mechanism that requires the trusted entity to incur sunk costs that would be lost if it violates trust. (Alternatively, the trusted entity has to have market power–which is costly–that generates a stream of rents that is lost when trust is violated. That is, to secure trust prices have to be higher and output lower than would be necessary in a zero transactions cost world.)
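The bonding logic can be compressed into one inequality. This is a generic repeated-game formalization, not Becker and Stigler’s exact model: a trusted entity earning a per-period rent $R$, discounting at rate $r$, and facing a one-time gain $G$ from violating trust will honor trust only if the capitalized rents exceed the temptation:

```latex
\underbrace{\frac{R}{r}}_{\text{present value of future rents}}
\;\ge\;
\underbrace{G}_{\text{one-shot gain from cheating}}
```

Sustaining trust thus requires prices high enough to generate $R \ge rG$: precisely the wedge between actual prices and zero-transaction-cost prices described above.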

But public blockchains have not been able to eliminate trust without cost. In Bitcoin, trust is replaced with “proof of work.” Well, work means cost. The blockchain mining industry consumes vast amounts of electricity and computing power in order to prove work. It is highly likely that the cost of creating trusted entities is lower than the cost of proof of work or alternative ways of eliminating the need for trust. Thus, a (natural monopoly) commercial blockchain is likely to have to be a trusted centralized institution, rather than a decentralized anarchist’s wet-dream.

Blockchain is also touted as permitting “smart contracts,” which automatically execute certain actions when certain pre-defined (and coded) contingencies are met. But “smart contracts” is not a synonym for “complete contracts,” i.e., contracts in which every possible contingency is anticipated, and each party’s actions under each contingency are specified. Thus, even with smart (but incomplete) contracts, unanticipated contingencies will inevitably arise.

Parties will have to negotiate what to do under these contingencies. Given that this will usually be a bilateral bargaining situation under asymmetric information, the bargaining will be costly and sometimes negotiations will break down. Moreover, under some contingencies the smart contracts will automatically execute actions that the parties do not expect and would like to change: here, self-execution prevents such contractual revisions, or at least makes them very difficult.
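A toy example makes the point. The contract logic below is entirely hypothetical, and is written as plain Python rather than in any real smart-contract language:

```python
# A "smart" margin provision that self-executes on a price trigger. The
# code knows only the contingencies its drafters coded; everything else
# is an unanticipated contingency, and self-execution cannot be unwound.

def margin_call(price, threshold=90.0, notional=1_000_000.0):
    """Cash transfer the contract executes automatically, given a price."""
    if price < threshold:
        # Coded contingency: collect margin proportional to the breach.
        return notional * (threshold - price) / threshold
    return 0.0

# Normal day: no transfer.
assert margin_call(95.0) == 0.0

# A contingency the drafters never imagined -- say a bad price print that
# sends the feed to 0.01. The contract fires anyway, moving nearly the
# entire notional, with no mechanism for the parties to stop or re-do it.
transfer = margin_call(0.01)
```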

Indeed, it may be the execution of the contractual feature that first makes the parties aware that something has gone horribly wrong. Here another touted feature of pure blockchain–immutability–can become a problem. The revelation of information ex post may lead market participants to desire to change the terms of their contract. Can’t do that if the contracts are immutable.

Paper-and-ink contracts are inherently incomplete too, and this is why centralized mechanisms exist to address incompleteness. These include courts, but also, historically, bodies like stock or commodity exchanges, or merchants’ associations (in diamonds, for instance), which have helped adjudicate disputes and re-do deals that turn out to be inefficient ex post. The existence of institutions that facilitate efficient adaptation to contractual incompleteness demonstrates that in the real world, man does not live (or transact) by contract alone.

Thus, the benefits of a mechanism for adjudicating and responding to contractual incompleteness create another reason for a centralized authority for blockchain, even–or especially–blockchains with smart contracts.

Further, the blockchain (especially with smart contracts) will be a complex interconnected system, in the technical sense of the term. There will be myriad possible interactions between individual transactions recorded on the system, and these interactions can lead to highly undesirable, and entirely unpredictable, outcomes. A centralized authority can greatly facilitate the response to such crises. (Indeed, years ago I posited this as one of the reasons for integration of exchanges and clearinghouses.)

And the connections are not only within a particular blockchain. There will be connections between blockchains, and between a blockchain and other parts of the financial system. Consider for example smart contracts that in a particular contingency dictate large cash flows (e.g., margin calls) from one group of participants to another. This will lead to a liquidity shock that will affect banks, funding markets, and liquidity supply mechanisms more broadly. Since the shock can be destabilizing and lead to actions that are individually rational but systemically destructive if uncoordinated, central coordination can improve efficiency and reduce the likelihood of a systemic crisis. That’s not possible with a radically decentralized blockchain.

I could go on, but you get the point: there are several compelling reasons for centralized governance of a commercial blockchain like that envisioned for commodity trading. Indeed, many of the features that attract blockchain devotees are bugs–and extremely nasty ones–in commercial applications, especially if adopted at large scale as is being contemplated. As one individual who works on commercializing blockchain told me: “Commercial applications of blockchain will strip out all of the features that the anarchists love about it.”

So step back for a minute. Christophe’s point about “widespread adoption” and an understanding of the network economies inherent in the financial and commercial applications of blockchain means that it is likely to be a natural monopoly in a particular application (e.g., physical oil trading) and likely across applications due to economies of scope (which plausibly exist because major market participants will transact in multiple segments, and because of the ability to use common coding across different applications, to name just two factors). Second, a totally decentralized, open access, public blockchain has numerous disadvantages in large-scale commercial applications: central governance creates value.

Therefore, commercial blockchains will be “permissioned” in the lingo of the business. That is, unlike public blockchain, entry will be limited to privileged members and their customers. Moreover, the privileged members will govern and control the centralized entity. It will be a private club, not a public commons. (And note that even the Bitcoin blockchain is not ungoverned. Everyone is equal, but the big miners–and there are now a relatively small number of big miners–are more equal than others. The Iron Law of Oligarchy applies in blockchain too.)

Now add another factor: the natural monopoly blockchain will likely not be contestable, for reasons very similar to the ones I have written about for years to demonstrate why futures and equity exchanges are typically natural monopolies that earn large rents because they are largely immune from competitive entry. Once a particular blockchain achieves critical mass, there will be the lock-in problem from hell: a coordinated movement of a large set of users from the incumbent to a competitor will be necessary for the entrant to achieve the scale necessary to compete. This is difficult, if not impossible, to arrange. Three Finger Brown could count the number of times that has happened in futures trading on his bad hand.

Now do you understand why banks are so keen on the blockchain? Yes, they couch it in terms of improving transactional efficiency, and it does that. But it also presents the opportunity to create monopoly financial market infrastructures that are immune from competitive entry. The past 50 years have seen an erosion of bank dominance–“disintermediation”–that has also eroded their rents. Blockchain gives the empire a chance to strike back. A coalition of banks (and note that most blockchain initiatives are driven by a bank-led cooperative, sometimes in partnership with a technology provider or providers) can form a blockchain for a particular application or applications, exploit the centripetal force arising from network effects, and gain a natural monopoly largely immune from competitive entry. Great work if you can get it. And believe me, the banks are trying. Very hard.

Left to develop on its own, therefore, the blockchain ecosystem will evolve to look like the exchange ecosystem of the 19th or early-20th centuries. Monopoly coalitions of intermediaries–“clubs” or “cartels”–offering transactional services, with member governance, and with the members reaping economic rents.

Right now regulators are focused on the technology, and (like many others) seem to be smitten with the potential of the technology to reduce certain costs and risks. They really need to look ahead and consider the market structure implications of that technology. Just as the natural monopoly nature of exchanges eventually led to intense disputes over the distribution of the benefits that they created, which in turn led to regulation (after bitter political battles), the fundamental economics of blockchain are likely to result in similar conflicts.

The law and regulation of blockchain is likely to be complicated and controversial precisely because natural monopoly regulation is inherently complicated and controversial. The yin and yang of financial infrastructure in particular is that the technology likely makes monopoly efficient, but also creates the potential for the exercise of market power (and, I might add, the exercise of political power to support and sustain market power, and to influence the distribution of rents that result from that market power). Better to think about those things now when things are still developing, than when the monopolies are developed, operating, and entrenched–and can influence the political and regulatory process, as monopolies are wont to do.

The digital economy is driven by network effects: think Google, Facebook, Amazon, and even Twitter. In addition to creating new efficiencies, these dominant platforms create serious challenges for competition, as scholars like Ariel Ezrachi and Maurice Stucke have shown:

Peter Thiel, the successful venture capitalist, famously noted that ‘Competition Is for Losers.’ That useful phrase captures the essence of many technology markets. Markets in which the winner of the competitive process is able to cement its position and protect it. Using data-driven network effects, it can undermine new entry attempts. Using deep pockets and the nowcasting radar, the dominant firm can purchase disruptive innovators.

Our new economy enables the winners to capture much more of the welfare. They are able to affect downstream competition as well as upstream providers. Often, they can do so with limited resistance from governmental agencies, as power in the online economy is not always easily captured using traditional competition analysis. Digital personal assistants, as we explore, have the potential to strengthen the winner’s gatekeeper power.

Blockchain will do the exact same thing.

You’ve been warned.

*My understanding of these issues has benefited greatly from many conversations over the past year with Izabella Kaminska, who saw through the hype well before pretty much anyone. Any errors herein are of course mine.


March 27, 2017

Seeing the OTC Derivatives Markets (and the Financial Markets) Like a State

Filed under: Clearing,Derivatives,Economics,Regulation — The Professor @ 12:07 pm

In the years since the financial crisis, and in particular the period preceding and immediately following the passage of Frankendodd, I can’t tell you how many times I saw diagrams that looked like this:

[Diagram 1: a dense tangle of bilateral links among OTC counterparties]

[Diagram 2: a hub-and-spoke network with a CCP at the hub]

The top diagram is a schematic representation of an OTC derivatives market, with a tangle of bilateral connections between counterparties. The second is a picture of a hub-and-spoke trading network with a CCP serving as the hub. (These particular versions of this comparison are from a 2013 Janet Yellen speech.)

These diagrams came to mind when re-reading James Scott’s Seeing Like a State and his Two Cheers for Anarchism. Scott argues that states have an obsession with making the societies they rule over “legible” in order to make them easier to tax, regulate, and control. States are confounded by evolved complexity and emergent orders: such systems are difficult to comprehend, and what cannot be comprehended cannot be easily ruled. So states attempt to impose schemes to simplify such complex orders. Examples that Scott gives include standardization of language and suppression of dialects; standardization of land tenure, measurements, and property rights; cadastral censuses; population censuses; the imposition of familial names; and urban renewal (e.g., Haussmann’s/Napoleon III’s massive reconstruction of Paris). These things make a populace easier to tax, conscript, and control.

Complex realities of emergent orders are too difficult to map. So states conceive of a mental map that is legible to them, and then impose rules on society to force it to conform with this mental map.

Looking back at the debate over OTC markets generally, and clearing, centralized execution, and trade reporting in particular, it is clear that legislators and regulators (including central banks) found these markets to be illegible. Figures like the first one–which are themselves a greatly simplified representation of OTC reality–were bewildering and disturbing to them. The second figure was much more comprehensible, and much more comforting: not just because they could comprehend it better, but because it gave them the sense that they could impose an order that would be easier to monitor and control. The emergent order was frightening in its wildness: the sense of imposing order and control was deeply comforting.

But as Scott notes, attempts to impose control on emergent orders (which in Scott’s books include both social and natural orders, e.g., forests) themselves carry great risks because although hard to comprehend, these orders evolved the way they did for a reason, and the parts interact in poorly understood–and sometimes completely not understood–ways. Attempts to make reality fit a simple mental map can cause the system to react in unpredicted and unpredictable ways, many of which are perverse.

My criticism of the attempts to “reform” OTC markets was largely predicated on my view that the regulators’ simple mental maps did great violence to complex reality. Even though these “reform” efforts were framed as ways of reducing systemic risk, they were fatally flawed because they were profoundly unsystemic in their understanding of the financial system. My critique focused specifically on the confident assertions based on the diagrams presented above. By focusing only on the OTC derivatives market, and ignoring the myriad connections of this market to other parts of the financial market, regulators could not have possibly comprehended the systemic implications of what they were doing. Indeed, even the portrayal of the OTC market alone was comically simplistic. The fallacy of composition played a role here too: the regulators thought they could reform the system piece-by-piece, without thinking seriously about how these pieces interacted in non-linear ways.

The regulators were guilty of the hubris illustrated beautifully by the parable of Chesterton’s Fence:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

In other words, the regulators should have understood the system and why it evolved the way that it did before leaping in to “reform” it. As Chesterton says, such attempts at reformation quite frequently result in deformation.

Somewhat belatedly, there are efforts underway to map the financial system more accurately. The work of Richard Bookstaber and various colleagues under the auspices of the Office of Financial Research to create multilayer maps of the financial system is certainly a vast improvement on the childish stick figure depictions of Janet Yellen, Gary Gensler, Timmy Geithner, Chris Dodd, Barney Frank et al. But even these more sophisticated maps are extreme abstractions, not least because they cannot capture incentives, the distribution of information among myriad market participants, and the motivations and behaviors of these participants. Think of embedding these maps in the most complicated extensive form large-N player game you can imagine, and you might have some inkling of how inadequate any schematic representation of the financial system is likely to be. When you combine this with the fact that in complex systems, even slight changes in initial conditions can result in completely different outcomes, the futility of “seeing like a state” in this context becomes apparent. The map of initial conditions is inevitably crude, making it an unreliable guide to understanding the system’s future behavior.
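The sensitivity point can be seen in the simplest textbook chaotic system, the logistic map. It is used here purely as an illustration of sensitive dependence on initial conditions, not as a model of any market:

```python
# Iterate x_{t+1} = r * x_t * (1 - x_t) with r = 4 (the chaotic regime)
# from two initial conditions that differ only in the sixth decimal.

def trajectory(x0, steps=50, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.300000)
b = trajectory(0.300001)  # the "map of initial conditions" is off by 1e-6

# Within a few dozen iterations the two paths are effectively unrelated:
# a measurement error invisible at t = 0 comes to dominate the forecast.
gap = max(abs(x - y) for x, y in zip(a, b))
```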

In my view, Scott goes too far. There is no doubt that some state-driven standardization has dramatically reduced transactions costs and opened up new possibilities for wealth-enhancing exchanges (at some cost, yes, but these costs are almost certainly less than the benefit), but Scott looks askance at virtually all such interventions. Thus, I do not exclude the possibility of true reform. But Scott’s warning about the dangers of forcing complex emergent orders to conform to simplified, “legible” mental constructs must be taken seriously, and should inform any attempt to intervene in something like the financial system. Alas, this did not happen when legislators and regulators embarked on their crusade to reorganize wholesale the world financial system. It is frightening indeed to contemplate that this crusade was guided by mental maps as crude as those supposedly illustrating the virtues of moving from an emergent bilateral OTC market to a tamed hub-and-spoke cleared one.

PS. I was very disappointed by this presentation by James Scott. He comes off as a doctrinaire leftist anthropologist (but I repeat myself), which is definitely not the case in his books. Indeed, the juxtaposition of Chesterton and Scott shows how deeply conservative Scott is (in the literal sense of the word).


February 11, 2017

Risk Gosplan Works Its Magic in Swaps Clearing

Filed under: Clearing,Commodities,Derivatives,Economics,Politics,Regulation — The Professor @ 4:18 pm

Deutsche Bank quite considerately provided a real time example of an unintended consequence of Frankendodd, specifically, capital requirements causing firms to exit from clearing. The bank announced it is continuing to provide futures clearing, but is exiting US swaps clearing, due to capital cost concerns.

Although Deutsche Bank was not specific in citing the treatment of margins under the leverage ratio as the reason for its exit, this is the most likely culprit. Recall that even segregated margins (which a bank has no access to) are treated as bank assets under the leverage rule, so a swaps clearer must hold capital against assets that it does not control (because all swap margins are segregated), cannot use to fund its own activities, and that are not funded by any liability the clearer issued.
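The capital arithmetic is simple. The numbers and the 3% ratio below are purely illustrative, and the actual Basel III exposure measure is far more detailed:

```python
# If segregated client margin counts in the leverage-ratio exposure
# measure, the clearing member must hold capital against money it cannot
# touch, use, or lose. All figures are invented for illustration.

LEVERAGE_RATIO = 0.03     # assumed minimum capital / exposure

own_assets = 10_000.0     # the FCM's own balance sheet, $mm
client_margin = 50_000.0  # segregated client collateral, no FCM access

capital_excl = LEVERAGE_RATIO * own_assets                    # margin excluded
capital_incl = LEVERAGE_RATIO * (own_assets + client_margin)  # margin included

# Counting the segregated margin multiplies required capital sixfold,
# with no corresponding increase in the risk the clearer actually bears.
print(capital_excl, capital_incl)
```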

It’s perverse, and is emblematic of the mixed signals in Frankendodd: CLEAR SWAPS! CLEARING SWAPS  IS EXTREMELY CAPITAL INTENSIVE SO YOU WON’T MAKE ANY MONEY DOING IT! Yeah. That will work out swell.

Of course Deutsche Bank has its own issues, and because of those issues it faces more acute capital concerns than other institutions (especially American ones). But here is a case where the capital cost does not at all match up with risk (and remember that capital is intended to be a risk absorber). So, looking for ways to economize on capital, Deutsche exited a business where the capital charge did not generate any commensurate return, and furthermore was unrelated to the actual risk of the business. If the pricing of risk had been more sensible, Deutsche might have scaled back other businesses where capital charges reflected risk more accurately. Here, the effect of the leverage ratio is all pain, no gain.

When interviewed by Risk Magazine about the Fundamental Review of the Trading Book, I said: “The FRTB’s standardised approach is basically central planning of risk pricing, and it will produce Gosplan-like results.” The leverage ratio, especially as applied to swaps margins, is another example of central planning of risk pricing, and here indeed it has produced Gosplan-like results.

And in the case of clearing, these results are exactly contrary to a crucial ostensible purpose of DFA: reducing size and concentration in banking generally, and in derivatives markets in particular. For as the FT notes:

The bank’s exit will reignite concerns that the swaps clearing business is too concentrated among a handful of large players. The top three swaps clearers account for more than half the market by client collateral required, while the top five account for over 75 per cent.

So swaps clearing is now hyper-concentrated, and dominated by a handful of systemically important banks (e.g., Citi, Goldman). It is more concentrated than the bilateral swaps dealer market was. Trouble at one of these dominant swaps clearers would create serious risks for the CCPs that they clear for (which, by the way, are all interconnected because the same clearing members dominate all the major CCPs). Moreover, concentration dramatically reduces the benefits of mutualizing risk: because of the small number of clearers, the risk of a big CM failure will be borne by a small number of firms. This isn’t insurance in any meaningful way, and does not achieve the benefits of risk pooling, even if, in the first instance, only a single big clearing member runs into trouble due to a shock idiosyncratic to it.
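A back-of-the-envelope calculation shows why mutualization needs numbers. The figures below are invented for illustration:

```python
# A defaulter's uncovered loss is mutualized pro rata across surviving
# clearing members. With many survivors the hit per firm is survivable;
# with a handful it is crippling, so the "insurance" pools very little.

def loss_per_survivor(uncovered_loss, n_members):
    survivors = n_members - 1  # the defaulter drops out of the pool
    return uncovered_loss / survivors

loss = 900.0  # uncovered loss from one big member's default, $mm

broad_pool = loss_per_survivor(loss, 31)   # 30 survivors: 30.0 each
narrow_pool = loss_per_survivor(loss, 4)   # 3 survivors: 300.0 each
```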

At present, there is much gnashing of teeth and rending of garments at the prospect of even tweaks in Dodd-Frank. Evidently, the clearing mandate is not even on the table. But this one vignette demonstrates that Frankendodd and banking regulation generally is shot through with provisions intended to reduce systemic risk which do not have that effect, and indeed, likely have the perverse effect of creating some systemic risks. Viewing Dodd-Frank as a sacred cow and any proposed change to it as a threat to the financial system is utterly wrongheaded, and will lead to bad outcomes.

Barney and Chris did not come down Mount Sinai with tablets containing commandments written by the finger of God. They sat on Capitol Hill and churned out hundreds of pages of laws based on a cartoonish understanding of the financial system, information provided by highly interested parties, and a frequently false narrative of the financial crisis. These laws, in turn, have spawned thousands of pages of regulation, good, bad, and very ugly. What is happening in swaps clearing is very ugly indeed, and provides a great example of how major portions of Dodd-Frank and the regulations emanating from it need a thorough review and in some cases a major overhaul.

And if Elizabeth Warren loses her water over this: (a) so what else is new? and (b) good! Her Manichean view of financial regulation is a major impediment to getting the regulation right. What is happening in swaps clearing is a perfect illustration of why a major midcourse correction in the trajectory of financial regulation is imperative.


February 4, 2017

The Regulatory Road to Hell

One of the most encouraging aspects of the new administration is its apparent commitment to roll back a good deal of regulation. Pretty much the entire gamut of regulation is under examination, and even Trump’s nominee for the Supreme Court, Neil Gorsuch, represents a threat to the administrative state due to his criticism of Chevron Deference (under which federal courts are loath to question the substance of regulations issued by US agencies).

The coverage of the impending regulatory rollback is less than informative, however. Virtually every story about a regulation under threat frames the issue around the regulation’s intent. The Fiduciary Rule “requires financial advisers to act in the best interests of their clients.” The Stream Protection Rule prevents companies from “dumping mining waste into streams and waterways.” The SEC rule on reporting of payments to foreign governments by energy and minerals firms “aim[s] to address the ‘resource curse,’ in which oil and mineral wealth in resource-rich countries flows to government officials and the upper classes, rather than to low-income people.” Dodd-Frank is intended to prevent another financial crisis. And on and on.

Who could be against any of these things, right? This sort of framing therefore makes those questioning the regulations out to be ogres, or worse, favoring financial skullduggery, rampant pollution, bribery and corruption, and reckless behavior that threatens the entire economy.

But as the old saying goes, the road to hell is paved with good intentions, and that is definitely true of regulation. Regulations often have unintended consequences–many of which are directly contrary to the stated intent. Furthermore, regulations entail costs as well as benefits, and just focusing on the benefits gives a completely warped understanding of the desirability of a regulation.

Take Frankendodd. It is bursting with unintended consequences. Most notably, quite predictably (and predicted here, early and often) the huge increase in regulatory overhead actually favors consolidation in the financial sector, and reinforces the TBTF problem. It also has been devastating to smaller community banks.

DFA also works at cross purposes. Consider the interaction between the leverage ratio, which is intended to ensure that banks are sufficiently capitalized, and the clearing mandate, which is intended to reduce systemic risk arising from the derivatives markets. The interpretation of the leverage ratio (notably, treating customer margins held by FCMs as an FCM asset, which increases the amount of capital the FCM must hold under the leverage ratio) makes offering clearing services more expensive. This is exacerbating the marked consolidation among FCMs, which is contrary to the stated purpose of Dodd-Frank. Moreover, it means that some customers will not be able to find clearing firms, or will find using derivatives to manage risk prohibitively expensive. This undermines the ability of the derivatives markets to allocate risk efficiently.
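The mechanics here are simple arithmetic. A minimal sketch (all figures hypothetical; the 3% minimum is the commonly cited supplementary leverage ratio floor, but nothing else below is an actual regulatory parameter):

```python
# Illustrative sketch: how counting segregated customer margin in the
# leverage-ratio exposure measure raises an FCM's required capital.
# Figures are hypothetical, for illustration only.

def required_capital(fcm_own_assets, customer_margin, count_margin, min_ratio=0.03):
    """Capital the FCM must hold: min_ratio times its exposure measure."""
    exposure = fcm_own_assets + (customer_margin if count_margin else 0.0)
    return min_ratio * exposure

own_assets = 1_000.0   # FCM's own balance sheet (hypothetical, $MM)
margin = 5_000.0       # customer margin held for cleared positions (hypothetical, $MM)

print(required_capital(own_assets, margin, count_margin=False))  # 30.0
print(required_capital(own_assets, margin, count_margin=True))   # 180.0
```

In this stylized example, counting customer margin multiplies the capital the FCM must hold against its clearing business sixfold, without the FCM bearing any additional economic risk on that margin — which is why the treatment raises the cost of offering clearing services.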

Therefore, to describe regulations by their intentions, rather than their effects, is highly misleading. Many of the effects are unintended, and directly contrary to the explicit intent.

Regulations also impose costs, both direct and indirect. A realistic appraisal of a regulation requires a thorough evaluation of both benefits and costs. Such evaluations are almost completely lacking in the media coverage, except to cite some industry source complaining about the cost burden. But in the context of most articles, this comes off as special pleading, and therefore suspect.

Unfortunately, much cost benefit analysis–especially that carried out by the regulatory agencies themselves–is a bad joke. Indeed, since the agencies in question often have an institutional or ideological interest in their regulations, their “analyses” should be treated as a form of special pleading of little more reliability than the complaints of the regulated. The proposed position limits regulation provides one good example of this. Costs are defined extremely narrowly, benefits very broadly. Indirect impacts are almost completely ignored.

As another example, Tyler Cowen takes a look into the risible cost benefit analysis behind the Stream Protection Rule, and finds it seriously wanting. Even though he is sympathetic to the goals of the regulation, and even to the largely tacit but very real meta-intent (reducing the use of coal in order to advance the climate change agenda), he is repelled by the shoddiness of the analysis.

Most agency cost benefit analysis is analogous to asking pupils to grade their own work, and gosh darn it, wouldn’t you know, everybody’s an A student!

This is particularly problematic under Chevron Deference, because courts seldom evaluate the substance of the regulations or the regulators’ analyses. There is no real judicial check and balance on regulators.

The metastasizing regulatory and administrative state is a very real threat to economic prosperity and growth, and to individual freedom. The lazy habit of describing regulations and regulators by their intent, rather than their effects, shields them from the skeptical scrutiny that they deserve, and facilitates this dangerous growth. If the Trump administration and Congress proceed with their stated plans to pare back the Obama administration’s myriad and massive regulatory expansion, this intent-focused coverage will be one of the biggest obstacles that they will face. The media is the regulators’ most reliable paving contractor for the highway to hell.


December 30, 2016

For Whom the (Trading) Bell Tolls

Filed under: Clearing,Commodities,Derivatives,Economics,Energy,Exchanges,History — The Professor @ 7:40 pm

It tolls for the NYMEX floor, which went dark for the final time with the close of trading today. It follows all the other New York futures exchange floors which ICE closed in 2012. This leaves the CME and CBOE floors in Chicago, and the NYSE floor, all of which are shadows of shadows of their former selves.

Next week I will participate in a conference in Chicago. I’ll be talking about clearing, but one of the other speakers will discuss regulating latency arbitrage in the electronic markets that displaced the floors. In some ways, all the hyperventilating over latency arbitrages due to speed advantages measured in microseconds and milliseconds in computerized markets is amusing, because the floors were all about latency arbitrage. Latency arbitrage basically means that some traders have a time and space advantage, and that’s what the floors provided to those who traded there. Why else would traders pay hundreds of thousands of dollars to buy a membership? Because that price capitalized the rent that the marginal trader obtained by being on the floor, and seeing prices and order flow before anybody off the floor did. That was the price of the time and space advantage of being on the floor.  It’s no different than co-location. Not in the least. It’s just meatware co-lo, rather than hardware co-lo.
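The capitalization argument above is just a perpetuity calculation: competition for seats bids the membership price up to the present value of the marginal trader’s rent. A hedged sketch with hypothetical numbers (assuming, for simplicity, a constant perpetual rent and a flat discount rate):

```python
# Illustrative sketch: a floor membership price as the capitalized value of
# the marginal trader's time-and-space rent, valued as a perpetuity.
# All numbers are hypothetical.

def seat_price(annual_rent, discount_rate):
    """Present value of a constant perpetual rent: rent / r."""
    return annual_rent / discount_rate

# If the marginal floor trader's informational edge is worth $80,000/year
# and traders discount at 10%, competition bids the seat to roughly:
print(seat_price(80_000, 0.10))  # 800000.0
```

The same logic applies to co-location fees in electronic markets: the fee should approximate the capitalized (or rental) value of the microsecond advantage, which is why the floor seat and the co-lo rack are economically the same thing.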

In a paper written around 2001 or 2002, “Upstairs, Downstairs”, I presented a model predicting that electronic trading would largely annihilate time and space advantages, and that liquidity would improve as a result because it would reduce the cost to off-floor traders of offering liquidity. The latter implication has certainly been borne out. And although time and space differences still exist, I would argue that they pale in comparison to those that existed in the floor era. Ironically, however, complaints about fairness seem more heated and pronounced now than they did during the heyday of the floors. Perhaps that’s because machines and quant geeks are less sympathetic figures than colorful floor traders. Perhaps it’s because being beaten by a sliver of a second is more infuriating than being pipped by many seconds by some guy screaming and waving on the CBT or NYMEX. Dunno for sure, but I do find the obsessing over HFT time and space advantages today to be somewhat amusing, given the differences that existed in the “good old days” of floor trading.

This is not to say that no one complained about the advantages of floor traders, and how they exploited them. I vividly recall a very famous trader (one of the most famous, actually) telling me that he welcomed electronic trading because he was “tired of being fucked by the floor.” (He had made his reputation, and his first many millions on the floor, by the way.) A few years later he bemoaned how unfair the electronic markets were, because HFT firms could react faster than he could.

It will always be so, regardless of the technology.

All that said, the passing of the floors does deserve a moment of silence–another irony, given their cacophony.

I first saw the NYMEX floor in 1992, when it was still at the World Trade Center, along with the floors of the other NY exchanges (COMEX; Coffee, Sugar & Cocoa; Cotton). That space was the location for the climax of the plot of the iconic futures market movie, Trading Places. Serendipitously, that was the movie that Izabella Kaminska of FT Alphaville featured in the most recent Alphachat movie review episode. I was a guest on the show, and discussed the economic, sociological, and anthropological aspects of the floor, as well as some of the broader social issues lurking behind the film’s comedy. You can listen here.


