Streetwise Professor

July 6, 2017

SWP Acid Flashback, CCP Edition

Filed under: Clearing,Derivatives,Economics,Financial crisis,Regulation — The Professor @ 6:09 pm

Sometimes reading current news about clearing specifically and post-crisis regulation generally triggers acid flashbacks to old blog posts. Like this one (from 2010!):

[Gensler’s] latest gurgling appears on the oped page of today’s WSJ.  It starts with a non-sequitur, and careens downhill from there.  Gensler tells a story about his role in the LTCM situation, and then claims that to prevent a recurrence, or a repeat of AIG, it is necessary to reduce the “cancerous interconnections” (Jeremiah Recycled Bad Metaphor Alert!) in the financial system by, you guessed it, mandatory clearing.

Look.  This is very basic.  Do I have to repeat it?  CLEARING DOES NOT ELIMINATE INTERCONNECTIONS AMONG FINANCIAL INSTITUTIONS.  At most, it reconfigures the topology of the network of interconnections.  Anyone who argues otherwise is not competent to weigh in on the subject, let alone to have regulatory responsibility over a vastly expanded clearing system.  At most you can argue that the interconnections in a cleared system are better in some ways than the interconnections in the current OTC structure.  But Gensler doesn’t do that.   He just makes unsupported assertion after unsupported assertion.


So what triggered this flashback? This recent FSB (no! not Putin!)/BIS/IOSCO report on . . . wait for it . . . interdependencies in clearing. As summarized by Reuters:

The Financial Stability Board, the Committee on Payments and Market Infrastructures, the International Organization of Securities Commissioners and the Basel Committee on Banking Supervision, also raised new concerns around the interdependency of CCPs, which have become crucial financial infrastructures as a result of post-crisis reforms that forced much of the US$483trn over-the-counter derivatives market into central clearing.

In a study of 26 CCPs across 15 jurisdictions, the committees found that many clearinghouses maintain relationships with the same financial entities.

Concentration is high with 88% of financial resources, including initial margin and default funds, sitting in just 10 CCPs. Of the 307 clearing members included in the analysis, the largest 20 accounted for 75% of financial resources provided to CCPs.

More than 80% of the CCPs surveyed were exposed to at least 10 global systemically important financial institutions, the study showed.

In an analysis of the contagion effect of clearing member defaults, the study found that more than half of surveyed CCPs would suffer a default of at least two clearing members as a result of two clearing member defaults at another CCP.

“This suggests a high degree of interconnectedness among the central clearing system’s largest and most significant clearing members,” the committees said in their analysis.

To reiterate: as I said in 2010 (and the blog post echoed remarks that I made at ISDA’s General Meeting in San Francisco shortly before I wrote the post), clearing just reconfigures the topology of the network. It does not eliminate “cancerous interconnections”. It merely re-jiggers the connections.

Look at some of the network charts in the FSB/BIS/IOSCO report. They are pretty much indistinguishable from the sccaaarrry charts of interdependencies in OTC derivatives that were bruited about to scare the chillin into supporting clearing and collateral mandates.

The concentration of clearing members is particularly concerning. The report does not mention it, but this concentration creates other major headaches, such as the difficulties of porting positions if a big clearing member (or two) defaults. And the difficulties this concentration would produce in trying to auction off or hedge the positions of the big clearing firms.

Further, the report understates the degree of interconnections, and in fact ignores some of the most dangerous ones. It looks only at direct connections, but the indirect connections are probably more . . . what’s the word I’m looking for? . . . cancerous–yeah, that’s it. CCPs are deeply embedded in the liquidity supply and credit network, which connects all major (and most minor) players in the market. Market shocks that cause big price changes in turn cause big variation margin calls that reverberate throughout the entire financial system. Given the tight coupling of the liquidity system generally, and the particularly tight coupling of the margining mechanism specifically, this form of interconnection–not considered in the report–is most laden with systemic ramifications. As I’ve said ad nauseam: the connections that are intended to prevent CCPs from failing are exactly the ones that pose the greatest threat to the entire system.
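
To put rough numbers on that mechanism, here is a back-of-the-envelope sketch in Python. The notional, rate level, and shock size are assumptions chosen purely for illustration, not figures from the report:

```python
# Back-of-the-envelope: the variation margin call triggered by a rate shock
# on a book of cleared swaps. All inputs are illustrative assumptions.

def annuity(rate, years):
    """Present value of 1 per year for `years` years at a flat rate."""
    return sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))

def swap_mtm_change(notional, tenor_years, old_rate, new_rate):
    """Approximate mark-to-market change of a receive-fixed par swap.

    The swap is worth zero at the old rate; after the shock its value is
    roughly (old fixed rate minus new swap rate) times the new annuity.
    """
    return notional * (old_rate - new_rate) * annuity(new_rate, tenor_years)

notional = 500e9          # $500bn of 10y receive-fixed swaps (assumed)
mtm = swap_mtm_change(notional, tenor_years=10, old_rate=0.02, new_rate=0.0225)

print(f"Variation margin owed by the losing side: ${abs(mtm)/1e9:.1f}bn")
# Roughly $11bn of cash has to move within the margin cycle, and the demand
# arrives precisely when the shock hits: that is the tight coupling point.
```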

To flash back to another of my past writings: this recent report, when compared to what Gensler said in 2010 (and others, notably Timmy!, were singing from the same hymnal), shows that clearing and collateral mandates were a bill of goods. These mandates were sold on the basis of lies large and small. And the biggest lie–and I said so at the time–was that clearing would reduce the interconnectivity of the financial system. So the FSB/BIS/IOSCO have called bullshit on Gary Gensler. Unfortunately, seven years too late.

 


July 1, 2017

All Flaws Great and Small, Frankendodd Edition

On Wednesday I had the privilege to deliver the keynote at the FOW Trading Chicago event. My theme was the fundamental flaws in Frankendodd–you’re shocked, I’m sure.

What I attempted to do was to categorize the errors. I identified four basic types.

Unintended consequences contrary to the objectives of DFA. This could also be called “counter-intended consequences”–not just unintended, but the precise opposite of the stated intent. The biggest example is, well, related to bigness. If you wanted to summarize a primary objective of DFA, it would be “to reduce the too big to fail problem.” Well, the very nature of DFA means that in some ways it exacerbates TBTF. Most notably, the resulting regulatory burdens actually favor scale, because they impose largely fixed costs. I didn’t mention this in my talk, but a related effect is that increasing regulation leads to greater influence activities by the regulated, and for a variety of reasons this tends to favor the big over the medium and small.
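
The scale economics are trivial, but worth making explicit. A stylized sketch (the cost and asset figures are assumptions, not estimates of actual compliance costs):

```python
# Stylized point: a largely fixed compliance cost weighs far more heavily,
# per dollar of assets, on small firms than on large ones. Figures assumed.

fixed_compliance_cost = 50e6   # assumed annual fixed cost of the compliance apparatus

banks = [("community bank", 1e9), ("regional bank", 50e9), ("money-center bank", 2000e9)]
for name, assets in banks:
    cost_in_bp = fixed_compliance_cost / assets * 1e4   # basis points of assets per year
    print(f"{name:17s}: {cost_in_bp:7.2f} bp of assets")
# The burden falls from 500 bp for the small bank to a fraction of a basis
# point for the giant: largely fixed costs are a subsidy to scale.
```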

Perhaps the most telling example of the perverse effects of DFA is that it has dramatically increased concentration among FCMs. This exacerbates a variety of sources of systemic risk, including concentration risk at CCPs; difficulties in managing defaulted positions and porting the positions of the customers of troubled FCMs; and greater interconnections across CCPs. Concentration also fundamentally undermines the ability of CCPs to mutualize default risk. It can also create wrong-way risks as the big FCMs are in some cases also sources of liquidity support to CCPs.

I could go on.

Creation of new risks due to misdiagnoses of old risks. The most telling example here is the clearing and collateral mandates, which were predicated on the view that too much credit was extended via OTC derivatives transactions. Collateral and netting were expected to reduce this credit risk.

This is a category error. For one thing, it embodies a fallacy of composition: reducing credit in one piece of an interconnected financial system that possesses numerous ways to create credit exposures does not necessarily reduce credit risk in the system as a whole. For another, even to the extent that reducing credit extended via derivatives transactions reduces overall credit exposures in the financial system, it does so by creating another risk–liquidity risk. This risk is in my view more pernicious for many reasons. One reason is that it is inherently wrong-way in nature: the mandates increase demands for liquidity precisely during those periods in which liquidity supply typically contracts. Another is that it increases the tightness of coupling in the financial system. Tight coupling increases the risk of catastrophic failure, and makes the system more vulnerable to a variety of different disruptions (e.g., operational risks such as the temporary failure of a part of the payments system).

As the Clearing Cassandra I warned about this early and often, to little avail–and indeed, often to derision and scorn. Belatedly regulators are coming to an understanding of the importance of this issue. Fed governor Jerome Powell recently emphasized this issue in a speech, and recommended CCPs engage in liquidity stress testing. In a scathing report, the CFTC Inspector General criticized the agency’s cost-benefit analysis of its margin rules for non-cleared swaps, based largely on its failure to consider liquidity effects. (The IG report generously cited my work several times.)

But these are at best palliatives. The fundamental problem is inherent in the super-sizing of clearing and margining, and that problem is here to stay.

Imposition of “solutions” to non-existent problems. The best examples of this are the SEF mandate and position limits. The mode of execution of OTC swaps was not a source of systemic risk, and was not problematic even for reasons unrelated to systemic risk. Mandating a change to the freely-chosen modes of transaction execution has imposed compliance costs, and has also resulted in a fragmented swaps market: those who can escape the mandate (e.g., European banks trading € swaps) have done so, leading to bifurcation of the market for € swaps, which (a) reduces competition (another counter-intended consequence), and (b) reduces liquidity (also counter-intended).

The non-existence of a problem that position limits could solve is best illustrated by the pathetically flimsy justification for the rule set out in the CFTC’s proposal: the main example the CFTC mentioned is the Hunt silver episode. As I said during my talk, this is ancient history: when do we get to the Trojan War? If anything, the Hunts are the exception that proves the rule. The CFTC also pointed to Amaranth, but (a) failed to show that Amaranth’s activities caused “unreasonable and unwarranted price fluctuations,” and (b) did not demonstrate (unlike in the Hunt case) that Amaranth’s financial distress posed any threat to the broader market or any systemic risk.

It is sickly amusing that the CFTC touts that, based on historical data, the proposed limits would constrain few, if any, market participants. In other words, an entire industry must bear the burden of complying with a rule that the CFTC itself says would seldom be binding. Makes total sense, and surely passes a rigorous cost-benefit test! Constraining positions is unlikely to affect materially the likelihood of “unreasonable and unwarranted price fluctuations”. Regardless, positions are not likely to be constrained. Meaning that the probability that the regulation reduces such price fluctuations is close to zero, if not exactly equal to zero. Yet there would be an onerous, and ongoing, cost of compliance. Not to mention that when the regulation would in fact bind, it would potentially constrain efficient risk transfer.

The “comma and footnote” problem. Such a long and dense piece of legislation, and the long and detailed regulations that it has spawned, inevitably contain problems that can lead to protracted disputes, and/or unpleasant surprises. The comma I refer to is in the position limit language of the DFA itself: as noted in the court decision that stymied the original CFTC position limit rule, the placement of the comma affects whether the language in the statute requires the CFTC to impose limits, or merely gives it the discretionary authority to do so in the event that it makes an explicit finding that the limits are required to reduce unwarranted and unreasonable price fluctuations. The footnotes I am thinking of were in the SEF rule: footnote 88 dramatically increased the scope of the rule, while footnote 513 circumscribed it.

And new issues of this sort crop up regularly, almost 7 years after the passage of Dodd-Frank. Recently Risk highlighted the fact that in its proposal for capital requirements on swap dealers, the CFTC (inadvertently?) potentially made it far more costly for companies like BP and Shell to become swap dealers. Specifically, whereas the Fed defines a financial company as one in which more than 85 percent of its activities are financial in nature, the CFTC proposes that a company can take advantage of more favorable capital requirements if its financial activities are less than 15 percent of its overall activities. Meaning, for example, a company with 80 percent financial activity would not count as a financial company under Fed rules, but would under the proposed CFTC rule. This basically makes it impossible for predominately commodity companies like BP and Shell to take advantage of preferential capital treatment specifically included for them and their ilk in DFA. To the extent that these firms decide to incur costs (higher capital costs, or the cost of reorganizing their businesses to escape the rule’s bite) and become swap dealers nonetheless, that cost will not generate any benefit. To the extent that they decide that it is not worth the cost, the swaps market will be more concentrated and less competitive (more counter-intended effects).
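
The mismatch is easy to see in a small sketch using the thresholds as described in the Risk piece. This is a stylized reading of the two tests, not the rule text:

```python
# Stylized comparison of the two tests described above. "Financial share" is
# the fraction of a firm's activities that are financial in nature.

def financial_company_under_fed_test(financial_share):
    # As described above: a financial company if more than 85% financial.
    return financial_share > 0.85

def favorable_treatment_under_cftc_proposal(financial_share):
    # As described above: favorable capital treatment only if financial
    # activities are less than 15% of the total.
    return financial_share < 0.15

for share in (0.10, 0.50, 0.80, 0.90):
    fed = "financial" if financial_company_under_fed_test(share) else "non-financial"
    cftc = "favorable" if favorable_treatment_under_cftc_proposal(share) else "not favorable"
    print(f"financial share {share:4.0%}: Fed test -> {fed:13s} | CFTC proposal -> {cftc}")
# A firm that is, say, 80% financial is non-financial under the Fed test yet
# is denied the favorable treatment under the proposal: that is the gap the
# predominantly commodity players fall into.
```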

The position limits proposed regs provide a further example of this devil-in-the-details problem. The idea of a hedging carveout is eminently sensible, but the specifics of the CFTC’s hedging exemptions were unduly restrictive.

I could probably add more categories to the list. Different taxonomies are possible. But I think the foregoing is a useful way of thinking about the fundamental flaws in Frankendodd.

I’ll close with something that could make you feel better–or worse! For all the flaws in Frankendodd, MiFID II and EMIR make it look like a model of legislative and regulatory wisdom. The Europeans have managed to make errors in all of these categories–only more of them, and more egregious ones. For instance, as bad as the US position limit proposal is, it pales in comparison to the position limit regulations that the Europeans are poised to inflict on their firms and their markets.

 


May 6, 2017

Son of Glass-Steagall: A Nostrum, Prescribed by Trump

Filed under: Economics,Financial crisis,History,Politics,Regulation — The Professor @ 7:30 pm

Apologies for the posting hiatus. I was cleaning out my mother’s house in preparation for her forthcoming move, a task that vies with the Labors of Hercules. I intended to post, but I was just too damn tired at the end of each day.

I’ll ease back into things by giving a heads up on my latest piece in The Hill, in which I argue that reviving Glass-Steagall’s separation of commercial and investment banking is a solution in search of a problem. One thing that I find telling is that the problem the original was intended to address in the 1930s was totally different from the one that it is intended to address today. Further, the circumstances in the 1930s were wildly different from present conditions.

In the 1930s, the separation was intended to prevent banks from fobbing off bad commercial and sovereign loans to unwitting investors through securities underwriting. This problem in fact did not exist: extensive empirical evidence has shown that debt securities underwritten by universal banks (like J.P. Morgan) were of higher quality and performed better ex post than debt underwritten by stand alone investment banks. Further, the  most acute problem of the US banking system was not too big to fail, but too small to succeed. The banking crisis of the 1930s was directly attributable to the fragmented nature of the US banking system, and the proliferation of thousands of small, poorly diversified, thinly capitalized banks. The bigger national banks, and in particular the universal ones, were not the problem in 1932-33. Further, as Friedman-Schwartz showed long ago, a blundering Fed implemented policies that were fatal to such a rickety system.

In contrast, today’s issue is TBTF. But, as I note in The Hill piece, and have written here on occasion, Glass-Steagall separation would not have prevented the financial crisis. The institutions that failed were either standalone investment banks, GSE’s, insurance companies involved in non-traditional insurance activities, or S&Ls. Universal banks that were shaky (Citi, Wachovia) were undermined by traditional lending activities. Wachovia, for instance, was heavily exposed to mortgage lending through its acquisition of a big S&L (Golden West Financial). There was no vector of contagion between the investment banking activities and the stability of any large universal bank.

As I say in The Hill, whenever the same prescription is given for wildly different diseases, it’s almost certainly a nostrum, rather than a cure.

Which puts me at odds with Donald Trump, for he is prescribing this nostrum. Perhaps in an effort to bring more clicks to my oped, the Monday after it appeared Trump endorsed a Glass-Steagall revival. This was vintage Trump. You can see his classic MO. He has a vague idea about a problem–TBTF. Not having thought deeply about it, he seizes upon a policy served up by one of his advisors (in this case, Gary Cohn, ex-Goldman–which would benefit from a GS revival), and throws it out there without much consideration.

The main bright spot in the Trump presidency has been his regulatory rollback, in part because this is one area in which he has some unilateral authority. Although I agree generally with this policy, I am under no illusions that it rests on deep intellectual foundations. His support of Son of Glass-Steagall shows this, and illustrates that no one (including Putin!) should expect an intellectually consistent (or even coherent) policy approach. His is, and will be, an instinctual presidency. Sometimes his instincts will be good. Sometimes they will be bad. Sometimes his instincts will be completely contradictory–and the call for a return to a very old school regulation in the midst of a largely deregulatory presidency shows that quite clearly.

 


February 14, 2017

“First, Kill All the Economists!” Sounds Great to Some, But It Won’t Fix Monetary Policy

Filed under: Economics,Financial crisis,Financial Crisis II,History,Regulation — The Professor @ 9:00 pm

A former advisor to the Dallas Fed has penned a book blasting the Fed for being ruled by a “tribe” of insular egghead economics PhDs:

In her book, Ms. Booth describes a tribe of slow-moving Fed economists who dismiss those without high-level academic credentials. She counts Fed Chairwoman Janet Yellen and former Fed leader Ben Bernanke among them. The Fed, Mr. Bernanke and the Dallas Fed declined to comment.

The Fed’s “modus operandi” is defined by “hubris and myopia,” Ms. Booth writes in an advance copy of the book. “Central bankers have invited politicians to abdicate leadership authority to an inbred society of PhD academics who are infected to their core with groupthink, or as I prefer to think of it: ‘groupstink.’”

“Global systemic risk has been exponentially amplified by the Fed’s actions,” Ms. Booth writes, referring to the central bank’s policies holding interest rates very low since late 2008. “Who will pay when this credit bubble bursts? The poor and middle class, not the elites.”

Ms. Booth is an acolyte of her former boss, Dallas Fed president Richard Fisher, who said, “If you rely entirely on theory, you are not going to conduct the right policy, because policies have consequences.”

I have very mixed feelings about this. There is no doubt that, under the guidance of academics, including (but not limited to) Ben Bernanke, the Fed has made some grievous errors. But it is a false choice to claim that Practical People can do better without a coherent theoretical framework. For what is the alternative to theory? Heuristics? Rules of thumb? Experience?

Two thinkers usually in conflict–Keynes and Hayek–were of one mind on this issue. Keynes famously wrote:

Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.

For his part, Hayek said “without a theory the facts are silent.”

Everybody–academic economist or no–is beholden to some theory or another. It is a conceit of non-academics to believe that they are “exempt from any intellectual influence.” Indeed, the advantage of following an explicit theoretical framework is that its assumptions and implications are transparent and (usually) testable, and therefore can be analyzed, challenged, and improved. An inchoate and largely informal “practical” mindset (which often is a hodgepodge of condensed academic theories) is far more amorphous and difficult to understand or challenge. (Talk to a trader about monetary policy sometime if you doubt me.)

Indeed, Ms. Booth gives evidence of this. Many have been prophesying doom as a result of the Fed’s (and the ECB’s) post-2008 policies: Ms. Booth is among them. I will confess to having harbored such concerns, and indeed, challenged Ben Bernanke on this at a Fed conference on Jekyll Island in May, 2009. It may happen sometime, and I believe that ZIRP has indeed distorted the economy, but my fears (and Ms. Booth’s) have not been realized in eight-plus years.

Ms. Booth’s critique of pre-crisis Fed policy is also predicated on a particular theoretical viewpoint, namely, that the Fed fueled a credit bubble prior to the Crash. But as scholars as diverse as Scott Sumner and John Taylor have argued, Fed policy was actually too tight prior to the crisis.

Along these lines, one could argue that the Fed’s most egregious errors are not the consequence of deep DSGE theorizing, but instead result from the use of rules of thumb and a failure to apply basic economics. As Scott Sumner never tires of saying (and sadly, must keep repeating because those who are slaves to the rule of thumb are hard of hearing and learning), the near universal practice of using interest rates as a measure of the state of monetary policy is a category error: befitting a Chicago-trained economist, Scott cautions never to argue from a price change, but to look for the fundamental supply and demand forces that cause a price (e.g., an interest rate) to be high or low. (As a Chicago guy, I have been beating the same drum for more than 30 years.)

And some historical perspective is in order. The Fed’s history is a litany of fumbles, some relatively minor, others egregious. Blame for the Great Depression and the Great Inflation can be laid directly at the Fed’s feet. Its most notorious failings were not driven by the prevailing academic fashion, but occurred under the leadership of practical people, mainly people with a banking background,  who did quite good impressions of madmen in authority. Ms. Booth bewails the “hubris of Ph.D. economists who’ve never worked on the Street or in the City,” but people who have worked there have screwed up monetary policy when they’ve been in charge.

As tempting as it may sound, “First, kill all the economists!” is not a prescription for better monetary policy. Economists may succumb to hubris (present company excepted, of course!) but the real hubris is rooted in the belief that central banks can overcome the knowledge problem, and can somehow manage entire economies (and the stability of the financial system). Hayek pointedly noted the “fatal conceit” of central planning. That conceit is inherent in central banking, too, and is not limited to professionally trained economists. Indeed, I would venture that academics are less vulnerable to it.

The problem, therefore, is not who captains the monetary ship. The question is whether anyone is capable of keeping such a huge and unwieldy vessel off the shoals. Experience–and theory!–suggests no.

 


February 4, 2017

The Regulatory Road to Hell

One of the most encouraging aspects of the new administration is its apparent commitment to roll back a good deal of regulation. Pretty much the entire gamut of regulation is under examination, and even Trump’s nominee for the Supreme Court, Neil Gorsuch, represents a threat to the administrative state due to his criticism of Chevron Deference (under which federal courts are loath to question the substance of regulations issued by US agencies).

The coverage of the impending regulatory rollback is less than informative, however. Virtually every story about a regulation under threat frames the issue around the regulation’s intent. The Fiduciary Rule “requires financial advisers to act in the best interests of their clients.” The Stream Protection Rule prevents companies from “dumping mining waste into streams and waterways.” The SEC rule on reporting of payments to foreign governments by energy and minerals firms “aim[s] to address the ‘resource curse,’ in which oil and mineral wealth in resource-rich countries flows to government officials and the upper classes, rather than to low-income people.” Dodd-Frank is intended to prevent another financial crisis. And on and on.

Who could be against any of these things, right? This sort of framing therefore makes those questioning the regulations out to be ogres, or worse, favoring financial skullduggery, rampant pollution, bribery and corruption, and reckless behavior that threatens the entire economy.

But as the old saying goes, the road to hell is paved with good intentions, and that is definitely true of regulation. Regulations often have unintended consequences–many of which are directly contrary to the stated intent. Furthermore, regulations entail costs as well as benefits, and just focusing on the benefits gives a completely warped understanding of the desirability of a regulation.

Take Frankendodd. It is bursting with unintended consequences. Most notably, quite predictably (and predicted here, early and often) the huge increase in regulatory overhead actually favors consolidation in the financial sector, and reinforces the TBTF problem. It also has been devastating to smaller community banks.

DFA also works at cross purposes. Consider the interaction between the leverage ratio, which is intended to ensure that banks are sufficiently capitalized, and the clearing mandate, which is intended to reduce systemic risk arising from the derivatives markets. The interpretation of the leverage ratio (notably, treating customer margins held by FCMs as an FCM asset, which increases the amount of capital it must hold due to the leverage ratio) makes offering clearing services more expensive. This is exacerbating the marked consolidation among FCMs, which is contrary to the stated purpose of Dodd-Frank. Moreover, it means that some customers will not be able to find clearing firms, or will find using derivatives to manage risk prohibitively expensive. This undermines the ability of the derivatives markets to allocate risk efficiently.
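
A minimal sketch of that interaction, with the leverage ratio requirement and balance sheet figures assumed purely for illustration:

```python
# How counting segregated client margin in the leverage exposure raises the
# capital an FCM must hold against its clearing business. Figures assumed.

leverage_ratio_requirement = 0.05   # assumed minimum ratio of capital to leverage exposure
own_exposure  = 20e9                # FCM's own leverage exposure (assumed)
client_margin = 30e9                # segregated client margin held for cleared trades (assumed)

capital_excluding_margin = leverage_ratio_requirement * own_exposure
capital_including_margin = leverage_ratio_requirement * (own_exposure + client_margin)

print(f"capital required, client margin excluded: ${capital_excluding_margin/1e9:.1f}bn")
print(f"capital required, client margin included: ${capital_including_margin/1e9:.1f}bn")
# The difference is capital the FCM must hold against money it has no claim
# on, which is why clearing services get more expensive or disappear.
```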

Therefore, to describe regulations by their intentions, rather than their effects, is highly misleading. Many of the effects are unintended, and directly contrary to the explicit intent.

One of the effects of regulations is that they impose costs, both direct and indirect.  A realistic appraisal of regulation requires a thorough evaluation of both benefits and costs. Such evaluations are almost completely lacking in the media coverage, except to cite some industry source complaining about the cost burden. But in the context of most articles, this comes off as special pleading, and therefore suspect.

Unfortunately, much cost benefit analysis–especially that carried out by the regulatory agencies themselves–is a bad joke. Indeed, since the agencies in question often have an institutional or ideological interest in their regulations, their “analyses” should be treated as a form of special pleading of little more reliability than the complaints of the regulated. The proposed position limits regulation provides one good example of this. Costs are defined extremely narrowly, benefits very broadly. Indirect impacts are almost completely ignored.

As another example, Tyler Cowen takes a look into the risible cost benefit analysis behind the Stream Protection Rule, and finds it seriously wanting. Even though he is sympathetic to the goals of the regulation, and even to the largely tacit but very real meta-intent (reducing the use of coal in order to advance  the climate change agenda), he is repelled by the shoddiness of the analysis.

Most agency cost benefit analysis is analogous to asking pupils to grade their own work, and gosh darn it, wouldn’t you know, everybody’s an A student!

This is particularly problematic under Chevron Deference, because courts seldom evaluate the substance of the regulations or the regulators’ analyses. There is no real judicial check and balance on regulators.

The metastasizing regulatory and administrative state is a very real threat to economic prosperity and growth, and to individual freedom. The lazy habit of describing regulations and regulators by their intent, rather than their effects, shields them from the skeptical scrutiny that they deserve, and facilitates this dangerous growth. If the Trump administration and Congress proceed with their stated plans to pare back the Obama administration’s myriad and massive regulatory expansion, this intent-focused coverage will be one of the biggest obstacles that they will face.  The media is the regulators’ most reliable paving contractor  for the highway to hell.


October 4, 2016

Going Deutsche: Beware Politicians Adjudicating Political Bargains Gone Bad

A few years ago, when doing research on the systemic risk (or not) of commodity trading firms, I thought it would be illuminating to compare these firms to major banks, to demonstrate that (a) commodity traders were really not that big, when compared to systemically important financial institutions, and (b) their balance sheets, though leveraged, were not as geared as banks and unlike banks did not involve the maturity and liquidity transformations that make banks subject to destabilizing runs. One thing that jumped out at me was just what a monstrosity Deutsche Bank was, in terms of size and leverage and Byzantine complexity.

My review (conducted in 2012 and again in 2013) looked back several years.  For instance, in 2013, the bank’s leverage ratio was around 37 to 1, and its total assets were over $2 trillion.
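
The arithmetic behind that impression is simple. Using the 2013 figures just cited, rounded:

```python
# Back-of-the-envelope on the 2013 figures cited above, rounded.

total_assets = 2.0e12    # a bit over $2 trillion; $2.0tn used for round numbers
leverage     = 37.0      # assets divided by equity

implied_equity  = total_assets / leverage
wipeout_decline = 1.0 / leverage     # asset-value decline that exhausts equity

print(f"implied equity: ${implied_equity/1e9:.0f}bn")
print(f"asset-value decline that wipes out equity: {wipeout_decline:.1%}")
# Roughly $54bn of equity supporting $2tn of assets: a decline in asset
# values of less than 3% would exhaust it.
```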

Since then, Deutsche has reduced its leverage somewhat, but it is still huge, highly leveraged (especially in comparison to its American peers), and deeply interconnected with all other major financial institutions, and a plethora of industrial and service firms.

This makes its current travails a source of concern. The stock price has fallen to record low levels, and its CDS spreads have spiked to post-crisis highs. The CDS curve is also flattening, which is particularly ominous. Last week, Bloomberg reported signs of a mini-run, not by depositors, but by hedge funds and others who were moving collateral and cleared derivatives positions to other FCMs. (I’ve seen no indication that people are looking to novate OTC deals in order to replace Deutsche as a counterparty, which would be a real harbinger of problems.)

Ironically, the current crisis was sparked by chronic indigestion from the last crisis, namely the legal and regulatory issues related to US subprime. The US Department of Justice presented a settlement demand of $14 billion, which, if paid, would put the bank at risk of breaching its regulatory capital requirements: the bank has only reserved $5 billion. Deutsche’s stock price and CDS have lurched up and down over the past few days, driven mainly by news regarding how these legal issues would be resolved.

The $14 billion US demand is only one of Deutsche’s sources of legal agita, most of which are also the result of pre-crisis and crisis issues, such as the IBOR cases and charges that it facilitated accounting chicanery at Italian banks.

Deutsche’s problems are political poison in Germany, for Merkel in particular. She is in a difficult situation. Bailouts are no more popular in Europe than in the US, but if anyone is too big to fail, it is Deutsche. Serious problems there could portend another financial crisis, and one in which the epicenter would be Germany. Merkel and virtually all other politicians in Germany have adamantly stated there would be no bailouts: politically, they have to. But such unconditional statements are not credible–that’s the essence of the TBTF problem. If Deutsche teeters, Germany–no doubt aided by the ECB and the Fed–will be forced to act. This would have seismic political effects, particularly in Europe, and especially in southern Europe, which believes that it has been condemned to economic penury to protect German economic interests, not least of which is Deutsche Bank.

No doubt the German government, the Bundesbank, and the ECB are crafting bailouts that don’t look like bailouts–at least if you don’t look too closely. One idea I saw floated was to sell off Deutsche assets to other entities, with the asset values guaranteed. Since direct government guarantees would be too transparent (and perhaps contrary to EU law), no doubt the guarantees will be costumed in some way as well.

The whole mess points out the inherently political nature of banking, and how the political bargain (in the phrase of Calomiris and Haber in Fragile by Design) has changed. As they show quite persuasively (as have others, such as Raghuram Rajan), the pre-crisis political bargain was that banks would facilitate income redistribution policy by providing credit to low income individuals. This seeded the crisis (though like any complex event, there were myriad other contributing causal factors), the political aftershocks of which are being felt to this day. Banking became a pariah industry, as the very large legal settlements extracted by governments indicate.

The difficulty, of course, is that banks are still big and systemically important, and as the Deutsche Bank situation demonstrates, punishing for past misdeeds that contributed to the last crisis could, if taken too far, create a new one. This is particularly true in the Brave New World of post-crisis monetary policy, with its zero or negative interest rates, which makes it very difficult for banks to earn a profit by doing business the old fashioned way (borrow at 3, lend at 6, hit the links by 3) as politicians claim that they desire.

It is definitely desirable to have mechanisms to hold financial malfeasors accountable, but the Deutsche episode illustrates several difficulties. The first is that even the biggest entities can be judgment proof, and imposing judgments on them can have disastrous economic externalities. Another is that there is a considerable degree of arbitrariness in the process, and the results of the process. There is little due process here, and the risks and costs of litigation mean that the outcome of attempts to hold bankers accountable is the result of a negotiation between the state and large financial institutions that is carried out in a highly politicized environment in which emotions and narratives are likely to trump facts. There is room for serious doubt about the quality of justice that results from this process. Waving multi-billion dollar scalps may be emotionally and politically satisfying, but arbitrariness in the process and the result means that the law and regulation will not have an appropriate deterrence effect. If it is understood that fines are the result of a political lottery, the link between conduct and penalty is tenuous, at best, meaning that the penalties will be a very poor way of deterring bad conduct.

Further, it must always be remembered that what happened in the 2000s (and what happened prior to every prior banking crisis) was the result of a political bargain. Holding bankers to account for abusing the terms of the bargain is fine, but unless politicians and regulators are held to account, there will be future political bargains that will result in future crises. To have a co-conspirator in the deals that culminated in the financial crisis–the US government–hold itself out as the judge and jury in these matters will not make things better. It is likely to make things worse, because it only increases the politicization of finance. Since that politicization is at the root of financial crises, that is a disturbing development indeed.

So yes, bankers should be at the bar. But they should not be alone. And they should be joined there by the very institutions who presume to bring them to justice.


September 16, 2016

De Minimis Logic

CFTC Chair Timothy Massad has come out in support of a one year delay of the lowering of the de minimis swap dealer exemption notional amount from $8 billion to $3 billion. I recall Coase  (or maybe it was Stigler) writing somewhere that an economist could pay for his lifetime compensation by delaying implementation of an inefficient law by even a day. By that reckoning, by delaying the step down of the threshold for a year Mr. Massad has paid for the lifetime compensation of his progeny for generations to come, for the de minimis threshold is a classic example of an inefficient law. Mr. Massad (and his successors) could create huge amounts of wealth by delaying its implementation until the day after forever.

There are at least two major flaws with the threshold. The first is that there is a large fixed cost to become a swap dealer. Small to medium-sized swap traders who avoid the obligation of becoming swap dealers under the $8 billion threshold will not avoid it under the lower threshold. Rather than incur the fixed cost, many of those who would be caught with the lower threshold will decide to exit the business. This will reduce competition and increase concentration in the swap market. This is perversely ironic, given that one ostensible purpose of Frankendodd (which was trumpeted repeatedly by its backers) was to increase competition and reduce concentration.

The second major flaw is that the rationale for the swap dealer designation, and the associated obligations, is to reduce risk. Big swap dealers mean big risk, and to reduce that risk, they are obligated to clear, to margin non-cleared swaps, and hold more capital. But notional amount is a truly awful measure of risk. $X billion of vanilla interest rate swaps differ in risk from $X billion of CDS index swaps which differ in risk from $X billion of single name CDS which differ in risk from $X billion of oil swaps. Hell, $X billion of 10 year interest rate swaps differ in risk from $X billion of 2 year interest rate swaps. And let’s not even talk about the variation across diversified portfolios of swaps with the same notional values. So notional does not match up with risk in a discriminating way.  Further, turnover doesn’t measure risk very well either.
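
To see how poorly notional tracks risk even within a single product class, compare the rate sensitivity (PV01) of 2 year and 10 year par swaps with identical notionals, computed on a flat curve. A simple sketch, with the 2 percent rate level assumed:

```python
# Same notional, very different risk: PV01 of a 2y versus a 10y par swap on
# a flat curve. The 2% rate level is an assumption.

def pv01(notional, tenor_years, rate=0.02):
    """Approximate value change of a par swap for a one basis point rate move."""
    annuity = sum(1.0 / (1.0 + rate) ** t for t in range(1, tenor_years + 1))
    return notional * annuity * 1e-4

notional = 10e9   # $10bn of each
for tenor in (2, 10):
    print(f"{tenor:2d}y swap, $10bn notional: PV01 of about ${pv01(notional, tenor)/1e6:.1f}m per bp")
print(f"risk ratio, 10y to 2y: {pv01(notional, 10) / pv01(notional, 2):.1f}x")
# Identical notionals, several-fold different rate risk, and the gap across
# product classes (rates vs. CDS vs. oil) is larger still.
```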

But hey! We can measure notional! So notional it is! Yet another example of the regulatory drunk looking for his keys under the lamppost because that’s where the light is.

So bully for Chairman Massad. He has delayed implementation of a regulation that will do the opposite of some of the things it is intended to do, and merely fails to do other things it is supposed to do. Other than that, it’s great!


August 20, 2016

On Net, This Paper Doesn’t Tell Us Much About What We Need to Know About the Effects of Clearing

Filed under: Clearing,Derivatives,Economics,Financial crisis,Politics,Regulation — The Professor @ 4:26 pm

A recent Office of Financial Research paper by Samim Ghamami and Paul Glasserman asks “Does OTC Derivatives Reform Incentivize Central Clearing?” Their answer is, probably not.

My overarching comment is that the paper is a very precise and detailed answer to maybe not the wrong question, exactly, but very much a subsidiary one. The more pressing questions include: (i) Do we want to favor clearing vs. bilateral? Why? What metric tells us that is the right choice? (The paper takes the answer to this question as given, and given as “yes.”) (ii) How do the different mechanisms affect the allocation of risk, including the allocation of risk outside the K banks that are the sole concern in the paper? (iii) How will the rules affect the scale of derivatives trading (the paper takes positions as given) and the allocation across cleared and bilateral instruments? (iv) Following on (ii) and (iii) will the rules affect risk management by end-users and what is the implication of that for the allocation of risk in the economy?

Item (iv) has received too little attention in the debates over clearing and collateral mandates. To the extent that clearing and collateral mandates make it more expensive for end-users to manage risk, how will the end users respond? Will they adjust capital structures? Investment? The scale of their operations? How will this affect the allocation of risk in the broader economy? How will this affect output and growth?

The paper also largely ignores one of the biggest impediments to central clearing–the leverage ratio.  (This regulation receives one mention in passing.) The requirement that even segregated client margins be treated as assets for the purpose of calculating this ratio (even though the bank does not have a claim on these margins) greatly increases the capital costs associated with clearing, and is leading some banks to exit the clearing business or to charge fees that make it too expensive for some firms to trade cleared derivatives. This brings all the issues in (iv) to the fore, and demonstrates that certain aspects of the massive post-crisis regulatory scheme are not well thought out, and inconsistent.

Of course, the paper also focuses on credit risk, and does not address liquidity risk issues at all. Perhaps this is a push between bilateral vs. cleared in a world where variation margin is required for all derivatives transactions, but still. The main concern about clearing and collateral mandates (including variation margin) is that they can cause huge increases in the demand for liquidity precisely at times when liquidity dries up. Another concern is that collateral supply mechanisms that develop in response to the mandates create new interconnections and new sources of instability in the financial system.

The most disappointing part of the paper is that it focuses on netting economies as the driver of cost differences between bilateral and cleared trading, without recognizing that the effects of netting are distributive. To oversimplify only a little, the implication of the paper is that the choice between cleared and bilateral trading is driven by which alternative redistributes the most risk to those not included in the model.

Viewed from that perspective, things look quite different, don’t they? It doesn’t matter whether the answer to that question is “cleared” or “bilateral”–the result will be that if netting drives the answer, the answer will result in the biggest risk transfer to those not considered in the model (who can include, e.g., unsecured creditors and the taxpayers). This brings home hard the point that these types of analyses (including the predecessor of Ghamami-Glasserman, Zhu-Duffie) are profoundly non-systemic because they don’t identify where in the financial system the risk goes. If anything, they distract attention away from the questions about the systemic risks of clearing and collateral mandates. Recognizing that the choice between cleared and bilateral trading is driven by netting, and that netting redistributes risk, the question should be whether that redistribution is desirable or not. But that question is almost never asked, let alone answered.
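
A toy example makes the distributive point concrete. All of the figures are hypothetical:

```python
# Netting does not make default losses disappear; it reallocates them.
# A defaulting dealer owes its derivatives counterparty 100 and is owed 80
# by that counterparty; it also owes an unsecured creditor 100. The rest of
# the estate is worth 40. All figures hypothetical.

other_assets    = 40.0
owed_to_cpty    = 100.0   # dealer's gross obligation to the derivatives counterparty
owed_by_cpty    = 80.0    # counterparty's obligation to the dealer
unsecured_claim = 100.0

def pro_rata(estate, claims):
    rate = estate / sum(claims.values())
    return rate, {name: rate * amount for name, amount in claims.items()}

# No close-out netting: the counterparty pays what it owes into the estate
# and claims its gross receivable alongside the unsecured creditor.
rate_gross, rec_gross = pro_rata(
    other_assets + owed_by_cpty,
    {"derivatives counterparty": owed_to_cpty, "unsecured creditor": unsecured_claim},
)

# Close-out netting: the counterparty pays nothing and claims only its net
# exposure of 20; the estate is just the other assets.
rate_net, rec_net = pro_rata(
    other_assets,
    {"derivatives counterparty": owed_to_cpty - owed_by_cpty, "unsecured creditor": unsecured_claim},
)

print(f"no netting: recovery rate {rate_gross:.0%}, unsecured creditor gets {rec_gross['unsecured creditor']:.1f}")
print(f"netting   : recovery rate {rate_net:.0%}, unsecured creditor gets {rec_net['unsecured creditor']:.1f}")
# The unsecured creditor's recovery drops from 60 to about 33. The netting
# benefit to the derivatives counterparty is a transfer from claimants
# outside the netting set, not a reduction in total risk.
```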

One narrower, more technical aspect of the paper bothered me. G-G introduce the concept of a concentration ratio, which they define as the ratio of a firm’s contribution to the default fund to the firm’s value at risk used to determine the sizing of the default fund. They argue that the default fund under a cover two standard (in which the default fund can absorb the loss arising from the simultaneous defaults of the two members with the largest exposures) is undersized if the concentration ratio is less than one.

I can see their point, but its main effect is to show that the cover two standard is not joined up closely with the true determinants of the risk exposure of the default fund. Consider a CCP with N identical members, where N is large: in this case, the concentration ratio is small. Further, assume that member defaults are independent, and occur with probability p. The loss to the default fund conditional on the default of a given member is X. Then, the expected loss of the default fund is pNX, and under cover two, the size of the fund is 2X.  There will be some value of N such that for a larger number of members, the default fund will be inadequate. Since the concentration ratio varies inversely with N, this is consistent with the G-G argument.
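
That calculation is easy to make explicit. A minimal sketch, with N, p, and the per-member loss X purely illustrative:

```python
# The deliberately extreme case above, made explicit: N identical members,
# independent defaults with probability p, loss X per defaulting member, and
# a default fund sized at 2X under cover two. Parameters illustrative.

from math import comb

def prob_fund_exhausted(n, p):
    """P(more than two members default) when defaults are independent."""
    return 1.0 - sum(comb(n, k) * p**k * (1.0 - p) ** (n - k) for k in range(3))

p = 0.01
for n in (10, 50, 100, 300):
    expected_loss_over_fund = n * p / 2.0    # E[loss] / (2X) = pNX / 2X
    print(f"N = {n:3d}: P(loss exceeds cover-two fund) = {prob_fund_exhausted(n, p):.4f}, "
          f"E[loss]/fund = {expected_loss_over_fund:.2f}")
# As N grows (and the concentration ratio shrinks), the probability that more
# than two members default, and hence that a cover-two fund is breached,
# rises steadily.
```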

But this is a straw man argument, as these assumptions are obviously extreme and unrealistic. The default fund’s exposure is driven by the extreme tail of the joint distribution of member losses. What really matters here is tail dependence, which is devilish hard to measure. Cover two essentially assumes a particular form of tail dependence: if the 1st (2nd) largest exposure defaults, so will the 2nd (1st) largest, but it ignores what happens to the remaining members. The assumption of perfect tail dependence between risks 1 and 2 is conservative: ignoring risks 3 through N is not. Where things come out on balance is impossible to determine. Pace G-G, when N is large ignoring 3-to-N is likely very problematic, but whether this results in an undersized default fund depends on whether this effect is more than offset by the extreme assumption of perfect tail dependence between risks 1 and 2.
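
A quick simulation shows how much the answer turns on the assumed dependence structure rather than on N or the concentration ratio. The one-factor Gaussian copula and all of the parameters below are stand-in assumptions, not a claim about actual member portfolios:

```python
# How cover-two adequacy turns on tail dependence. Defaults are drawn from a
# one-factor Gaussian copula: rho = 0 is independence, higher rho means
# stronger tail dependence. All parameters are illustrative assumptions.

import random
from statistics import NormalDist, quantiles

def simulate_total_losses(n_members, p, rho, n_sims, seed=42):
    rng = random.Random(seed)
    default_threshold = NormalDist().inv_cdf(p)
    losses = []
    for _ in range(n_sims):
        common = rng.gauss(0.0, 1.0)
        n_defaults = sum(
            1 for _ in range(n_members)
            if rho ** 0.5 * common + (1.0 - rho) ** 0.5 * rng.gauss(0.0, 1.0) < default_threshold
        )
        losses.append(float(n_defaults))   # loss per defaulting member normalized to 1
    return losses

n_members, p = 50, 0.01
cover_two_fund = 2.0   # two identical members' losses
for rho in (0.0, 0.3, 0.7):
    losses = simulate_total_losses(n_members, p, rho, n_sims=20000)
    tail_loss = quantiles(losses, n=1000)[-1]    # roughly the 99.9th percentile
    breach_prob = sum(l > cover_two_fund for l in losses) / len(losses)
    print(f"rho = {rho:.1f}: 99.9% loss = {tail_loss:.0f}, P(loss exceeds cover-two fund) = {breach_prob:.4f}")
# With independence the extreme loss is a modest multiple of the cover-two
# fund; with strong tail dependence many members default together and the
# extreme loss dwarfs it. N (or the concentration ratio) alone says little.
```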

Without knowing more about the tail dependence structure, it is impossible to play Goldilocks and say that this default fund is too large,  this default fund is too small, and this one is just right by looking at N (or the concentration ratio) alone. But if we could confidently model the tail dependence, we wouldn’t have to use cover two–and we could also determine individual members’ appropriate contributions more exactly than relying on a pro-rata rule (because we could calculate each member’s marginal contribution to the default fund’s risk).

So cover two is really a confession of our ignorance. A case of sizing the default fund based on what we can measure, rather than what we would like to measure, a la the drunk looking for his keys under the lamppost, because the light is better there. Similarly, the concentration ratio is something that can be measured, and does tell us something about whether the default fund is sized correctly, but it doesn’t tell us very much. It is not a sufficient statistic, and may not even be a very revealing one. And how revealing it is may differ substantially between CCPs, because the tail dependence structures of members may vary across them.

In sum, the G-G paper is very careful, and precisely identifies crucial factors that determine the relative private costs of cleared vs. bilateral trading, and how regulations (e.g., capital requirements) affect these costs. But this is only remotely related to the question that we would like to answer, which is what are the social costs of alternative arrangements? The implicit assumption is that the social costs of clearing are lower, and therefore a regulatory structure which favors bilateral trading is problematic. But this assumes facts not in evidence, and ones that are highly questionable. Further, the paper (inadvertently) points out a troubling reality that should have been more widely recognized long ago (as Mark Roe and I have been arguing for years now): the private benefits of cleared vs. bilateral trading are driven by which offers the greatest netting benefit, which also just so happens to generate the biggest risk transfer to those outside the model. This is a truly systemic effect, but is almost always ignored.

In these models that focus on a subset of the financial system, netting is always a feature. In the financial system at large, it can be a bug. Would that the OFR started to investigate that issue.


August 5, 2016

Bipartisan Stupidity: Restoring Glass-Steagall

Filed under: Economics,Financial crisis,Financial Crisis II,Politics,Regulation — The Professor @ 6:35 pm

Both parties officially favor a restoration of Glass-Steagall, the Depression-era banking regulation that persisted until repealed under the Clinton administration in 1999. When both Parties agree on an issue, they are likely wrong, and that is the case here.

The homage paid to Glass-Steagall is totem worship, not sound economic policy. The reasoning appears to be that the banking system was relatively quiescent when Glass-Steagall was in place, and a financial crisis occurred within a decade after its repeal. Ergo, we can avoid financial crises by restoring G-S. This makes as much sense as blaming the tumult of the 60s on auto companies’ elimination of tail fins.

Glass-Steagall had several parts, some of which are still in existence. The centerpiece of the legislation was deposit insurance, which rural and small town banking interests had been pushing for years. Deposit insurance is still with us, and its effects are mixed, at best.

One of the parts of Glass-Steagall that was abolished was its limitation on bank groups: the 1933 Act made it more difficult to form holding companies of multiple banks as a way of circumventing branch banking restrictions that were predominant at the time. This was perverse because (1) the Act was ostensibly intended to prevent banking crises, and (2) the proliferation of unit banks due to restrictions on branch banking was one of the most important causes of the banking crisis that ushered in the Great Depression.

The contrast between the experiences of Canada and the United States is illuminating in this regard. Both countries were subjected to a huge adverse economic shock, but Canada’s banking system, which was dominated by a handful of banks that operated branches throughout the country, survived, whereas the fragmented US banking system collapsed. In the 1930s, too big to fail was less of a problem than too small to survive. The collapse of literally thousands of banks devastated the US economy, and this banking crisis ushered in the Depression proper. Further, the inability of branched national banks to diversify liquidity risk (as Canada’s banks were able to do) made the system more dependent on the Fed to manage liquidity shocks. That turned out to be a true systemic risk, when the Fed botched the job (as documented by Friedman and Schwartz). When the system is very dependent on one regulatory body, and that body fails, the effect of the failure is systemic.

The vulnerability of small unit banks was again demonstrated in the S&L fiasco of the 1980s (a crisis in which deposit insurance played a part).

So that part of Glass-Steagall should remain dead and buried.

The part of Glass-Steagall that was repealed, and which its worshippers are most intent on restoring, was the separation of securities underwriting from commercial banking and the limiting of banks’ securities holdings to investment grade instruments.

Senator Glass believed that the combination of commercial and investment banking contributed to the 1930s banking crisis. As is the case with many legislators, his fervent beliefs were untainted by actual evidence. The story told at the time (and featured in the Pecora Hearings) was that commercial banks unloaded their bad loans into securities, which they dumped on an unsuspecting investing public unaware that they were buying toxic waste.

There are only two problems with this story. First, even if true, it would mean that banks were able to get bad assets off their balance sheets, which should have made them more stable! Real money investors, rather than leveraged institutions, were wearing the risk, which should have reduced the likelihood of banking crises.

Second, it wasn’t true. Economists (including Kroszner and Rajan) have shown that securities issued by investment banking arms of commercial banks performed as well as those issued by stand-alone investment banks. This is inconsistent with the asymmetric information story.

Now let’s move forward almost 60 years and try to figure out whether the 2008 crisis would have played out much differently had investment banking and commercial banking been kept completely separate. Almost certainly not. First, the institutions in the US that nearly brought down the system were stand alone investment banks, namely Lehman, Bear Stearns, and Merrill Lynch. The first failed. The second two were absorbed into commercial banks, the first by having the Fed take on most of the bad assets, the second in a shotgun wedding that ironically proved to make the acquiring bank–Bank of America–much weaker. Goldman Sachs and Morgan Stanley were in dire straits, and converted into banks so that they could avail themselves of Fed support denied them as investment banks.

The investment banking arms of major commercial banks like JP Morgan did not imperil their existence. Citi may be something of an exception, but earlier crises (e.g., the Latin American debt crisis) proved that Citi was perfectly capable of courting insolvency even as a pure commercial bank in the pre-Glass-Steagall repeal days.

Second, and relatedly, because they could not take deposits, and therefore had to rely on short term hot money for funding, the stand-alone investment banks were extremely vulnerable to funding runs, whereas deposits are a “stickier,” more stable source of funding. We need to find ways to reduce reliance on hot funding, rather than encourage it.

Third, Glass-Steagall restrictions weren’t even relevant for several of the institutions that wreaked the most havoc–Fannie, Freddie, and AIG.

Fourth, insofar as the issue of limitations on the permissible investments of commercial banks is concerned, it was precisely investment grade paper–AAA and AAA plus, in fact–that got banks and investment banks into trouble. Capital rules treated such instruments favorably, and voila!, massive quantities of these instruments were engineered to meet the resulting demand. The way they were engineered, however, made them reservoirs of wrong way risk that contributed significantly to the 2008 doom loop.
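
The capital mechanics behind that demand are easy to sketch. The risk weights below are assumptions chosen for illustration, not the actual Basel numbers, which varied by approach and exposure type:

```python
# Why favorable capital treatment of highly rated paper created demand for
# engineered AAA instruments. The risk weights are illustrative assumptions,
# not the actual Basel numbers, which varied by approach and exposure type.

base_capital_ratio = 0.08    # capital as a share of risk-weighted assets
exposure = 10e9              # $10bn of credit exposure (assumed)

assumed_risk_weights = {
    "whole loans held on balance sheet": 1.00,    # assumed
    "engineered AAA-rated senior tranche": 0.20,  # assumed
}

for label, risk_weight in assumed_risk_weights.items():
    capital = base_capital_ratio * risk_weight * exposure
    print(f"{label:37s}: capital charge ${capital/1e9:.2f}bn")
# Under these assumed weights, repackaging the same credit risk into a AAA
# wrapper cuts the capital charge by a factor of five: a powerful incentive
# to manufacture AAA paper, whatever wrong-way risk it concentrated.
```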

In sum: the banking structures that Glass-Steagall outlawed didn’t contribute to the banking crisis that was the law’s genesis, and weren’t materially important in causing the 2008 crisis. Therefore, advocating a return to Glass-Steagall as a crisis prevention mechanism is wholly misguided. Glass-Steagall restrictions are largely irrelevant to preventing financial crises, and some of their effects–notably, the creation of an investment banking industry largely reliant on hot, short term money for funding–actually make crises more likely.

This is why I say that Glass-Steagall has a totemic quality. The reverence shown it is based on a fondness for the old gods who were worshipped during a time of relative economic quiet (even though that is the product of folk belief, because it ignores the LatAm, S&L, and Asian crises, among others, that occurred from 1933-1999). We had a crisis in 2008 because we abandoned the old gods, Glass and Steagall! If we only bring them back to the public square, good times will return! It is not based on a sober evaluation of history, economics,  or the facts.

An alternative tack is taken by Luigi Zingales. He advocates a return to Glass-Steagall in part based on political economy considerations, namely, that it will increase competition and reduce the political power of large financial institutions. As I argued in response to him over four years ago, these arguments are unpersuasive. I would add another point, motivated by reading Calomiris and Haber’s Fragile by Design: the political economy of a fragmented financial system can lead to disastrous results too. Indeed, the 1930s banking crisis was caused largely by the ubiquity of small unit banks and the failure of the Fed to provide liquidity to a system that was uniquely dependent on that support. Those small banks, as Calomiris and Haber show, used their political power to stymie the development of national branched banks that would have improved systemic stability. The S&L crisis was also stoked by the political power of many small thrifts.*

But regardless, both the Republican and Democratic Parties have now embraced the idea. I don’t sense any zeal in Congress to act on it, so perhaps the agreement of the Parties’ platforms on this issue will not result in a restoration of Glass-Steagall. Nonetheless, the widespread fondness for the 83-year-old Act should give pause to those who look to national politicians to adopt wise economic policies. That fondness is grounded in a variety of religious belief, not reality.

*My reading of Calomiris and Haber leads me to the depressing conclusion that the political economy of banking is almost uniformly dysfunctional, at all times and in all places. In part this is because the state looks to the banking system to facilitate its fiscal objectives. In part it is because politicians have viewed the banking system as an indirect way of supporting favored domestic constituencies when direct transfers to these constituencies are either politically impossible or constitutionally barred. In part it is because bankers exploit this symbiotic relationship to get political favors: subsidies, restrictions on competition, etc. Even the apparent successes of banking legislation and regulation are more the result of unique political conditions than of economically enlightened legislators. Canada’s banking system, for instance, was not the product of uniquely Canadian economic insight and political rectitude. Instead, it was the result of a political bargain that was driven by uniquely Canadian political factors, most notably the deep divide between English and French Canada. It was a venal and cynical political deal that just happened to have some favorable economic consequences, which were not intended and indeed were not necessarily even understood or foreseen by those who drafted the laws.

Viewed in this light, it is not surprising that the housing finance system in the US, which was the primary culprit for the 2008 crisis, has not been altered substantially. It was the product of a particular set of political coalitions that still largely exist.

The history of federal and state banking regulation in the US also should give pause to those who think a minimalist state in a federal system can’t do much harm. Banking regulation in the small government era was hardly ideal.


June 30, 2016

Financial Network Topology and Women of System: A Dangerous Combination

Filed under: Clearing,Derivatives,Economics,Financial crisis,Politics,Regulation — The Professor @ 7:43 pm

Here’s a nice article by Robert Henderson in the science magazine Nautilus which poses the question: “Can topology prevent the next financial crisis?” My short answer: No.  A longer answer–which I sketch out below–is that a belief that it can is positively dangerous.

The idea behind applying topology to the financial system is that financial firms are interconnected in a network, and these connections can be represented in a network graph that can be studied. At least theoretically, if you model the network formally, you can learn its properties–e.g., how stable is it? will it survive certain shocks?–and perhaps figure out how to make the network better.

Practically, however, this is an illustration of the maxim that a little bit of knowledge is a dangerous thing.

Most network modeling has focused on counterparty credit connections between financial market participants. This research has attempted to quantify these connections and graph the network, and ascertain how the network responds to certain shocks (e.g., the bankruptcy of a particular node), and how a reconfigured network would respond to these shocks.
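
To make that kind of exercise concrete, here is a toy sketch (made-up institutions, exposures, capital levels, and a 40% loss-given-default assumption, none of it drawn from real data or from any regulator’s model) in which bilateral credit exposures form a weighted directed graph and the bankruptcy of one node is propagated round by round:

# Toy counterparty network: all names, exposures, capital levels and the
# 40% loss-given-default are invented for illustration only.
# exposures[a][b] = a's credit exposure to b (what b owes a)
exposures = {
    "A": {"B": 45, "C": 10},
    "B": {"C": 30, "D": 20},
    "C": {"A": 15, "D": 25},
    "D": {"A": 5,  "B": 10},
}
capital = {"A": 20, "B": 10, "C": 25, "D": 12}
LGD = 0.4  # assumed fraction of an exposure lost when the counterparty defaults

def cascade(initial_default):
    """Propagate credit losses round by round until no new node fails."""
    defaulted = {initial_default}
    losses = {node: 0.0 for node in capital}
    while True:
        newly_defaulted = set()
        for node, claims in exposures.items():
            if node in defaulted:
                continue
            losses[node] = sum(LGD * amt for cp, amt in claims.items()
                               if cp in defaulted)
            if losses[node] > capital[node]:
                newly_defaulted.add(node)
        if not newly_defaulted:
            return defaulted, losses
        defaulted |= newly_defaulted

failed, losses = cascade("C")          # shock: node C goes bankrupt
print("defaulted nodes:", sorted(failed))
print("credit losses:  ", losses)

With these numbers C’s failure topples B and then A while D survives; nudge a couple of exposures or buffers and the cascade stops at C. Published models are far more elaborate, but the logic (exposures, buffers, rounds of propagation) is the same, and so are the limitations discussed below.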

There are many problems with this. One major problem–which I’ve been on about for years, and which I am quoted about in the Nautilus piece–is that counterparty credit exposure is only one of many types of connection in the financial network: liquidity is another source of interconnection. Furthermore, these network models typically ignore the nature of the connections between nodes. In the real world, nodes can be tightly coupled or loosely coupled. The stability features of tightly and loosely coupled networks can be very different even if their topologies are identical.

As a practical example, not only does mandatory clearing change the topology of a network, it also changes the tightness of the coupling through the imposition of rigid variation margining. Tighter coupling can change the probability of the failure of connections, and the circumstances under which these failures occur.
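
To see what tighter coupling means mechanically, consider a back-of-the-envelope sketch with invented positions, cash balances, and capital (not any actual CCP’s rulebook): the same price shock that is merely a paper loss in an uncollateralized bilateral trade becomes a same-day cash call under rigid variation margining, and a member that is solvent but short of cash fails on the call alone:

# Hypothetical positions, one-day price move, liquid assets and capital.
positions = {"M1": 500, "M2": -300, "M3": -200}
price_move = -0.04                         # assumed one-day move against the longs
liquid  = {"M1": 15, "M2": 40, "M3": 30}   # cash deliverable same day
capital = {"M1": 60, "M2": 50, "M3": 45}   # total loss-absorbing capacity

pnl = {m: pos * price_move for m, pos in positions.items()}

print("Cleared, rigid daily variation margin (tight coupling):")
for m, p in pnl.items():
    call = max(-p, 0.0)                    # cash owed to the CCP today
    verdict = "misses the call -> declared in default" if call > liquid[m] else "pays"
    print(f"  {m}: VM call {call:5.1f}, liquid {liquid[m]:5.1f} -> {verdict}")

print("Bilateral, uncollateralized (loose coupling):")
for m, p in pnl.items():
    loss = max(-p, 0.0)                    # a mark-to-market loss, no cash due today
    verdict = "insolvent" if loss > capital[m] else "absorbs the loss over time"
    print(f"  {m}: MTM loss {loss:5.1f}, capital {capital[m]:5.1f} -> {verdict}")

Same topology, same shock; only the rigidity of the cash flows differs, and that alone changes who fails and when.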

Another problem is that models frequently leave out some participants. As another practical example, network models of derivatives markets include the major derivatives counterparties, and find that netting reduces the likelihood of a cascade of defaults within that network. But netting achieves this by redistributing the losses to other parties who are not explicitly modeled. As a result, the model is incomplete, and gives an incomplete understanding of the full effects of netting.
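
A stylized numerical example (all figures invented) makes the redistribution explicit: close-out netting shrinks the loss borne by the derivatives counterparty inside the modeled network, while the defaulted dealer’s other creditors, who typically sit outside the model, bear a correspondingly larger one, and the total shortfall is unchanged:

# Invented balance sheet of a defaulted dealer and its claimants.
dealer_assets = 50.0   # remaining assets, excluding the 80 owed to it by C
owed_to_C     = 100.0  # dealer owes counterparty C on derivatives
owed_by_C     = 80.0   # C owes the dealer on other derivatives
owed_to_bonds = 100.0  # dealer owes outside, non-derivatives creditors

def pro_rata(assets, claims):
    """Split the estate's assets across claims in proportion to their size."""
    rate = min(assets / sum(claims.values()), 1.0)
    return {k: rate * v for k, v in claims.items()}

# Without netting: C pays its 80 into the estate in full and files a gross
# claim of 100 alongside the bondholders' 100.
recov = pro_rata(dealer_assets + owed_by_C, {"C": owed_to_C, "bonds": owed_to_bonds})
loss_C     = owed_to_C - recov["C"]        # C pays its 80 in full, so its loss is
                                           # the shortfall on its gross claim
loss_bonds = owed_to_bonds - recov["bonds"]
print(f"no netting: C loses {loss_C:5.1f}, bondholders lose {loss_bonds:5.1f}")

# With close-out netting: C's claim collapses to the net 20, and its 80
# payable never reaches the estate; bondholders still claim 100.
net_claim = owed_to_C - owed_by_C
recov = pro_rata(dealer_assets, {"C": net_claim, "bonds": owed_to_bonds})
loss_C     = net_claim - recov["C"]
loss_bonds = owed_to_bonds - recov["bonds"]
print(f"netting   : C loses {loss_C:5.1f}, bondholders lose {loss_bonds:5.1f}")

With these numbers the losses are 35 and 35 without netting versus roughly 12 and 58 with it: the derivatives network looks safer, but only because roughly 23 of loss has been shifted onto creditors the network model never sees.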

Thus, any network model is inherently a very partial one, and is therefore likely to be a very poor guide to understanding the network in all its complexity.

The limitations of network models of financial markets remind me of the satirical novel Flatland, where the inhabitants of Pointland, Lineland, and Flatland are flummoxed by higher-dimensional objects. A square finds it impossible to conceptualize a sphere, because he only observes the circular cross-section as it passes through his plane. But in financial markets the problem is much greater, because the dimensionality is immense and the objects are not regular and unchanging (like spheres) but irregular and constantly changing on many dimensions and time scales (e.g., nodes enter and exit or combine, nodes can expand or contract, and the connections between them change minute to minute).

This means that although network graphs may help us better understand certain aspects of financial markets, they are laughably limited as a guide to policy aimed at reengineering the network.

But frighteningly, the Nautilus article starts out with a story of Janet Yellen comparing a network graph of the uncleared CDS market (analogized to a tangle of yarn) with a much simpler graph of a hypothetical cleared market. Yellen thought it was self-evident that the simple cleared market was superior:

Yellen took issue with her ball of yarn’s tangles. If the CDS network were reconfigured to a hub-and-spoke shape, Yellen said, it would be safer—and this has been, in fact, one thrust of post-crisis financial regulation. The efficiency and simplicity of Kevin Bacon and Lowe’s Hardware is being imposed on global derivative trading.


God help us.

Rather than rushing to judgment, a la Janet, I would ask: “why did the network form in this way?” I understand perfectly that there is unlikely to be an invisible hand theorem for networks, whereby the independent and self-interested actions of actors result in a Pareto optimal configuration. There are feedbacks and spillovers and non-linearities. As a result, the concavity that drives the welfare theorems is notably absent. An Olympian economist is sure to identify “market failure,” and be mightily displeased.

But still, there is optimizing behavior going on, and connections are formed and nodes enter and exit and grow and shrink in response to profit signals that are likely to reflect costs and benefits, albeit imperfectly. Before rushing in to change the network, I’d like to understand much better why it came to be the way it is.

We have only a rudimentary understanding of how network configurations develop. Yes, models that specify simple rules of interaction between nodes can be simulated to produce networks that differ substantially from random networks. These models can generate features like the small world property. But it is a giant leap to go from that to understanding something as huge, complex, and dynamic as a financial system. This is especially true given that there are adjustment costs that give rise to hysteresis and path-dependence, as well as shocks that give rise to changes.
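
For the curious, here is what such a toy generative model looks like, sketched with the networkx graph library and arbitrarily chosen parameters: the Watts-Strogatz construction (a ring lattice with a little random rewiring) produces high clustering together with short paths, the small world property, quite unlike a random graph of the same size and density:

import networkx as nx

n, k, p = 200, 8, 0.05      # assumed: 200 nodes, 8 neighbours each, 5% rewiring
ws  = nx.connected_watts_strogatz_graph(n, k, p, seed=1)
rnd = nx.gnm_random_graph(n, ws.number_of_edges(), seed=1)

def stats(g):
    """Average clustering, plus average path length on the largest component."""
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_clustering(g), nx.average_shortest_path_length(giant)

for name, g in [("small-world", ws), ("random", rnd)]:
    clustering, path_length = stats(g)
    print(f"{name:11s}: clustering {clustering:.3f}, avg path length {path_length:.2f}")

Simple local rules reproduce a stylized fact; they come nowhere near explaining why an actual financial network has the shape it does, which is the point.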

Further, let’s say that the Olympian economist Yanet Jellen establishes that the existing network is inefficient according to some criterion (not that I would even be able to specify that criterion, but work with me here). What policy could she adopt that would improve the performance of the network, let alone make it optimal?

The very features–feedbacks, spillovers, non-linearities–that can create suboptimality also make it virtually impossible to know how any intervention will affect that network, for better or worse, under the myriad possible states in which that network must operate. Networks are complex and emergent and non-linear. Changes to one part of the network (or changes to the way that agents who interact to create the network must behave and interact) can have impossible-to-predict effects throughout the entire network. Small interventions can lead to big changes, but which ones? Who knows? No one can say “if I change X, the network configuration will change to Y.” I would submit that it is impossible even to determine the probability distribution of configurations that arise in response to policy X.

In the language of the Nautilus article, it is delusional to think that simplicity can be “imposed on” a complex system like the financial market. The network has its own emergent logic, which passeth all understanding. The network will respond in a complex way to the command to simplify, and the outcome is unlikely to be the simple one desired by the policymaker.

In natural systems, there are examples where eliminating or adding a single species may have little effect on the network of interactions in the food web. Eliminating one species may just open a niche that is quickly filled by another species that does pretty much the same thing as the species that has disappeared. But eliminating a single species can also lead to a radical change in the food web, and perhaps its complete collapse, due to the very complex interactions between species.

There are similar effects in a financial system. Let’s say that Yanet decides that in the existing network there is too much credit extended between nodes by uncollateralized derivatives contracts: the credit connections could result in cascading failures if one big node goes bankrupt. So she bans such credit. But the credit was performing some function that was individually beneficial for the nodes in the network. Eliminating this one kind of credit creates a niche that other kinds of credit could fill, and profit-motivated agents have the incentive to try to create it, so a substitute fills the vacated niche. The end result: the network doesn’t change much, the amount of credit and its basic features don’t change much, and the performance of the network doesn’t change much.

But it could be that the substitute forms of credit, or the means used to eliminate the disfavored form of credit (e.g., requiring clearing of derivatives), fundamentally change the network in ways that affect its performance, or at least can do so in some states of the world. For example, they may make the network more tightly coupled, and therefore more vulnerable to precipitous failure.

The simple fact is that anybody who thinks they know what is going to happen is dangerous, because they are messing with something very powerful that they don’t even remotely understand, and whose response to their meddling they cannot foresee.

Hayek famously said “the curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” Tragically, too many (and arguably a large majority of) economists are the very antithesis of what Hayek says that they should be. They imagine themselves to be designers, and believe they know much more than they really do.

Janet Yellen is just one example, a particularly frightening one given that she has considerable power to implement the designs she imagines. Rather than being the Hayekian economist putting the brake on ham-fisted interventions into poorly understood systems, she is far closer to Adam Smith’s “Man of System”:

The man of system, on the contrary, is apt to be very wise in his own conceit; and is often so enamoured with the supposed beauty of his own ideal plan of government, that he cannot suffer the smallest deviation from any part of it. He goes on to establish it completely and in all its parts, without any regard either to the great interests, or to the strong prejudices which may oppose it. He seems to imagine that he can arrange the different members of a great society with as much ease as the hand arranges the different pieces upon a chess-board. He does not consider that the pieces upon the chess-board have no other principle of motion besides that which the hand impresses upon them; but that, in the great chess-board of human society, every single piece has a principle of motion of its own, altogether different from that which the legislature might chuse to impress upon it. If those two principles coincide and act in the same direction, the game of human society will go on easily and harmoniously, and is very likely to be happy and successful. If they are opposite or different, the game will go on miserably, and the society must be at all times in the highest degree of disorder.

When there are Men (or Women!) of System about, and the political system gives them free rein, analytical tools like topology can be positively dangerous. They make some (unjustifiably) wise in their own conceit, and give rise to dreams of Systems that they attempt to implement, when in fact their knowledge is shockingly superficial, and implementing their Systems is likely to create the highest degree of disorder.
