Streetwise Professor

August 23, 2016

Carl Icahn Rails Against the Evils of RIN City

Filed under: Climate Change,Commodities,Economics,Energy,Politics,Regulation — The Professor @ 12:15 pm

Biofuel Renewable Identification Numbers–“RINs”–are back in the news because of a price spike in June and July (which has abated somewhat). This has led refiners to intensify their complaints about the system. The focus of their efforts at present is to shift the compliance obligation from refiners to blenders. Carl Icahn has been quite outspoken on this. Icahn blames everyone, pretty much, including speculators:

“The RIN market is the quintessential example of a ‘rigged’ market where large gas station chains, big oil companies and large speculators are assured to make windfall profits at the expense of small and midsized independent refineries which have been designated the ‘obligated parties’ to deliver RINs,” Icahn wrote.

“As a result, the RIN market has become ‘the mother of all short squeezes,’” he added. “It is not too late to fix this problem if the EPA acts quickly.”

Refiners are indeed hurt by renewable fuel mandates, because the mandates reduce the derived demand for the gasoline they produce. The fact that the compliance burden falls on them is largely irrelevant, however. This is analogous to tax-incidence analysis: the total burden of a tax, and the distribution of that burden, don’t depend on who formally pays it. In the case of RINs, the total burden of the biofuels mandate and the distribution of that burden through the marketing chain don’t depend crucially on whether the compliance obligation falls on refiners, blenders, or your Aunt Sally.

Warning: There will be math!

A few basic equations describing the equilibrium in the gasoline, ethanol, biodiesel and RINs markets will hopefully help structure the analysis*. First consider the case in which the refiners must acquire RINs:

(1) P(Q) = 0.9 P_B^R + 0.1 P_E

(2) P_E + P_RIN = MC_E(0.1 Q)

(3) P_BD((a − 0.1) Q) + P_RIN = MC_BD((a − 0.1) Q)

(4) P_B^R = MC_B(0.9 Q) + (a/0.9) P_RIN

(5) P_E = λ P_B^R

Here P(Q) is the inverse retail demand curve at retail quantity Q; P_B^R is the BOB price when refiners bear the compliance obligation; P_E, P_RIN, and P_BD are the ethanol, RIN, and biodiesel prices; MC_E, MC_BD, and MC_B are the marginal cost schedules; a is the mandated quantity of biofuel per gallon of retail fuel; and λ is the octane value of ethanol relative to BOB.

Equation (1) is the equilibrium in the retail gasoline market. The retail price of gasoline, at the quantity of gasoline consumed, must equal the cost of blendstock (“BOB”) plus the price of the ethanol blended with it. The R superscript on the BOB price reflects that this is the price when refiners must buy a RIN. This equation assumes that one gallon of fuel at the pump is 90 percent BOB, and 10 percent ethanol. (I’m essentially assuming away blending costs and transportation costs, and a competitive blending industry.) The price of a RIN does not appear here because either the blender buys ethanol ex-RIN, or buys it with a RIN and then sells that to a refiner.

Equation (2) is the equilibrium in (an assumed competitive) ethanol market. The price an ethanol producer receives is the price of ethanol plus the price of a RIN (because the buyer of ethanol gets a RIN that it can sell, and hence is willing to pay more than the energy value of ethanol to obtain it). In equilibrium, this price equals the marginal cost of producing ethanol. Crucially, with a binding biofuels mandate, the quantity of ethanol produced is determined by the blendwall, which is 10 percent of the total quantity sold at the pump.

Equation (3) is equilibrium in the biodiesel market. When the blendwall binds, the shortfall between the mandate and the blendwall must be covered by purchasing RINs generated from the production of biodiesel. Thus, the RIN price is driven to the difference between the cost of producing the marginal gallon of biodiesel, and the price of biodiesel necessary to induce consumption of sufficient biodiesel to sop up the excess production stimulated by the need to obtain RINs. In essence, the price of biodiesel plus the value of a RIN generated by the production of biodiesel must equal the marginal cost of producing that gallon. The amount of biodiesel needed is given by the difference between the mandate quantity and the quantity of ethanol consumed at the blendwall. The parameter a is the amount of biofuel per unit of fuel consumed required by the Renewable Fuel Standard.

Equation (4) is equilibrium in the market for blendstock–this is the price refiners get. The price of BOB equals the marginal cost of producing it, plus the cost of obtaining the RINs necessary to meet the compliance obligation. The marginal cost of production depends on the quantity of gasoline produced for domestic consumption (which is 90 percent of the retail quantity of fuel purchased, given a 10 percent blendwall). The RIN price enters with the factor a/0.9 because refiners must retire a RINs per gallon of retail fuel consumed, but sell only 0.9 gallons of BOB per gallon of retail fuel.

Equation (5) just says that the value of ethanol qua ethanol is driven by the relative octane values between it and BOB.

The exogenous variables here are the demand curve for retail gasoline; the marginal cost of producing ethanol; the marginal cost of producing BOB (which depends on the price of crude, among other things); the marginal cost of biodiesel production; the demand for biodiesel; and the mandated quantity of RINs (and also the location of the blendwall). Given these variables, prices of BOB, ethanol, RINs, and biodiesel will adjust to determine retail consumption and exports.
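To make the mechanics concrete, here is a minimal numerical sketch of the refiner-obligation system. All of the linear curves and parameter values below are invented for illustration, not estimates of anything; the octane relation (5) is not imposed as a separate condition, the ethanol price instead being read off the RIN-adjusted supply condition (2).

```python
# Refiner-obligation case: all curve parameters below are illustrative
# assumptions, not data.

A, b = 4.00, 0.02        # inverse retail demand: P(Q) = A - b*Q
c0, c1 = 1.20, 0.004     # BOB marginal cost: MC_B(q) = c0 + c1*q
e0, e1 = 1.30, 0.010     # ethanol marginal cost: MC_E(q) = e0 + e1*q
d0, d1 = 2.50, 0.050     # biodiesel marginal cost: MC_BD(q) = d0 + d1*q
f0, f1 = 2.20, 0.030     # biodiesel demand: P_BD(q) = f0 - f1*q
a = 0.12                 # mandated RINs per gallon of retail fuel

def rin_price(Q):
    # the biodiesel margin sets the RIN price: P_RIN = MC_BD(q_bd) - P_BD(q_bd)
    q_bd = (a - 0.1) * Q
    return (d0 + d1 * q_bd) - (f0 - f1 * q_bd)

def ethanol_price(Q):
    # ethanol producers receive P_E + P_RIN = MC_E(0.1*Q) at the blendwall
    return e0 + e1 * 0.1 * Q - rin_price(Q)

def bob_price(Q):
    # refiners pass their per-gallon RIN cost through into the BOB price
    return c0 + c1 * 0.9 * Q + (a / 0.9) * rin_price(Q)

def excess_demand(Q):
    # retail market clearing: P(Q) = 0.9*P_B^R + 0.1*P_E
    return (A - b * Q) - (0.9 * bob_price(Q) + 0.1 * ethanol_price(Q))

# every schedule is linear, so excess demand is linear in Q and two
# evaluations pin down the root
Q_star = -excess_demand(0.0) / (excess_demand(1.0) - excess_demand(0.0))
print(f"Q* = {Q_star:.1f}, P_RIN = {rin_price(Q_star):.3f}, "
      f"P_B^R = {bob_price(Q_star):.3f}")
```

With these made-up numbers the blendwall and the biodiesel margin jointly deliver a positive RIN price, and every other price follows from the equilibrium retail quantity, just as the text describes.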

Now consider the case when the blender pays for the RINs:

(6) P(Q) = 0.9 P_B^B + 0.1 P_E + a P_RIN

(7) P_B^B = MC_B(0.9 Q)

Here P_B^B is the BOB price when blenders bear the compliance obligation, and the RIN cost no longer appears in the BOB pricing condition; equations (2) and (3) carry over from the refiner case.

Equation (6) says that the retail price of fuel is the sum of the value of the BOB and ethanol blended to create it, plus the cost of RINs required to meet the standard. The blender must pay for the RINs, and must be compensated by the price of the fuel. Note that the BOB price has a “B” superscript, which indicates that the BOB price may differ when the blender pays for the RIN from the case where the refiner does.

Without exports, retail consumption, ethanol production, biodiesel production, and BOB production will be the same regardless of where the compliance burden falls. Note that all relevant prices are determined by the equilibrium retail quantity. It is straightforward to show that the same retail quantity will clear the market in both situations, as long as:

P_B^R = P_B^B + (a/0.9) P_RIN

That is, when the refiner pays for the RIN, the BOB price will be higher than when the blender does by the cost of the RINs required to meet the mandate.

Intuitively, if the burden is placed on refiners, in equilibrium they will charge a higher price for BOB in order to cover the cost of complying with the mandate. If the burden is placed on blenders, refiners can sell the same quantity at a lower BOB price (because they don’t have to cover the cost of RINs), but blenders have to mark up the fuel by the cost of the RINs to cover their cost of acquiring them. Here the analogy with tax incidence analysis is complete, because in essence the RFS is a tax on the consumption of fossil fuel, and the amount of the tax is the cost of a RIN.

This means that retail prices, consumption, production of ethanol, biodiesel and BOB, refiner margins and blender margins are the same regardless of who has the compliance obligation.
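The neutrality claim is easy to check numerically. The sketch below solves the retail market-clearing condition under both compliance regimes using the same invented linear curves (none of these parameter values are estimates): the retail quantity is identical either way, and the refiner-case BOB price exceeds the blender-case price by exactly the per-gallon-of-BOB RIN cost.

```python
# Neutrality check with invented linear curves; all numbers are assumptions.

A, b = 4.00, 0.02        # inverse retail demand: P(Q) = A - b*Q
c0, c1 = 1.20, 0.004     # BOB marginal cost
e0, e1 = 1.30, 0.010     # ethanol marginal cost
d0, d1 = 2.50, 0.050     # biodiesel marginal cost
f0, f1 = 2.20, 0.030     # biodiesel demand
a = 0.12                 # mandated RINs per gallon of retail fuel

def p_rin(Q):
    q_bd = (a - 0.1) * Q
    return (d0 + d1 * q_bd) - (f0 - f1 * q_bd)

def p_eth(Q):
    return e0 + e1 * 0.1 * Q - p_rin(Q)

def mc_bob(Q):
    return c0 + c1 * 0.9 * Q

def solve(excess):
    # excess demand is linear in Q, so two evaluations pin down the root
    return -excess(0.0) / (excess(1.0) - excess(0.0))

# refiner pays: the RIN cost is buried in the BOB price
Q_r = solve(lambda Q: (A - b * Q)
            - (0.9 * (mc_bob(Q) + (a / 0.9) * p_rin(Q)) + 0.1 * p_eth(Q)))
# blender pays: the RIN cost is a separate component of the retail price
Q_b = solve(lambda Q: (A - b * Q)
            - (0.9 * mc_bob(Q) + 0.1 * p_eth(Q) + a * p_rin(Q)))

pb_r = mc_bob(Q_r) + (a / 0.9) * p_rin(Q_r)   # refiner-case BOB price
pb_b = mc_bob(Q_b)                            # blender-case BOB price
print(f"Q_r = {Q_r:.2f}, Q_b = {Q_b:.2f}")    # identical retail quantities
print(f"P_B^R - P_B^B = {pb_r - pb_b:.4f}, "
      f"(a/0.9)*P_RIN = {(a / 0.9) * p_rin(Q_r):.4f}")
```

The two market-clearing conditions are algebraically the same function of Q, which is the tax-incidence point in computational form: moving the statutory obligation just relabels where the RIN cost shows up.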

The blenders are complete ciphers here. If refiners have the compliance burden, blenders effectively buy RINs from ethanol producers and sell them to refiners. If the blenders have the burden, they buy RINs from ethanol producers and sell them to consumers. Either way, they break even. The marketing chain is just a little more complicated, and there are additional transactions in the RINs market, when refiners shoulder the compliance obligation.

Under either scenario, the producer surplus (profit, crudely speaking) of the refiners is driven by their marginal cost curves and the quantity of gasoline they produce. In the absence of exports, these things will remain the same regardless of where the burden is placed. Thus, Icahn’s rant is totally off-point.

So what explains the intense opposition of refiners to bearing the compliance obligation? One reason may be fixed administrative costs. If there is a fixed cost of compliance, that will not affect any of the prices or quantities, but will reduce the profit of the party with the obligation by the full amount of the fixed cost. This is likely a relevant concern, but the refiners don’t make it the centerpiece of their argument, probably because shifting the fixed cost around has no efficiency effects, but purely distributive ones, and purely distributive arguments aren’t politically persuasive. (Redistributive motives are major drivers of attempts to change regulations, but naked cost shifting arguments look self-serving, so rent seekers attempt to dress up their efforts in efficiency arguments: this is one reason why political arguments over regulations are typically so dishonest.) So refiners may feel obliged to come up with some alternative story to justify shifting the administrative cost burden to others.

There may also be differences in variable administrative costs. Fixed administrative costs won’t affect prices or output (unless they are so burdensome as to cause exit), but variable administrative costs will. Further, placing the compliance obligation on those with higher variable administrative costs will lead to a deadweight loss: consumers will pay more, and refiners will get less.

Another reason may be the seen-unseen effect. When refiners bear the compliance burden, the cost of buying RINs is a line item in their income statement. They see directly the cost of the biofuels mandate, and from an accounting perspective they bear that cost, even though from an economic perspective the sharing of the burden between consumers, refiners, and blenders doesn’t depend on where the obligation falls. What they don’t see–in accounting statements anyway–is that the price for their product is higher when the obligation is theirs. If the obligation is shifted to blenders, they won’t see their bottom line rise by the amount they currently spend on RINs, because their top line will fall by the same amount.
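The accounting point can be seen in a toy income statement (every number below is invented): shifting the obligation to blenders strips the RIN expense out of the refiner’s costs, but strips the RIN gross-up out of its revenue too, leaving the bottom line unchanged.

```python
# Toy refiner income statement under the two compliance regimes.
# All figures are invented for illustration.

bob_gallons = 90.0         # gallons of BOB sold
rins_required = 12.0       # RINs the mandate requires on this output
p_rin = 0.50               # RIN price
p_bob_blender_pays = 1.60  # BOB price when blenders hold the obligation
refining_cost = 120.0      # cost of producing the BOB

# Blender pays: the refiner sells BOB clean of RIN costs.
profit_blender_case = bob_gallons * p_bob_blender_pays - refining_cost

# Refiner pays: the BOB price is grossed up by the RIN cost per gallon,
# and the refiner books the RIN purchase as an expense.
p_bob_refiner_pays = p_bob_blender_pays + rins_required * p_rin / bob_gallons
profit_refiner_case = (bob_gallons * p_bob_refiner_pays
                       - refining_cost
                       - rins_required * p_rin)

print(profit_blender_case, profit_refiner_case)   # identical bottom lines
```

The identity holds at any scale: removing the RIN expense removes exactly the same amount from revenue, which is why reading only the expense line misleads.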

My guess is that Icahn looks at the income statements, and mistakes accounting for economics.

Regardless of the true motive for refiners’ discontent, the current compliance setup is not a nefarious conspiracy of integrated producers, blenders, and speculators to screw poor independent refiners. With the exception of administrative cost burdens (about which speculators couldn’t care less, since those will not fall on them regardless), shifting the compliance burden will not affect the market prices of RINs or the net of RINs price that refiners get for their output.

With respect to speculation, as I wrote some time ago, the main stimulus to speculation is not where the compliance burden falls (because again, this doesn’t affect anything relevant to those speculating on RINs prices). Instead, one main stimulus is uncertainty about EPA policy–which as I’ve written, can lead to some weird and potentially destabilizing feedback effects. The simple model sheds light on other drivers of speculation–the exogenous variables mentioned above. To consider one example, a fall in crude oil prices reduces the marginal cost of BOB production. All else equal, this encourages retail consumption, which increases the need for RINs generated from biodiesel, which increases the RINs price.
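That comparative static is easy to verify in a stylized version of the model. The linear curves and parameter values below are invented for illustration: lowering the intercept of BOB marginal cost, as a crude price fall would, raises the equilibrium retail quantity and with it the RIN price.

```python
# Illustrative comparative static with made-up linear curves: cheaper crude
# -> cheaper BOB -> more retail consumption -> more biodiesel RINs needed
# -> higher RIN price.

A, b = 4.00, 0.02        # inverse retail demand P(Q) = A - b*Q
e0, e1 = 1.30, 0.010     # ethanol marginal cost
d0, d1 = 2.50, 0.050     # biodiesel marginal cost
f0, f1 = 2.20, 0.030     # biodiesel demand
a = 0.12                 # mandated RINs per gallon of retail fuel
c1 = 0.004               # slope of BOB marginal cost

def solve(c0):
    # RIN price set by the biodiesel margin; ethanol price from the
    # RIN-adjusted ethanol supply condition; BOB price grossed up by the
    # refiner's RIN obligation. Market clearing is linear in Q.
    p_rin = lambda Q: (d0 - f0) + (d1 + f1) * (a - 0.1) * Q
    p_eth = lambda Q: e0 + e1 * 0.1 * Q - p_rin(Q)
    p_bob = lambda Q: c0 + c1 * 0.9 * Q + (a / 0.9) * p_rin(Q)
    g = lambda Q: (A - b * Q) - (0.9 * p_bob(Q) + 0.1 * p_eth(Q))
    Q = -g(0.0) / (g(1.0) - g(0.0))
    return Q, p_rin(Q)

q_hi, rin_hi = solve(c0=1.20)   # higher crude-driven BOB cost
q_lo, rin_lo = solve(c0=1.00)   # crude falls: BOB is cheaper to make
print(q_lo > q_hi, rin_lo > rin_hi)   # True True
```

Nothing about speculators appears anywhere in this chain: the RIN price moves because the exogenous fundamentals move.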

The Renewable Fuels Association has also raised a stink about speculation and the volatility of RINs prices in a recent letter to the CFTC and the EPA. The RFA (acronyms are running wild!) claims that the price rise that began in May cannot be explained by fundamentals, and therefore must have been caused by speculation or manipulation. No theory of manipulation is advanced (corner/squeeze? trade-based? fraud?), making the RFA letter another example of the Clayton Definition of Manipulation: “any practice that doesn’t suit the person speaking at the moment.” Regarding speculation, the RFA notes that supplies of RINs have been increasing. However, as academic research has shown (some by me, some by people like Brian Wright), inventories of a storable commodity (which a RIN is) can rise along with prices in a variety of circumstances, including a rise in volatility, or an increase in anticipated future demand. (As an example of the latter case, consider what happened in the corn market when the RFS was passed. Corn prices shot up, and inventories increased too, as consumption of corn was deferred to the future to meet the increased future demand for ethanol. The only way of shifting consumption was to reduce current consumption, which required higher prices.)
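The corn example can be illustrated with a bare-bones two-period storage model (all numbers invented): when anticipated future demand rises, intertemporal arbitrage pulls inventory into storage, and current inventories and the current price rise together.

```python
# Two-period competitive storage sketch. Supply S each period, linear inverse
# demand P_t = A_t - b*q_t, carrying cost k, no discounting. Parameters are
# illustrative assumptions.

def equilibrium(A1, A2, b=1.0, S=10.0, k=0.5):
    # with positive inventory, arbitrage requires P2 = P1 + k; clearing
    # q1 = S - I and q2 = S + I then pins down the inventory carried forward
    I = max((A2 - A1 - k) / (2 * b), 0.0)
    P1 = A1 - b * (S - I)      # current price
    return I, P1

base = equilibrium(A1=20.0, A2=20.0)    # no anticipated demand shift
shock = equilibrium(A1=20.0, A2=24.0)   # future demand expected higher
print(base, shock)   # (0.0, 10.0) then (1.75, 11.75): both inventory and price rise
```

Rising inventories alongside rising prices is thus exactly what a competitive market does when it expects higher future demand; it is not, by itself, evidence of speculative distortion.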

In a market like RINs, where there is considerable policy uncertainty, and also (as I’ve noted in past posts) complicated two-way feedbacks between prices and policy, the first potential cause is plausible. Further, since a good deal of the uncertainty relates to future policy, the second cause likely operates too, and indeed, these two causes can reinforce one another.

Unlike in the 2013 episode, there have been no breathless (and clueless) NYT articles about Morgan or Goldman or other banks making bank on RIN speculation. Even if they have profited, that’s not proof of anything nefarious, just an indication that they are better at plumbing the mysteries of EPA policy.

In sum, the recent screeching from Carl Icahn and others about the recent ramp-up in RIN prices is economically inane, and/or unsupported by evidence. Icahn is particularly misguided: RINs are a tax, and the burden of the tax depends not at all on who formally pays the tax. The costs of the tax are passed upstream to consumers and downstream to producers, regardless of whether consumers pay the tax, producers pay the tax, or someone in the middle pays the tax. As for speculation in RINs, it is the product of government policy. Obviously, there would be no speculation in RINs if there were no RINs in the first place. But on a deeper level, speculation is rooted in a mandate that does not correspond with the realities of the vast stock of existing internal combustion engines; the EPA’s erratic attempt to reconcile those irreconcilable things; the details of the RFS system (e.g., the ability to meet the ethanol mandate using biodiesel credits); and the normal vicissitudes of energy supply and demand. Speculation is largely a creation of government regulation, ironically, so to complain to the government about it (the EPA in particular) is somewhat perverse. But that’s the world we live in now.

* I highly recommend the various analyses of the RINs and ethanol markets in the University of Illinois’ Farm Doc Daily. Here’s one of their posts on the subject, but there are others that can be found by searching the website. Kudos to Scott Irwin and his colleagues.


August 20, 2016

On Net, This Paper Doesn’t Tell Us Much About What We Need to Know About the Effects of Clearing

Filed under: Clearing,Derivatives,Economics,Financial crisis,Politics,Regulation — The Professor @ 4:26 pm

A recent Office of Financial Research paper by Samim Ghamami and Paul Glasserman asks “Does OTC Derivatives Reform Incentivize Central Clearing?” Their answer is, probably not.

My overarching comment is that the paper is a very precise and detailed answer to maybe not the wrong question, exactly, but very much a subsidiary one. The more pressing questions include: (i) Do we want to favor clearing vs. bilateral? Why? What metric tells us that is the right choice? (The paper takes the answer to this question as given, and given as “yes.”) (ii) How do the different mechanisms affect the allocation of risk, including the allocation of risk outside the K banks that are the sole concern in the paper? (iii) How will the rules affect the scale of derivatives trading (the paper takes positions as given) and the allocation across cleared and bilateral instruments? (iv) Following on (ii) and (iii) will the rules affect risk management by end-users and what is the implication of that for the allocation of risk in the economy?

Item (iv) has received too little attention in the debates over clearing and collateral mandates. To the extent that clearing and collateral mandates make it more expensive for end-users to manage risk, how will the end users respond? Will they adjust capital structures? Investment? The scale of their operations? How will this affect the allocation of risk in the broader economy? How will this affect output and growth?

The paper also largely ignores one of the biggest impediments to central clearing–the leverage ratio. (This regulation receives only a passing mention.) The requirement that even segregated client margins be treated as assets for the purpose of calculating this ratio (even though the bank does not have a claim on these margins) greatly increases the capital costs associated with clearing, and is leading some banks to exit the clearing business or to charge fees that make it too expensive for some firms to trade cleared derivatives. This brings all the issues in (iv) to the fore, and demonstrates that certain aspects of the massive post-crisis regulatory scheme are not well thought out, and inconsistent.

Of course, the paper also focuses on credit risk, and does not address liquidity risk issues at all. Perhaps this is a push between bilateral vs. cleared in a world where variation margin is required for all derivatives transactions, but still. The main concern about clearing and collateral mandates (including variation margin) is that they can cause huge increases in the demand for liquidity precisely at times when liquidity dries up. Another concern is that collateral supply mechanisms that develop in response to the mandates create new interconnections and new sources of instability in the financial system.

The most disappointing part of the paper is that it focuses on netting economies as the driver of cost differences between bilateral and cleared trading, without recognizing that the effects of netting are distributive. To oversimplify only a little, the implication of the paper is that the choice between cleared and bilateral trading is driven by which alternative redistributes the most risk to those not included in the model.

Viewed from that perspective, things look quite different, don’t they? It doesn’t matter whether the answer to that question is “cleared” or “bilateral”–the result will be that if netting drives the answer, the answer will result in the biggest risk transfer to those not considered in the model (who can include, e.g., unsecured creditors and the taxpayers). This brings home hard the point that these types of analyses (including the predecessor of Ghamami-Glasserman, Zhu-Duffie) are profoundly non-systemic because they don’t identify where in the financial system the risk goes. If anything, they distract attention away from the questions about the systemic risks of clearing and collateral mandates. Recognizing that the choice between cleared and bilateral trading is driven by netting, and that netting redistributes risk, the question should be whether that redistribution is desirable or not. But that question is almost never asked, let alone answered.
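A stylized insolvency example makes the redistribution concrete (all numbers below are invented): netting converts the derivatives counterparty’s gross claim into a net one, raising its recovery at the expense of the unsecured creditor.

```python
# Stylized default of a dealer: compare recoveries with and without
# close-out netting. All figures are illustrative assumptions.

V = 30.0               # other assets of the defaulted dealer
owed_to_cpty = 100.0   # dealer's gross derivatives obligations to counterparty
owed_by_cpty = 80.0    # counterparty's gross obligations to the dealer
creditor = 50.0        # unsecured creditor's claim

# No netting: the counterparty pays its 80 into the estate, then claims
# its gross 100 pro rata alongside the unsecured creditor.
assets = V + owed_by_cpty
rate = assets / (owed_to_cpty + creditor)
cpty_no_net = rate * owed_to_cpty - owed_by_cpty   # counterparty's net cash
cred_no_net = rate * creditor

# Netting: only the net claim of 20 ranks against the (smaller) estate.
net_claim = owed_to_cpty - owed_by_cpty
rate_n = V / (net_claim + creditor)
cpty_net = rate_n * net_claim
cred_net = rate_n * creditor

print(f"no netting: counterparty {cpty_no_net:.2f}, creditor {cred_no_net:.2f}")
print(f"netting:    counterparty {cpty_net:.2f}, creditor {cred_net:.2f}")
```

The total value in the estate is unchanged; netting only re-ranks who gets it. That is why a choice between cleared and bilateral trading driven by netting economies is, at bottom, a choice about who outside the model bears the risk.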

One narrower, more technical aspect of the paper bothered me. G-G introduce the concept of a concentration ratio, which they define as the ratio of a firm’s contribution to the default fund to the firm’s value at risk used to determine the sizing of the default fund. They argue that the default fund under a cover two standard (in which the default fund can absorb the loss arising from the simultaneous defaults of the two members with the largest exposures) is undersized if the concentration ratio is less than one.

I can see their point, but its main effect is to show that the cover two standard is not joined up closely with the true determinants of the risk exposure of the default fund. Consider a CCP with N identical members, where N is large: in this case, the concentration ratio is small. Further, assume that member defaults are independent, and occur with probability p. The loss to the default fund conditional on the default of a given member is X. Then, the expected loss of the default fund is pNX, and under cover two, the size of the fund is 2X.  There will be some value of N such that for a larger number of members, the default fund will be inadequate. Since the concentration ratio varies inversely with N, this is consistent with the G-G argument.
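Under these (deliberately extreme) independence assumptions, the inadequacy of cover two for large N is easy to quantify: the fund of 2X fails whenever three or more members default, and the probability of that grows with N.

```python
# Probability that a cover-two default fund is exhausted when N identical,
# independent members each default with probability p and each default
# imposes a loss of X. The fund holds 2X, so it fails on 3+ defaults.
# (Illustrative numbers; real member exposures are neither identical
# nor independent.)
from math import comb

def p_fund_exhausted(N, p):
    # P(more than 2 defaults) under Binomial(N, p)
    return 1.0 - sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(3))

for N in (5, 20, 100):
    print(N, p_fund_exhausted(N, p=0.01))
```

With p = 1 percent, the exhaustion probability climbs by orders of magnitude as N goes from 5 to 100, which is the sense in which a low concentration ratio signals an undersized fund under these assumptions.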

But this is a straw man argument, as these assumptions are obviously extreme and unrealistic. The default fund’s exposure is driven by the extreme tail of the joint distribution of member losses. What really matters here is tail dependence, which is devilishly hard to measure. Cover two essentially assumes a particular form of tail dependence: if the 1st (2nd) largest exposure defaults, so will the 2nd (1st) largest, but it ignores what happens to the remaining members. The assumption of perfect tail dependence between risks 1 and 2 is conservative: ignoring risks 3 through N is not. Where things come out on balance is impossible to determine. Pace G-G, when N is large, ignoring risks 3 through N is likely very problematic, but whether this results in an undersized default fund depends on whether this effect is more than offset by the extreme assumption of perfect tail dependence between risks 1 and 2.

Without knowing more about the tail dependence structure, it is impossible to play Goldilocks and say that this default fund is too large,  this default fund is too small, and this one is just right by looking at N (or the concentration ratio) alone. But if we could confidently model the tail dependence, we wouldn’t have to use cover two–and we could also determine individual members’ appropriate contributions more exactly than relying on a pro-rata rule (because we could calculate each member’s marginal contribution to the default fund’s risk).

So cover two is really a confession of our ignorance. A case of sizing the default fund based on what we can measure, rather than what we would like to measure, a la the drunk looking for his keys under the lamppost, because the light is better there. Similarly, the concentration ratio is something that can be measured, and does tell us something about whether the default fund is sized correctly, but it doesn’t tell us very much. It is not a sufficient statistic, and may not even be a very revealing one. And how revealing it is may differ substantially between CCPs, because the tail dependence structures of members may vary across them.

In sum, the G-G paper is very careful, and precisely identifies crucial factors that determine the relative private costs of cleared vs. bilateral trading, and how regulations (e.g., capital requirements) affect these costs. But this is only remotely related to the question that we would like to answer, which is what are the social costs of alternative arrangements? The implicit assumption is that the social costs of clearing are lower, and therefore a regulatory structure which favors bilateral trading is problematic. But this assumes facts not in evidence, and ones that are highly questionable. Further, the paper (inadvertently) points out a troubling reality that should have been more widely recognized long ago (as Mark Roe and I have been arguing for years now): the private benefits of cleared vs. bilateral trading are driven by which offers the greatest netting benefit, which also just so happens to generate the biggest risk transfer to those outside the model. This is a truly systemic effect, but is almost always ignored.

In these models that focus on a subset of the financial system, netting is always a feature. In the financial system at large, it can be a bug. Would that the OFR started to investigate that issue.


August 5, 2016

Bipartisan Stupidity: Restoring Glass-Steagall

Filed under: Economics,Financial crisis,Financial Crisis II,Politics,Regulation — The Professor @ 6:35 pm

Both parties officially favor a restoration of Glass-Steagall, the Depression-era banking regulation that persisted until repealed under the Clinton administration in 1999. When both Parties agree on an issue, they are likely wrong, and that is the case here.

The homage paid to Glass-Steagall is totem worship, not sound economic policy. The reasoning appears to be that the banking system was relatively quiescent when Glass-Steagall was in place, and a financial crisis occurred within a decade after its repeal. Ergo, we can avoid financial crises by restoring G-S. This makes as much sense as blaming the tumult of the 60s on auto companies’ elimination of tail fins.

Glass-Steagall had several parts, some of which are still in existence. The centerpiece of the legislation was deposit insurance, which rural and small town banking interests had been pushing for years. Deposit insurance is still with us, and its effects are mixed, at best.

One of the parts of Glass-Steagall that was abolished was its limitation on bank groups: the 1933 Act made it more difficult to form holding companies of multiple banks as a way of circumventing branch banking restrictions that were predominant at the time. This was perverse because (1) the Act was ostensibly intended to prevent banking crises, and (2) the proliferation of unit banks due to restrictions on branch banking was one of the most important causes of the banking crisis that ushered in the Great Depression.

The contrast between the experiences of Canada and the United States is illuminating in this regard. Both countries were subjected to a huge adverse economic shock, but Canada’s banking system, which was dominated by a handful of banks that operated branches throughout the country, survived, whereas the fragmented US banking system collapsed. In the 1930s, too big to fail was less of a problem than too small to survive. The collapse of literally thousands of banks devastated the US economy, and this banking crisis ushered in the Depression proper. Further, the inability of branched national banks to diversify liquidity risk (as Canada’s banks were able to do) made the system more dependent on the Fed to manage liquidity shocks. That turned out to be a true systemic risk, when the Fed botched the job (as documented by Friedman and Schwartz). When the system is very dependent on one regulatory body, and that body fails, the effect of the failure is systemic.
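The diversification logic can be sketched with a toy simulation (the loss distribution and capital figures are invented): regional loan-loss shocks that routinely topple stand-alone unit banks are pooled, and mostly absorbed, by a single branched bank holding the same total capital.

```python
# Unit banks vs. one branched bank: same total capital, different pooling.
# All parameters are illustrative assumptions.
import random

random.seed(1)
N_REGIONS = 10
trials = 10_000
CAPITAL_UNIT = 10.0        # capital of each stand-alone unit bank
CAPITAL_BRANCHED = 100.0   # one branched bank with the same total capital

unit_failures = 0
branched_failures = 0
for _ in range(trials):
    # each region draws an independent loan-loss shock
    losses = [random.uniform(0, 16) for _ in range(N_REGIONS)]
    unit_failures += sum(loss > CAPITAL_UNIT for loss in losses)
    branched_failures += sum(losses) > CAPITAL_BRANCHED

unit_share = unit_failures / (trials * N_REGIONS)
branched_share = branched_failures / trials
print(f"unit banks failing: {unit_share:.1%}, "
      f"branched bank failing: {branched_share:.1%}")
```

The branched bank fails only when losses are bad everywhere at once; any one region’s disaster is absorbed by capital backing the other branches. Correlated nationwide shocks narrow this advantage, which is why the Fed’s failure as liquidity backstop mattered so much in the fragmented US system.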

The vulnerability of small unit banks was again demonstrated in the S&L fiasco of the 1980s (a crisis in which deposit insurance played a part).

So that part of Glass-Steagall should remain dead and buried.

The part of Glass-Steagall that was repealed, and which its worshippers are most intent on restoring, was the separation of securities underwriting from commercial banking and the limiting of banks’ securities holdings to investment grade instruments.

Senator Glass believed that the combination of commercial and investment banking contributed to the 1930s banking crisis. As is the case with many legislators, his fervent beliefs were untainted by actual evidence. The story told at the time (and featured in the Pecora Hearings) was that commercial banks unloaded their bad loans into securities, which they dumped on an unsuspecting investing public unaware that they were buying toxic waste.

There are only two problems with this story. First, even if true, it would mean that banks were able to get bad assets off their balance sheets, which should have made them more stable! Real money investors, rather than leveraged institutions, were wearing the risk, which should have reduced the likelihood of banking crises.

Second, it wasn’t true. Economists (including Kroszner and Rajan) have shown that securities issued by investment banking arms of commercial banks performed as well as those issued by stand-alone investment banks. This is inconsistent with the asymmetric information story.

Now let’s move forward almost 60 years and try to figure out whether the 2008 crisis would have played out much differently had investment banking and commercial banking been kept completely separate. Almost certainly not. First, the institutions in the US that nearly brought down the system were stand-alone investment banks, namely Lehman, Bear Stearns, and Merrill Lynch. The first failed. The second two were absorbed into commercial banks, the first by having the Fed take on most of the bad assets, the second in a shotgun wedding that ironically proved to make the acquiring bank–Bank of America–much weaker. Goldman Sachs and Morgan Stanley were in dire straits, and converted into banks so that they could avail themselves of Fed support denied them as investment banks.

The investment banking arms of major commercial banks like JP Morgan did not imperil their existence. Citi may be something of an exception, but earlier crises (e.g., the Latin American debt crisis) proved that Citi was perfectly capable of courting insolvency even as a pure commercial bank in the pre-Glass-Steagall repeal days.

Second, and relatedly, because they could not take deposits, and therefore had to rely on short term hot money for funding, the stand-alone investment banks were extremely vulnerable to funding runs, whereas deposits are a “stickier,” more stable source of funding. We need to find ways to reduce reliance on hot funding, rather than encourage it.

Third, Glass-Steagall restrictions weren’t even relevant for several of the institutions that wreaked the most havoc–Fannie, Freddie, and AIG.

Fourth, insofar as the issue of limitations on the permissible investments of commercial banks is concerned, it was precisely investment grade instruments–AAA-rated, in fact–that got banks and investment banks into trouble. Capital rules treated such instruments favorably, and voila!, massive quantities of these instruments were engineered to meet the resulting demand. The way they were engineered, however, made them reservoirs of wrong way risk that contributed significantly to the 2008 doom loop.

In sum: the banking structures that Glass-Steagall outlawed didn’t contribute to the banking crisis that was the law’s genesis, and weren’t materially important in causing the 2008 crisis. Therefore, advocating a return to Glass-Steagall as a crisis prevention mechanism is wholly misguided. Glass-Steagall restrictions are largely irrelevant to preventing financial crises, and some of their effects–notably, the creation of an investment banking industry largely reliant on hot, short term money for funding–actually make crises more likely.

This is why I say that Glass-Steagall has a totemic quality. The reverence shown it is based on a fondness for the old gods who were worshipped during a time of relative economic quiet (even though that quiet is largely folk belief: it ignores the LatAm, S&L, and Asian crises, among others, that occurred from 1933 to 1999). We had a crisis in 2008 because we abandoned the old gods, Glass and Steagall! If we only bring them back to the public square, good times will return! It is not based on a sober evaluation of history, economics, or the facts.

An alternative tack is taken by Luigi Zingales. He advocates a return to Glass-Steagall in part based on political economy considerations, namely, that it will increase competition and reduce the political power of large financial institutions. As I argued in response to him over four years ago, these arguments are unpersuasive. I would add another point, motivated by reading Calomiris and Haber’s Fragile by Design: the political economy of a fragmented financial system can lead to disastrous results too. Indeed, the 1930s banking crisis was caused largely by the ubiquity of small unit banks and the failure of the Fed to provide liquidity to a system uniquely dependent on its support. Those small banks, as Calomiris and Haber show, used their political power to stymie the development of national branched banks that would have improved systemic stability. The S&L crisis was also stoked by the political power of many small thrifts.*

But regardless, both the Republican and Democratic Parties have now embraced the idea. I don't sense any zeal in Congress to act on it, so perhaps the agreement of the Parties' platforms on this issue will not result in a restoration of Glass-Steagall. Nonetheless, the widespread fondness for the 83-year-old Act should give pause to those who look to national politicians to adopt wise economic policies. That fondness is grounded in a variety of religious belief, not reality.

*My reading of Calomiris and Haber leads me to the depressing conclusion that the political economy of banking is almost uniformly dysfunctional, at all times and at all places. In part this is because the state looks to the banking system to facilitate fiscal objectives. In part it is because politicians have viewed the banking system as an indirect way of supporting favored domestic constituencies when direct transfers to these constituencies are either politically impossible or constitutionally barred. In part it is because bankers exploit this symbiotic relationship to get political favors: subsidies, restrictions on competition, etc. Even the apparent successes of banking legislation and regulation are more the result of unique political conditions than of economically enlightened legislators. Canada's banking system, for instance, was not the product of uniquely Canadian economic insight and political rectitude. Instead, it was the result of a political bargain that was driven by uniquely Canadian political factors, most notably the deep divide between English and French Canada. It was a venal and cynical political deal that just happened to have some favorable economic consequences which were not intended and indeed were not necessarily even understood or foreseen by those who drafted the laws.

Viewed in this light, it is not surprising that the housing finance system in the US, which was the primary culprit for the 2008 crisis, has not been altered substantially. It was the product of a particular set of political coalitions that still largely exist.

The history of federal and state banking regulation in the US also should give pause to those who think a minimalist state in a federal system can’t do much harm. Banking regulation in the small government era was hardly ideal.


July 30, 2016

Say “Sayonara” to Destination Clauses, and “Konnichiwa” to LNG Trading

Filed under: Commodities,Derivatives,Economics,Energy,Politics,Regulation — The Professor @ 11:12 am

The LNG market is undergoing a dramatic change: a couple of years ago, I characterized it as “racing to an inflection point.” The gas glut that has resulted from slow demand growth and the activation of major Australian and US production capacity has not just weighed on prices, but has undermined the contractual structures that underpinned the industry from its beginnings in the mid-1960s: oil linked pricing in long term contracts; take-or-pay arrangements; and destination clauses. Oil linkage was akin to the drunk looking for his keys under the lamppost: the light was good there, but in recent years in particular oil and gas prices have become de-linked, meaning that the light shines in the wrong place. Take-or-pay clauses make sense as a way of addressing opportunism problems that arise in the presence of long-lived, specific assets, but the development of a more liquid short-term trading market reduces asset specificity. Destination clauses were a way that sellers with market power could support price discrimination (by preventing low-price buyers from reselling to those willing to pay higher prices), but the proliferation of new sellers has undermined that market power.

Furthermore, the glut of gas has undermined seller market and bargaining power, and buyers are looking to renegotiate deals done when market conditions were different. They are enlisting the help of regulators, and in Japan (the largest LNG purchaser), their call is being answered. Japan’s antitrust authorities are investigating whether the destination clauses violate fair trade laws, and the likely outcome is that these clauses will be retroactively eliminated, or that sellers will “voluntarily” remove them to preempt antitrust action.

It’s not as if the economics of these clauses have changed overnight: it’s that the changes in market fundamentals have also affected the political economy that drives antitrust enforcement. As contract and spot prices have diverged, and as the pattern of gas consumption and production has diverged from what existed at the time the contracts were formed, the deadweight costs of the clauses have increased, and these costs have fallen heavily on buyers. In a classic illustration of Peltzman-Becker-Stigler theories of regulation, regulators are responding to these efficiency and distributive changes by intervening to challenge contracts that they didn’t object to when conditions were different.

This development will accelerate the process that I wrote about in 2014. More cargoes will be looking for new homes, because the original buyers overbought, and this reallocation will spur short-term trading. This exogenous shock to short term trading will increase market liquidity and the reliability of short term/spot prices, which will spur more short term trading and hasten the demise of oil linking. The virtuous liquidity cycle was already underway as a result of the gas glut, and the emergence of the US as a supplier, but the elimination of destination clauses in legacy Japanese contracts will provide a huge boost to this cycle.

The LNG market may never look exactly like the oil market, but it is becoming more similar all the time. The intervention of Japanese regulators to strike down another barbarous relic of an earlier age will only expedite that process, and substantially so.


July 23, 2016

The Medium is NOT the Message: Hillary’s Scheming Is

Filed under: Politics,Regulation — The Professor @ 12:35 pm

Wikileaks released over 20,000 documents from the Democratic National Committee. As one would expect when such a rock is turned over, this exposed a lot of disgusting wriggling creatures.

Yes, there is a lot of traffic regarding Trump. But the most damning material relates to the fact that the DNC was/is in the tank for Hillary, and schemed continuously and extensively to undermine Bernie Sanders.

The corruption of Hillary and the DNC is hardly surprising. It is her–and their–DNA. But it is illuminating to actually witness evidence of the machinations of this crowd.

One of the more fascinating aspects of this is the reaction of those who are at pains to ignore the content of the emails, and focus on Russia's supposed responsibility for the leak. Just a cursory scan of Twitter and the Internet revealed a disparate and rather motley cast of characters pushing this story, including John Schindler (status of pants unknown), BuzzFeed's Miriam Elder, neocon thinktanker James Kirchick, and Gawker.

To some it is axiomatic. Wikileaks=Russia. At least Kirchick felt obligated to come up with a more elaborate theory. Putin wants Trump to win, and the leaked emails will enrage the Bernie supporters who are also Wikileaks and RT aficionados. These disaffected Berners will either not vote or will go to Trump.

Whatever. In these situations, ALWAYS use Occam’s Razor, and that cuts against such a baroque theory. The far more parsimonious explanation is that an outraged Bernie supporter in the DNC (you don’t think there are Feel the Berners working as IT geeks at DNC?), or an outraged Bernie supporter with hacking skillz, did it. Come on. Look around. A lot of hardcore lefties are outraged at Hillary’s and the DNC’s underhanded and dirty treatment of their guy. That’s a much more straightforward explanation than Putin Did It!

There are other things that cut against the Putin theory. The reflexive attribution of Russian control to anything coming out of Wikileaks undermines the impact of the leak. If the Russians want to hurt Hillary, they would want to use an outlet that is not widely associated with them, if only to deprive Hillary and her flying monkeys and her tribe of acolytes of a way to discredit the leak–which is exactly what they are doing. The Russians aren’t stupid. They wouldn’t rely on an outlet that could be discredited precisely because of its alleged connection to them when there are many other ways of releasing the information. It would be in their interest to use a cutout that is not associated with them.

Further, if Russian hacking is so powerful (and I agree that it is), the DNC emails would not be the most damaging material. Hillary’s server material and Clinton Foundation emails would be far more damning.

As for Schindler's argument that (unproven and implausible) Russian interference in US elections is beyond the pale: even if Russia is involved, influence by revealing facts is a different thing altogether from attempts to influence by manipulation, lies, disinformation, propaganda, or coercion. What the leak reveals is that the DNC actively manipulated the US primary elections in order to benefit Hillary: that kind of influence is more malign than influencing by making that fact known. Keeping the DNC's and Hillary's machinations secret would also influence the upcoming presidential election. It's better that our elections are influenced by more facts rather than fewer, and to argue that these facts should be ignored because of their (alleged) provenance is to commit two logical fallacies: ad hominem argument (reasoning/facts are judged based on the source) and appeal to motive (arguments/facts are to be judged based not on their logic/truth, but the motive of the party making the argument/presenting the facts).

The irony–and hypocrisy–of those rushing to pin this on Russia in order to distract attention from the content is also remarkable. Some (like Miriam Elder) have been big Wikileaks and Bradley Manning supporters in the past. Funny how alleged Russian manipulation of Wikileaks escaped their attention when Assange was leaking things that hurt their political opponents, but all of a sudden becomes THE STORY when one of theirs is targeted.

But the irony and hypocrisy don't stop there. The DNC emails reveal that the committee used the very Russian tactics that today's critics of the data dump have assailed in the past: paying people to troll political opponents and their supporters on Twitter and elsewhere, and using employees to participate in Astroturf "demonstrations."

And there’s more! The Attack the Messenger strategy is exactly the one that the Kremlin has employed in response to leaks about it. Putin’s spokesman Peskov tried to discredit the Panama Papers by claiming that they were a CIA information operation. Those attacking Wikileaks today went ballistic. How are they any different?

No. The medium is not the message, and attempts to make it so are discreditable and fallacious ways to distract attention from the real message in the DNC emails: namely, that the party, and its standard bearer, are corrupt, unethical slugs who have rigged the nomination process to save a wretched candidate who couldn’t win fair-and-square despite her huge advantages. Regardless of who turned over the rock to reveal that, it’s a good thing that the world can see them for what they are.


For All You Pigeons: Musk Has Announced Master Plan II

Filed under: Climate Change,Commodities,Economics,Energy,Politics,Regulation — The Professor @ 11:29 am

Elon Musk just announced his “Master Plan, Part Deux,” AKA boob bait for geeks and posers.

It is just more visionary gasbaggery, and comes at a time when Musk is facing significant headwinds: there is a connection here. What headwinds? The proposed Tesla acquisition of SolarCity was not greeted, shall we say, with universal and rapturous applause. To the contrary, the reaction was overwhelmingly negative, sometimes extremely so (present company included)–the proposed tie-up gave even some fanboyz pause. Production problems continue; Tesla ended the resale price guarantee on the Model S (which strongly suggests financial strains); and the company has cut the price on the Model X SUV in the face of lackluster sales. But the biggest setback was the death of a Tesla driver while he was using the "Autopilot" feature, and the SEC's announcement of an investigation into whether Tesla violated disclosure regulations by keeping the accident quiet until after it had completed its $1.6 billion secondary offering.

It is not a coincidence, comrades, that Musk tweeted that he was thinking of announcing his new “Master Plan” a few hours before the SEC made its announcement. Like all good con artists, Musk needed to distract from the impending bad news.

And that’s the reason for Master Plan II overall. All cons eventually produce cognitive dissonance in the pigeons, when reality clashes with the grandiose promises that the con man had made before. The typical way that the con artist responds is to entrance the pigeons with even more grandiose promises of future glory and riches. If that’s not what Elon is doing here, he’s giving a damn good impression of it.

All I can say is that if you are fool enough to fall for this, you deserve to be suckered, and look elsewhere for sympathy. Look here, and expect this.

As for the "Master Plan" itself, it makes plain that Musk fails to understand some fundamental economic principles that have been recognized since Adam Smith: specialization, division of labor, and gains from trade among specialists, most notably. A guy whose company cannot deliver on crucial aspects of Master Plan I–which Musk says "wasn't all that complicated"–most notably on production of a narrow line of vehicles, now says that his company will produce every type of vehicle. A guy whose promises about self-driving technology are under tremendous scrutiny promises vast fleets of autonomous vehicles. A guy whose company burns cash like crazy and is now under serious financial strain (with indications that its current capital plans are unaffordable) provides no detail on how this grandiose expansion is going to be financed.

Further, Musk provides no reason to believe that even if each of the pieces of his vision for electric automobiles and autonomous vehicles is eventually realized, that it is efficient for a single company to do all of it. The purported production synergies between electricity generation (via solar), storage, and consumption (in the form of electric automobiles) are particularly unpersuasive.

But reality and economics aren’t the point. Keeping the pigeons’ dreams alive and fighting cognitive dissonance are.

As for the SEC investigation: my initial inclination was to say "it's about time!" But the Autopilot accident silence is the least of Musk's disclosure sins. He has a habit of making forward-looking statements on Twitter and elsewhere that almost never pan out. The company's accounting is a nightmare. I cannot think of another CEO who could get away with, and has gotten away with, such conduct in the past without attracting intense SEC scrutiny.

But Elon is a government golden boy, isn’t he? My interest in him started because he was–and is–a master rent seeker who is the beneficiary of massive government largesse (without which Tesla and SolarCity would have cratered long ago). In many ways, governments–notably the US government and the State of California–are his biggest pigeons.

And rather than ending, the government gravy train reckons to continue. Last week the White House announced that the government will provide $4.5 billion in loan guarantees for investments in electric vehicle charging stations. (If you can read the first paragraph of that statement without puking, you have a stronger stomach than I.) Now Tesla will not be the only beneficiary of this–it is a subsidy to all companies with electric vehicle plans–but it is one of the largest, and one of the neediest. One of Elon’s faded promises was to create a vast network of charging stations stretching from sea-to-sea. Per usual, the plan was announced with great fanfare, but the delivery has not met the plans. Also per usual, it takes forensic sleuthing worthy of Sherlock Holmes to figure out exactly how many stations have been rolled out and are in the works.

The rapid spread of the evil internal combustion engine was not impeded by a lack of gas stations: even in a much more primitive economy and a much more primitive financial system, gasoline retailing and wholesaling grew in parallel with the production of autos without government subsidy or central planning. Oil companies saw a profitable investment opportunity, and jumped on it.

Further, even if one argues that there are coordination problems and externalities that are impeding the expansion of charging networks (which I seriously doubt, but entertain to show that this does not necessitate subsidies), these can be addressed by private contract without subsidy. For instance, electric car producers can create a joint venture to invest in charging stations. To the extent government has a role, it would be to take a rational approach to the antitrust aspects of such a venture.

So yet again, governments help enable Elon’s con. How long can it go on? With the support of government, and credulous investors, quite a while. But cracks are beginning to show, and it is precisely to paper over those cracks that Musk announced his new Master Plan.


July 17, 2016

Antitrust to Attack Inequality? Fuggedaboutit: It’s Not Where the Money Is

Filed under: Economics,Politics,Regulation — The Professor @ 12:09 pm

There is a boomlet in economics and legal scholarship suggesting that increased market power has contributed to income inequality, and that this can be addressed through more aggressive antitrust enforcement. I find the diagnosis less than compelling, and the proposed treatment even less so.

A recent report by the President's Council of Economic Advisors lays out a case that there is more concentration in the US economy, and insinuates that this has led to greater market power. The broad statistic cited in the report is the increase in the share of revenue earned by the top 50 firms in broad industry segments. This is almost comical. Fifty firms? Really? Also, a Herfindahl-Hirschman Index would be more appropriate. Furthermore, the industry sectors are broad and correspond not at all to relevant markets–which is the appropriate standard (and the one embedded in antitrust law) for evaluating concentration and competition.
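To see why an HHI beats a top-50 revenue share, consider a toy calculation (the share numbers below are hypothetical, chosen purely for illustration): two industries can have identical top-50 shares but wildly different concentration as measured by the HHI, which squares each firm's share and so weights dominant firms heavily.

```python
# Hypothetical illustration: identical top-50 revenue shares,
# very different Herfindahl-Hirschman Indices (HHI).

def hhi(shares):
    # HHI = sum of squared market shares, with shares in percent (0-100).
    # Under the DOJ/FTC merger guidelines, above 2500 is "highly concentrated."
    return sum(s ** 2 for s in shares)

def top_n_share(shares, n=50):
    # Concentration ratio: combined share of the n largest firms.
    return sum(sorted(shares, reverse=True)[:n])

industry_a = [51.0] + [1.0] * 49  # one dominant firm plus a fringe
industry_b = [2.0] * 50           # fifty equal-sized firms

print(top_n_share(industry_a), top_n_share(industry_b))  # 100.0 100.0
print(hhi(industry_a), hhi(industry_b))                  # 2650.0 200.0
```

Both industries have a top-50 share of 100 percent, yet industry A is highly concentrated by the standard HHI thresholds while industry B is about as unconcentrated as it gets–which is the sense in which the 50-firm statistic is almost comical.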

The report then mentions a few specific industries, namely hospitals and wireless carriers, in which HHIs have increased. Looking at a few industries is hardly a systematic approach.

Airlines is another industry that is widely cited as experiencing greater concentration, and in which prices have increased with concentration. Given that a major driver of concentration has been the bankruptcy or financial distress of major carriers, and that the industry's distinctive cost characteristics (namely huge operational leverage and network structure) create substantial scale and network economies, it's not at all clear whether the previous lower prices were long run equilibrium prices. So some of the price increases may reflect super competitive prices, but some may just reflect that prices before were unsustainably low.

Looking over the discussion of these issues gives me flashbacks. There is a paleo industrial organization (“PalIO”?) feel to it. It harkens back to the ancient Structure-Conduct-Performance paradigm that was a thing in the 50s-70s. Implicit in the current discussion is the old SCP (LOL–that’s the closest I come to being associated with this view) idea that there is a causal connection between industry structure and market power. More concentrated markets are less competitive, and firms in such more concentrated, less competitive markets are more profitable. Those arguing that greater concentration increases income inequality go from this belief to their conclusion by claiming that the increased market power rents flow disproportionately to higher income/wealth individuals.

The PalIO view was challenged, and largely demolished, in the 70s and 80s, primarily by the Chicago School, which demonstrated alternative non-market power mechanisms that could give rise to correlations (in the cross-section and time series) between concentration and profitability. For instance, firms experiencing favorable "technology" shocks (which could encompass product or process innovations, organizational innovations, or superior management) will expand at the expense of firms not experiencing such shocks, and will be inframarginal and more profitable.

This alternative view forces one to ask why concentration has changed. Implicit in the position of those advocating more aggressive antitrust enforcement is the belief that firms have merged to exploit market power, and that lax antitrust enforcement has facilitated this.

But there are plausibly very different drivers of increased concentration. One is network and information effects, which tend to create economies of scale and result in larger firms and more concentrated markets. Yes, these effects may also give the dominant firms that benefit from the network/information economies market power, and they may charge super competitive prices, but these kinds of industries and firms pose thorny challenges to antitrust. First, since monopolization per se is not an antitrust violation, a Google can become dominant without merger or without collusion, leaving antitrust authorities to nip at the margins (e.g., attacking alleged favoritism in searches). Second, conventional antitrust remedies, such as breaking up dominant firms, may reduce market power, but sacrifice scale efficiencies: this is especially likely to be true in network/information industries.

The CEA report provides some indirect evidence of this. It notes that the distribution of firm profits has become notably more skewed in recent years. If you look at the chart, you will notice that the return on invested capital excluding goodwill for the 90th percentile of firms shot up starting in the late-90s. This is exactly the time the Internet economy took off. This resulted in the rise of some dominant firms with relatively low investments in physical capital. More concentration, more profitability, but driven by a technological shock rather than merger for monopoly.

Another plausible driver of increased concentration in some markets is regulation. Hospitals are often cited as examples of how lax merger policy has led to increased concentration and increased prices. But given the dominant role of the government as a purchaser of hospital services and a regulator of medical markets, whether merger is in part an economizing response to dealing with a dominant customer deserves some attention.

Another industry that has become more concentrated is banking. The implicit and explicit government support for too big to fail enterprises has obviously played a role in this. Furthermore, extensive government regulation of banking, especially post-Crisis, imposes substantial fixed costs on banks. These fixed costs create scale economies that lead to greater scale and concentration. Further, regulation can also serve as an entry barrier.

The fixed cost of regulation (interpreted broadly as the cost of responding to government intervention) is a ubiquitous phenomenon. No discussion of the rise of concentration should be complete without it. But it is largely absent, despite the fact that it has long been known that rent seeking firms secure regulations for their private benefit, and to the detriment of competition.

The CEA study mentions increased concentration in the railroad industry since the mid-80s. But this is another industry that is subject to substantial network economies, and the rise in concentration from that date in particular reflects an artifact of regulation: before the Staggers Act deregulated rail in 1980, that industry was inefficiently fragmented due to regulation. It was also a financial basket case. Much of the increased concentration reflects an efficiency-enhancing rationalization of an industry that was almost wrecked by regulation. Some segments of the rail market have likely seen increased market power, but most segments are subject to competition from non-rail transport (e.g., trucking, ocean shipping, or even pipelines that permit natural gas to compete with coal).

Another example of how regulation can increase concentration and reduce competition in relevant markets: EPA regulation of gasoline. The intricate regional and seasonal variations in gasoline blend standards mean that there is not a single market for gasoline in the United States: fuel that meets EPA standards for one market at one time of year can't be supplied to another market at another time because it doesn't meet the requirements there and then. This creates balkanized refinery markets, which, given the large scale economies of refining, tend to be highly concentrated.

Reviewing this makes plain that as in so many things, what we are seeing in the advocacy of more aggressive antitrust is the prescription of treatments based on a woefully incomplete understanding of causes.

There is also an element of political trendiness here. Inequality is a major subject of debate at present, and everyone has their favorite diagnosis and preferred treatment. This has an element of using the focus on inequality to advance other agendas.

Even if one grants the underlying PalIO concentration-monopoly profit premise, however, antitrust is likely to be an extremely ineffectual means of reducing income inequality.

For one thing, there is no good evidence on how market power rents are distributed. The presumption is that they go to CEOs and shareholders. The evidence behind the first presumption is weak, at best, and some evidence cuts the other way. Moreover, it is also the case that some market power rents are not distributed to shareholders, but accrue to other stakeholders within firms, including labor.

Moreover, the numbers just don't work out. In 2015, after-tax corporate income represented only about 10 percent of US national income. Market power rents represented only a fraction of those corporate profits. Market power rents that could be affected by more rigorous antitrust enforcement represented only a fraction–and likely a small fraction–of total corporate profits. If we are talking about 1 percent of US income the distribution of which could be affected by antitrust enforcement, I would be amazed. I wouldn't be surprised if it's an order of magnitude less than that.
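The back-of-envelope logic can be made explicit. Only the roughly-10-percent corporate income share comes from the text above; the other two fractions are assumptions I have picked purely for illustration, and the point is that the product is small under almost any plausible choices:

```python
# Illustrative arithmetic only: the 0.10 figure is from the text (after-tax
# corporate income as a share of 2015 US national income); the other two
# fractions are assumed for the sake of the sketch.
corp_income_share = 0.10   # after-tax corporate income / national income
rent_share = 0.20          # assumed: fraction of profits that are market-power rents
addressable_share = 0.25   # assumed: fraction of those rents antitrust could reach

affected = corp_income_share * rent_share * addressable_share
print(f"{affected:.3%} of national income")  # prints: 0.500% of national income
```

Even with fairly generous assumed fractions, the share of national income whose distribution antitrust could plausibly affect comes out at well under 1 percent.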

With respect to how much of corporate income could be affected by antitrust policy, it’s worthwhile to consider a point mentioned earlier, and which the CEA raised: the distribution of corporate profits is very skewed. Further, if you look at the data more closely, very little of the big corporate profits could be affected by more rigorous antitrust–in particular, more aggressive approaches to mergers.

In 2015, 28 firms earned 50 percent of the earnings of all S&P500 firms. Apple alone earned 6.7 percent of the collective earnings of the S&P500. Many of the other firms represented in this list (Google, Microsoft, Oracle, Intel) are firms that have grown from network effects or intellectual capital rather than through merger for market power. They became big in sectors where the competitive process favors winner-take-most. It’s also hard to see how antitrust matters for other firms, Walt Disney for instance.

Only three industries have multiple firms on the list. Banking is one, and I've already discussed that: yes, it has grown through merger, but regulation and government are major drivers of that. There have also been efficiency gains from consolidating an industry that regulation historically left horrifically, inefficiently fragmented, though where current scale is relative to efficient scale is a matter of intense debate.

Another is airlines. Again, given the route network-driven scale economies, and the previous financial travails of the industry, it’s not clear how much market power rents the industry is generating, and whether antitrust could reduce those rents without imposing substantial inefficiencies.

Automobiles is on the list. But the automobile industry is now far less concentrated than it used to be in the days of the Big Three, and highly competitive. Oil is represented on the list by one company: ExxonMobil. Crude and gas production is not highly concentrated, when one looks at the relevant market–which is the world. This is another industry which has seen a decline in dominance by major firms over the years.

Looking over this list, it is difficult to find large dollars that could even potentially be redistributed via antitrust. And given that this list represents a very large fraction of corporate profits, the potential impact of antitrust on income distribution is likely to be trivial.

(As an exercise for interested readers: calculate industry profits by a fairly granular level of disaggregation by NAICS code, and see which ones have become more concentrated as a result of merger in recent years.)

In sum, if you want to ameliorate inequality, I would put antitrust on the bottom of your list. It's not where the money is because the kind of market power that antitrust could even conceivably address accounts for a small portion of profits, which in turn account for a modest percentage of national income. Market power changes in many profitable industries have almost certainly been driven by major technological changes, and antitrust could reduce them only by gutting the efficiency gains produced by these changes.


July 6, 2016

Brexit: Breaking the Cartel of Nations. Could Position Limits Be a Harbinger?

Filed under: Clearing,Commodities,Derivatives,Economics,Politics,Regulation — The Professor @ 7:50 pm

One of the ideas that I floated in my first post-Brexit post was that freed from some of the EU’s zanier regulations, it could compete by offering a saner regulatory environment. One of the specific examples I gave was position limits, for as bad as the US position limit proposal is, it pales in comparison to the awfulness of the EU version. And lo and behold! Position limits are first on the list of things to be trimmed, and the FCA appears to be on board with this:

Britain-based commodity exchanges may have some leeway in the way they manage large positions after the UK exits the European Union, but they will still have to comply with EU rules from 2018, experts say.

Position limits, a way of controlling how much of an individual commodity trading firms can hold, are being introduced for the first time in the Markets in Financial Instruments Directive II (MiFID II) from January 2018.

Britain voted to leave the EU last month, but its exit has to be negotiated with the remaining 27 members, a process that is meant to be completed within two years of triggering a formal legal process.

“It is too early to say what any new UK regime will look like particularly given pressure for equivalence,” James Maycock, a director at KPMG, said, referring to companies having to prove that rules in their home countries are equivalent to those in the EU.

“But UK commodity trading venues may have more flexibility in setting position limits if they are not subject to MiFID II.”

. . . .

Britain’s Financial Conduct Authority (FCA) said in a statement after the Brexit vote that firms should continue to prepare for EU rules. But it has previously expressed doubts about position limits on all commodity contracts.

“We do not believe that it is necessary, as MiFID II requires, to have position limits for every single one of the hundreds of commodity derivatives contracts traded in Europe. Including the least significant,” said Tracey McDermott, former acting chief executive at the FCA in February this year.

“And I know there are concerns, frankly, that the practical details of position reporting were not adequately thought through in the negotiations on the framework legislation.”

Here’s hoping.

This could explain a major driver behind the Eurogarchs' intense umbrage at Brexit. Competition from the UK, particularly in the financial sector, will provide a serious brake on some of the EU's more dirigiste endeavors. This is especially true in financial/capital markets because capital is extremely mobile. Further, I conjecture that Europe needs The City more than The City needs Europe. Hollande and others in Europe are talking about walling off the EU's financial markets from perfidious Albion, but the most likely outcome of this is to create a continental financial ghetto or gulag, A Prison of Banks.

If financial protectionism of the type Hollande et al dream of could work, French, German and Dutch bankers should be dancing jigs right now. But they seem to be the most despondent and outraged at Brexit.

A (somewhat tangential) remark. Another reason for taking umbrage is that the UK has served as a safety valve for European workers looking to escape the dysfunctional continental labor markets. This is especially true for many younger, high skill/high education French, Germans, etc. (especially the French). With the safety valve cut off, there will be more angry people putting pressure on European governments.

This could be a good thing, if it forces the Euros (especially the French) to loosen up their growth-and-employment-sapping labor laws. But in the short to medium term, it means more political ferment, which the Euro elite doesn’t like one bit.

This all leads to a broader point. Cooperation is a double-edged sword. The EU's main selling point is that intra-European cooperation has led to a reduction in trade barriers that has increased competition in European goods markets. But the EU has also functioned as a Cartel of Nations that has restricted competition on many dimensions.

I note that one major international cooperative effort spearheaded by the Europeans is the attempt to reduce and perhaps eliminate competition between nations on tax. “Tax harmonization” sounds so Zen, but it really means cutting off any means of escape from the depredations of the state. But tax is just one area where governments don’t like to compete with one another. Much regulatory harmonization and coordination and imposed uniformity is intended to reduce inter-state competition that limits the ability of governments to redistribute rents.

This is one reason to believe that Britain’s exit will have some big upsides, not just for the UK but for Europe generally. It will invigorate competition between jurisdictions that statists hate. And it is precisely these upsides which send the dirigistes into paroxysms of anger and despair. Feel their pain, and rejoice in it.

 


June 30, 2016

Financial Network Topology and Women of System: A Dangerous Combination

Filed under: Clearing,Derivatives,Economics,Financial crisis,Politics,Regulation — The Professor @ 7:43 pm

Here’s a nice article by Robert Henderson in the science magazine Nautilus which poses the question: “Can topology prevent the next financial crisis?” My short answer: No.  A longer answer–which I sketch out below–is that a belief that it can is positively dangerous.

The idea behind applying topology to the financial system is that financial firms are interconnected in a network, and these connections can be represented in a network graph that can be studied. At least theoretically, if you model the network formally, you can learn its properties–e.g., how stable is it? will it survive certain shocks?–and perhaps figure out how to make the network better.

Practically, however, this is an illustration of the maxim that a little bit of knowledge is a dangerous thing.

Most network modeling has focused on counterparty credit connections between financial market participants. This research has attempted to quantify these connections and graph the network, and ascertain how the network responds to certain shocks (e.g., the bankruptcy of a particular node), and how a reconfigured network would respond to these shocks.
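To make the exercise concrete, here is a minimal sketch, in Python, of the kind of shock-propagation calculation these models perform: default one node and let losses cascade to counterparties whose capital buffers they exhaust. The firms, buffers, and exposures are entirely invented for illustration.

```python
# Toy version of a counterparty-network stress test: nodes are firms with
# capital buffers, directed edges are credit exposures. Defaulting a node
# wipes out what it owes; creditors whose cumulative losses exceed their
# buffers default in turn. All numbers are invented.

def cascade(capital, exposures, initial_default):
    """capital: {firm: buffer}; exposures: {(creditor, debtor): amount}.
    Returns the set of firms that ultimately default."""
    defaulted = {initial_default}
    losses = {firm: 0.0 for firm in capital}
    frontier = [initial_default]
    while frontier:
        debtor = frontier.pop()
        for (creditor, d), amount in exposures.items():
            if d == debtor and creditor not in defaulted:
                losses[creditor] += amount
                if losses[creditor] > capital[creditor]:
                    defaulted.add(creditor)
                    frontier.append(creditor)
    return defaulted

capital = {"A": 10, "B": 5, "C": 12, "D": 4}
exposures = {("B", "A"): 8, ("C", "B"): 6, ("D", "C"): 3, ("D", "B"): 2}
print(cascade(capital, exposures, "A"))  # B's 8 exposure to A exceeds its 5 buffer, so B falls too
```

Even in this toy, the result turns entirely on which exposures and buffers are in the model, which is exactly the problem with the real exercises: the connections the modeler leaves out do not stop transmitting losses.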

There are many problems with this. One major problem–which I’ve been on about for years, and which I am quoted about in the Nautilus piece–is that counterparty credit exposure is only one type of many connections in the financial network: liquidity is another source of interconnection. Furthermore, these network models typically ignore the nature of the connections between nodes. In the real world, nodes can be tightly coupled or loosely coupled. The stability features of tightly and loosely connected networks can be very different even if their topologies are identical.

As a practical example, not only does mandatory clearing change the topology of a network, it also changes the tightness of the coupling through the imposition of rigid variation margining. Tighter coupling can change the probability of the failure of connections, and the circumstances under which these failures occur.

Another problem is that models frequently leave out some participants. As another practical example, network models of derivatives markets include the major derivatives counterparties, and find that netting reduces the likelihood of a cascade of defaults within that network. But netting achieves this by redistributing the losses to other parties who are not explicitly modeled. As a result, the model is incomplete, and gives an incomplete understanding of the full effects of netting.
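A stylized numerical example (all figures invented) shows the redistribution at work. Suppose a failed firm owes a dealer 80 gross while the dealer owes it 60, outside creditors are owed 100, and the estate holds 100 of other assets. Netting lets the dealer set off the 60 rather than paying it into the estate:

```python
# Invented numbers: compare creditor losses with and without close-out netting.
def recoveries(net):
    """Return (dealer loss, outside-creditor loss) in the bankruptcy."""
    if net:
        assets, dealer_claim = 100.0, 80.0 - 60.0   # dealer keeps the 60 via set-off
    else:
        assets, dealer_claim = 100.0 + 60.0, 80.0   # dealer pays 60 in, claims 80
    outside_claim = 100.0
    rate = assets / (dealer_claim + outside_claim)  # pro-rata distribution
    return dealer_claim * (1 - rate), outside_claim * (1 - rate)

for net in (False, True):
    dealer_loss, outside_loss = recoveries(net)
    print(f"netting={net}: dealer loss {dealer_loss:.1f}, outside-creditor loss {outside_loss:.1f}")
```

Total credit losses are 20 either way; netting cuts the dealer's share from about 8.9 to about 3.3 and pushes the difference onto the creditors outside the modeled network, which is precisely why a model that omits them overstates the benefit.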

Thus, any network model is inherently a very partial one, and is therefore likely to be a very poor guide to understanding the network in all its complexity.

The limitations of network models of financial markets remind me of the satirical novel Flatland, whose inhabitants of Pointland, Lineland, and Flatland are flummoxed by higher-dimensional objects. A square finds it impossible to conceptualize a sphere, because he observes only the circular cross-section as the sphere passes through his plane. But in financial markets the problem is much greater, because the dimensionality is immense, and the objects are not regular and unchanging (like spheres) but irregular and constantly changing on many dimensions and time scales (e.g., nodes enter and exit or combine, nodes expand or contract, and the connections between them change minute to minute).

This means that although network graphs may help us better understand certain aspects of financial markets, they are laughably limited as a guide to policy aimed at reengineering the network.

But frighteningly, the Nautilus article starts out with a story of Janet Yellen comparing a network graph of the uncleared CDS market (analogized to a tangle of yarn) with a much simpler graph of a hypothetical cleared market. Yellen thought it was self-evident that the simple cleared market was superior:

Yellen took issue with her ball of yarn’s tangles. If the CDS network were reconfigured to a hub-and-spoke shape, Yellen said, it would be safer—and this has been, in fact, one thrust of post-crisis financial regulation. The efficiency and simplicity of Kevin Bacon and Lowe’s Hardware is being imposed on global derivative trading.

 

God help us.

Rather than rushing to judgment, a la Janet, I would ask: "why did the network form in this way?" I understand perfectly that there is unlikely to be an invisible hand theorem for networks, whereby the independent and self-interested actions of actors result in a Pareto optimal configuration. There are feedbacks and spillovers and non-linearities. As a result, the concavity that drives the welfare theorems is notably absent. An Olympian economist is sure to identify "market failure," and be mightily displeased.

But still, there is optimizing behavior going on, and connections are formed and nodes enter and exit and grow and shrink in response to profit signals that are likely to reflect costs and benefits, albeit imperfectly. Before rushing in to change the network, I’d like to understand much better why it came to be the way it is.

We have only rudimentary understanding of how network configurations develop. Yes, models that specify simple rules of interaction between nodes can be simulated to produce networks that differ substantially from random networks. These models can generate features like the small world property. But it is a giant leap to go from that, to understanding something as huge, complex, and dynamic as a financial system. This is especially true given that there are adjustment costs that give rise to hysteresis and path-dependence, as well as shocks that give rise to changes.
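The canonical example of such a simple-rules model is the Watts-Strogatz construction: start from a regular ring and rewire a small fraction of edges at random, and average distances collapse while the network stays mostly local. Here is a bare-bones sketch in pure Python, with the parameters (60 nodes, two neighbors per side, a 20 percent rewiring probability) chosen arbitrarily for illustration:

```python
import random
from collections import deque

def watts_strogatz(n, k, beta, seed=0):
    """Ring of n nodes, each joined to its k nearest neighbours on each side,
    with each original edge rewired to a random endpoint with probability beta."""
    rng = random.Random(seed)
    edges = {frozenset((i, (i + j) % n)) for i in range(n) for j in range(1, k + 1)}
    for e in list(edges):
        if rng.random() < beta:
            i = min(e)
            target = rng.randrange(n)
            new = frozenset((i, target))
            if target != i and new not in edges:  # skip self-loops and duplicates
                edges.discard(e)
                edges.add(new)
    return edges

def avg_path_length(n, edges):
    """Mean shortest-path distance over reachable pairs, via BFS from each node."""
    adj = {i: set() for i in range(n)}
    for e in edges:
        a, b = tuple(e)
        adj[a].add(b)
        adj[b].add(a)
    total = pairs = 0
    for s in range(n):
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

ring = watts_strogatz(60, 2, 0.0)          # pure lattice: long average paths
small_world = watts_strogatz(60, 2, 0.2)   # a few shortcuts shorten them sharply
print(avg_path_length(60, ring), avg_path_length(60, small_world))
```

Even this toy reproduces the small-world property, but note how far it is from a model of an actual financial system: the nodes are identical, the edges are unweighted, and nothing enters, exits, or responds to incentives.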

Further, let’s say that the Olympian economist Yanet Jellen establishes that the existing network is inefficient according to some criterion (not that I would even be able to specify that criterion, but work with me here). What policy could she adopt that would improve the performance of the network, let alone make it optimal?

The very features–feedbacks, spillovers, non-linearities–that can create suboptimality also make it virtually impossible to know how any intervention will affect that network, for better or worse, under the myriad possible states in which that network must operate. Networks are complex and emergent and non-linear. Changes to one part of the network (or changes to the way that agents who interact to create the network must behave and interact) can have impossible to predict effects throughout the entire network. Small interventions can lead to big changes, but which ones? Who knows? No one can say "if I change X, the network configuration will change to Y." I would submit that it is impossible even to determine the probability distribution of configurations that arise in response to policy X.

In the language of the Nautilus article, it is delusional to think that simplicity can be “imposed on” a complex system like the financial market. The network has its own emergent logic, which passeth all understanding. The network will respond in a complex way to the command to simplify, and the outcome is unlikely to be the simple one desired by the policymaker.

In natural systems, there are examples where eliminating or adding a single species may have little effect on the network of interactions in the food web. Eliminating one species may just open a niche that is quickly filled by another species that does pretty much the same thing as the species that has disappeared. But eliminating a single species can also lead to a radical change in the food web, and perhaps its complete collapse, due to the very complex interactions between species.

There are similar effects in a financial system. Let’s say that Yanet decides that in the existing network there is too much credit extended between nodes by uncollateralized derivatives contracts: the credit connections could result in cascading failures if one big node goes bankrupt. So she bans such credit. But the credit was performing some function that was individually beneficial for the nodes in the network. Eliminating this one kind of credit creates a niche that other kinds of credit could fill, and profit-motivated agents have the incentive to try to create it, so a substitute fills the vacated niche. The end result: the network doesn’t change much, the amount of credit and its basic features don’t change much, and the performance of the network doesn’t change much.

But it could be that the substitute forms of credit, or the means used to eliminate the disfavored form of credit (e.g., requiring clearing of derivatives), fundamentally change the network in ways that affect its performance, or at least can do so in some states of the world. For example, it may make the network more tightly coupled, and therefore more vulnerable to precipitous failure.

The simple fact is that anybody who thinks they know what is going to happen is dangerous, because they are messing with something that is very powerful that they don’t even remotely understand, or understand how it will change in response to meddling.

Hayek famously said “the curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” Tragically, too many (and arguably a large majority of) economists are the very antithesis of what Hayek says that they should be. They imagine themselves to be designers, and believe they know much more than they really do.

Janet Yellen is just one example, a particularly frightening one given that she has considerable power to implement the designs she imagines. Rather than being the Hayekian economist putting the brake on ham-fisted interventions into poorly understood systems, she is far closer to Adam Smith's "Man of System":

The man of system, on the contrary, is apt to be very wise in his own conceit; and is often so enamoured with the supposed beauty of his own ideal plan of government, that he cannot suffer the smallest deviation from any part of it. He goes on to establish it completely and in all its parts, without any regard either to the great interests, or to the strong prejudices which may oppose it. He seems to imagine that he can arrange the different members of a great society with as much ease as the hand arranges the different pieces upon a chess-board. He does not consider that the pieces upon the chess-board have no other principle of motion besides that which the hand impresses upon them; but that, in the great chess-board of human society, every single piece has a principle of motion of its own, altogether different from that which the legislature might chuse to impress upon it. If those two principles coincide and act in the same direction, the game of human society will go on easily and harmoniously, and is very likely to be happy and successful. If they are opposite or different, the game will go on miserably, and the society must be at all times in the highest degree of disorder.

When there are Men (or Women!) of System about, and the political system gives them free rein, analytical tools like topology can be positively dangerous. They make some (unjustifiably) wise in their own conceit, and give rise to dreams of Systems that they attempt to implement, when in fact their knowledge is shockingly superficial, and implementing their Systems is likely to create the highest degree of disorder.


June 29, 2016

Will the EU Cut Off Its Nose to Spite Its Face on Clearing, Banking & Finance?

Filed under: Clearing,Commodities,Derivatives,Economics,Exchanges,Politics,Regulation — The Professor @ 7:45 pm

French President Francois Hollande is demanding that clearing of Euro derivatives take place in the Eurozone. Last year the European Central Bank had attempted to require this, claiming that it could not be expected to provide liquidity to a non-Eurozone CCP like London-based LCH.

The ECB lost that case in a European court, but now sees an opportunity to prevail post-Brexit, when London will be not just non-Eurozone, but non-EU. Hollande is cheerleading that effort.

It is rather remarkable to see the ECB, which was only able to rescue European banks desperate for dollar funding during the crisis because of the provision of $300 billion in swap lines from the Fed, claiming that it can't supply € liquidity to a non-Eurozone entity. How about swap lines with the BoE, which could then provide support to LCH if necessary? Or is the ECB all take, and no give?

Hollande, like other Europeans, is likely acting partly out of protectionist motives, seeking to steal business for continental entities from London (and perhaps the US). But Hollande was also quite upfront about the punitive, retaliatory, and exemplary nature of this move:

“The City, which thanks to the EU, was able to handle clearing operations for the eurozone, will not be able to do them,” he said. “It can serve as an example for those who seek the end of Europe . . . It can serve as a lesson.” [Emphasis added.]

That will teach perfidious Albion for daring to leave the EU! Anyone else harboring such thoughts, take note!

The FT article does not indicate the location of M. Hollande’s nose, for he obviously just cut it off to spite his face.

In a more serious vein, this is no doubt part of the posturing that we will see ad nauseam in the next two plus years while the terms of the UK's departure are negotiated. Stock up with supplies, because this is going to take a while, since (1) everything is negotiable, (2) almost all negotiations go to the brink of the deadline, or beyond, and (3) these negotiations will be particularly complicated because the Eurogarchs will be conducting them with an eye on how the outcome affects the calculations of other EU members contemplating following Britain out the door–and because immigration issues will loom over the negotiations.

When evaluating a negotiation, it’s best to start with the optimal, surplus maximizing “Coasean bargain” (a term which Coase actually didn’t like, but it is widely used). This, as Elon Musk would say, is a no brainer: allow € clearing in London, through LCH. That is, a maintenance of the status quo.

What are the alternatives? One would be that € clearing for those subject to EU regulation and some non-EU firms would take place in the Eurozone (say Paris or Frankfurt), some € clearing might take place in London or the US, and most dollar and other non-€ clearing would take place in London and the US.  This would require the EU to permit its banks to clear economically in the UK or US, by granting equivalence to non-EU CCPs for non-€ trades, or something similar.

There are several inefficiencies here. First, it would fragment netting sets and increase the probability that one CCP goes bust. For instance, if a bank that is a member of an EU and a non-EU CCP (as would almost certainly be the case of the large European banks that do business in all major currencies) defaulted, it is possible that it could have a loss on its € deals and a gain on its non-€ deals (or vice versa). If those were cleared in a single CCP, the gain and loss could be offset, thereby reducing the CCP’s loss, and perhaps resulting in no loss to the CCP at all: this is what happened with Lehman at the CME, where losses on some of its positions were greater than collateral, but losses on others were smaller, and the total loss was less than total collateral. However, if the business was split, one of the CCPs could suffer a loss that could potentially put it in jeopardy, or force members to stump up additional contributions to the default fund during a time when they are financially stressed.
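A toy calculation, with invented figures in the spirit of the Lehman example, makes the point. Say the defaulting member's € book loses 90, its non-€ book gains 70, and it posted 30 of collateral in total:

```python
# Invented numbers: compare one combined CCP against clearing split across two.
def ccp_shortfall(losses, collateral):
    """Loss to a CCP after netting the member's positions it clears
    (gains are negative losses) and applying the collateral it holds."""
    net = sum(losses)
    return max(0.0, net - collateral)

euro_loss, non_euro_gain, collateral = 90.0, -70.0, 30.0

combined = ccp_shortfall([euro_loss, non_euro_gain], collateral)
# Fragmented clearing: each CCP sees only its own book and half the collateral.
split = (ccp_shortfall([euro_loss], collateral / 2)
         + ccp_shortfall([non_euro_gain], collateral / 2))
print(combined, split)  # 0.0 versus 75.0
```

With one CCP the gain offsets the loss and the collateral more than covers the remainder; with two, the € CCP bears a 75 shortfall that must come out of its default fund, while the other CCP's surplus does it no good.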

Second, default management would be more difficult, risky and costly if split across two or more CCPs. It would be easier to put in place dirty hedges for a broader portfolio than two narrower ones, and to allocate or auction off a combined portfolio than fragmented ones. Moreover, it would be necessary to coordinate default management across CCPs in a situation where their interests are not completely aligned, and indeed, where interests may be strongly in conflict. Furthermore, there would be duplication of personnel, as CCP members would be required to dispatch people to two different CCPs to manage the default.

Third, even during “peacetime,” fragmented clearing would sacrifice collateral and capital efficiencies and increase operational costs and complexity.

But it could be worse! Maybe the Europeans will cut off their noses and ears (and maybe some other parts lower down), and deny a UK CCP equivalence for any transaction undertaken by an EU bank. The outcome would be EU banks clearing in Europe, and most everybody else clearing outside of Europe. This would result in multiple inefficiently small CCPs clearing in all currencies that would exacerbate all of the negative consequences just outlined: netting set inefficiencies would be even worse, default risk management even more difficult, and peacetime collateral, capital, and operational efficiencies would be even worse.

Oh, and this alternative would require the ECB to obtain dollar and sterling (and other currency) liquidity lines to allow it to provide non-€ liquidity to its precious little CCP. How hypocritical is that? (Not that hypocrisy would cost Hollande et al any sleep. It hasn’t yet.)

The fact is that CCPs exhibit strong economies of scale and scope, and although mega-CCPs concentrate risk, fragmentation creates its own special problems.

So the wealth-maximizing outcome would be for the EU to come to an accommodation on central clearing that would effectively perpetuate the pre-Brexit status quo. Wealth maximization exercises a strong pull, meaning that this is the most likely outcome, although there will likely be a lot of posturing, bluffing, threatening, etc., before this outcome is achieved (and at the last minute).

I would expect that EU banks would support the Coasean bargain, further increasing its political viability. Yes, Deutsche Borse would be pushing for an EU-centric outcome, and some Europols would take pride at having their own (sub-scale and/or sub-scope) CCP, but the greater cost and risk imposed on banks would almost certainly induce them to put heavy pressure behind a status quo-preserving deal.

This raises the issue of negotiation of banking and capital market issues more generally. There has been a lot of attention paid to the fact that British banks would probably lose passporting rights into the EU post-exit, and this would be costly for them. But European banks actually rely even more on passporting to get access to London. Since London is still almost certain to remain the dominant financial center (especially since the UK government will have a tremendous incentive to facilitate that), European banks would suffer as much or more than UK ones if the passporting system was eliminated (and a close substitute was not created).

Thus, if the negotiations were only about clearing, banking, and capital markets, mutual self-interest (and political economy, given the huge influence of the finance sector on policymakers) would strongly favor a deal that would largely maintain the status quo. But of course the negotiations are not about these issues alone. As I’ve already noted, the EU may try to punish the British even if it also takes a hit because of the effect this might have on the calculations of others who might bolt from the Union.

Furthermore, the most contentious issue–immigration–is very much in play. Merkel, Hollande, and others have said that to obtain a Norway-style relationship with the EU, the UK would have to agree to unlimited movement of people. But that issue is the one that drove the Leave vote, and agreeing to this would be viewed as a gutting of the referendum, and a betrayal. It will be hard for the UK to agree to that.

Perhaps even this could be finessed if the EU secured its borders, but Merkel’s insanity on this issue (and the insanity of other Eurogarchs) makes this unlikely, short of a populist political explosion within the EU. But if that happens, negotiations between the EU and the UK will likely be moot, because there won’t be much of the EU left to negotiate with, or worth negotiating with.

In sum, if it were only about banking and clearing, economic self-interest would lead all parties to avoid mutually destructive protectionism in these areas. But highly emotional issues, political power, and personal pride are also present, and in spades. Thus, I am reluctant to bet much on the consummation of the economically efficient deal on financial issues. The financial sector is just one bargaining chip in a very big game.

 
