Streetwise Professor

September 16, 2016

De Minimis Logic

CFTC Chair Timothy Massad has come out in support of a one-year delay of the lowering of the de minimis swap dealer exemption notional amount from $8 billion to $3 billion. I recall Coase (or maybe it was Stigler) writing somewhere that an economist could pay for his lifetime compensation by delaying implementation of an inefficient law by even a day. By that reckoning, by delaying the step down of the threshold for a year Mr. Massad has paid for the lifetime compensation of his progeny for generations to come, for the de minimis threshold is a classic example of an inefficient law. Mr. Massad (and his successors) could create huge amounts of wealth by delaying its implementation until the day after forever.

There are at least two major flaws with the threshold. The first is that there is a large fixed cost to become a swap dealer. Small to medium-sized swap traders who avoid the obligation of becoming swap dealers under the $8 billion threshold will not avoid it under the lower threshold. Rather than incur the fixed cost, many of those who would be caught with the lower threshold will decide to exit the business. This will reduce competition and increase concentration in the swap market. This is perversely ironic, given that one ostensible purpose of Frankendodd (which was trumpeted repeatedly by its backers) was to increase competition and reduce concentration.

The second major flaw is that the rationale for the swap dealer designation, and the associated obligations, is to reduce risk. Big swap dealers mean big risk, and to reduce that risk, they are obligated to clear, to margin non-cleared swaps, and to hold more capital. But notional amount is a truly awful measure of risk. $X billion of vanilla interest rate swaps differ in risk from $X billion of CDS index swaps, which differ in risk from $X billion of single name CDS, which differ in risk from $X billion of oil swaps. Hell, $X billion of 10 year interest rate swaps differ in risk from $X billion of 2 year interest rate swaps. And let’s not even talk about the variation across diversified portfolios of swaps with the same notional values. So notional does not match up with risk in a discriminating way. Further, turnover (and the threshold is based on notional dealing activity over the prior year) doesn’t measure risk very well either.
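To put rough numbers on the tenor point alone, here is a back-of-the-envelope sketch (my own illustration, not anything in the rule, assuming a flat 2 percent curve, annual payments, and a simple annuity approximation of swap DV01):

```python
# Back-of-the-envelope illustration: equal notionals, very different rate risk.
# Assumptions (mine, purely illustrative): flat 2% curve, annual payments, and
# DV01 of a par swap approximated as notional * annuity factor * 1bp.

def swap_dv01(notional: float, years: int, rate: float = 0.02) -> float:
    annuity = (1 - (1 + rate) ** -years) / rate   # PV of $1 per year for `years` years
    return notional * annuity * 0.0001            # value change for a 1bp rate move

notional = 1_000_000_000  # $1 billion
for tenor in (2, 10):
    print(f"{tenor:>2}y swap, $1bn notional: DV01 ~ ${swap_dv01(notional, tenor):,.0f}")
# Roughly $194,000 for the 2-year versus $898,000 for the 10-year: the same
# notional carries about 4-5x the interest rate risk at the longer tenor.
```

On these made-up assumptions, a $1 billion book of 10 year swaps carries roughly four to five times the rate risk of a $1 billion book of 2 year swaps, yet both count identically against the threshold.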

But hey! We can measure notional! So notional it is! Yet another example of the regulatory drunk looking for his keys under the lamppost because that’s where the light is.

So bully for Chairman Massad. He has delayed implementation of a regulation that will do the opposite of some of the things it is intended to do, and merely fails to do other things it is supposed to do. Other than that, it’s great!


September 12, 2016

The New Deal With Chinese Characteristics

Filed under: China,Commodities,Economics,History,Politics,Regulation — The Professor @ 1:03 pm

When I was in Singapore last week I spoke at the FT Asia Commodities Summit. Regardless of whether the subject was ags or energy or metals, China played an outsized role in the discussion. In particular, participants focused on China’s newish “supply side” policy.

There is little doubt that the policy–which focuses on reducing capacity, or at least output, in steel, coal, and other primary industries–has had an impact on prices. Consider coking coal:

Coking coal, the material used by steelmakers to fire their blast furnaces, has become the best performing commodity of 2016 after surging more than 80 per cent over the past month on the back of production curbs and flooding in China.

Premium hard Australian coking coal delivered to China hit $180.9 a tonne on Friday, the highest level since price reporting agency Steel Index began publishing assessments in 2013. It has risen 131 per cent since the start of the year, outpacing gold, silver, iron ore and zinc — other top performing commodities.

The main driver of the rally — which has also roiled thermal coal — is Beijing’s decision to restrict the number of working days at domestic mines to 276 days per year from 330 previously.

This policy is aimed at improving the profitability of producers so they can repay loans to local banks. But it has reduced output and forced traders and steel mills to buy imported material from what is known as the seaborne market.

80 percent. In a month.

Or thermal coal:

Newcastle thermal coal is heading for the first annual gain in six years as China seeks to cut overcapacity and curb pollution. While the timing of the output adjustment is unavailable, it may start in September or October after recent price gains, Citigroup said in the report dated Sept. 8. Bohai-Rim is 26 percent higher from a year ago, when it was 409 yuan, while Newcastle has climbed as much as 40 percent this year.

The phrase “supply side reform” actually fits rather awkwardly here, at least to a Western ear. That phrase connotes the reduction of regulatory and tax burdens as a means of promoting economic growth. But Supply Side Reform With Chinese Characteristics means increasing the government’s role in managing the economy.

A better description would be that this is The New Deal With Chinese Characteristics. FDR’s New Deal was largely a set of measures to cartelize major US industries, in an effort to raise prices. The economic “thinking” behind this was completely wrongheaded, and motivated by the idea that there was “ruinous competition” in product and labor markets that required government intervention to fix. Apparently the higher prices and wages were supposed to increase aggregate demand. Or something. But although the New Deal’s cartelization scheme foundered on Constitutional shoals only a few years after its passage, in its brief existence it proved to be an economic nightmare rent by contradictions. For instance, if you increase prices in an upstream industry, that is detrimental to the downstream sector for which the upstream industry’s outputs are inputs. According to scholarship dating back to Milton Friedman and Anna Schwartz, and continuing through recent work by Cole and Ohanian, interference in the price mechanism and forced cartelization slowed the US’s recovery from the monetary shock that caused the Great Depression.

The motivation for the Chinese policy is apparently not so much to facilitate the rationalization of capacity in sectors with too much of it as to increase the revenue of firms in these sectors in order to permit them to pay back debt to banks and the holders of wealth management products (which often turn out to be banks too). Further, the policy is also driven by a need to sustain employment in these industries. Thus, the policies are intended to prop up the financially weakest and least efficient companies, rather than cull them.

So step back for a minute and contemplate what this means. Through a variety of policies, including most notably financial repression (that made capital artificially cheap) and credit stimulus, China encouraged massive investment in the commodities and primary goods sectors. These policies succeeded too well: they encouraged massive over-investment. So to offset that, and to mitigate the financial consequences for lenders, local governments, and workers, China is intervening to restrict output to raise prices. Rather than encouraging the correction of past errors, the new policy is perpetuating them, and creating new ones.

Remind me again how China’s government got the reputation as master economic managers, because I’m not seeing it. This is an example of a wasteful response to wasteful over-investment: waste coming and going. Further, it involves an increase in government intervention, which obviously has those in favor of a more liberal (in the Smithian sense) free market policy rather distraught, and which foreshadows even more waste in the future.

The policy is also obviously fraught with tensions, because it pits those consuming primary and intermediate goods against those producing them–and against the banks who are now more likely to get their money back. That is, it is a backdoor bank (and WMP) bailout, the costs of which will be borne by the consumers of the goods produced by industries that were supersized by past government profligacy.

Ironically, the policy also stokes something that the government purports to hate: speculation. Policy volatility encourages speculation on the goods and industries affected by these policies. The large movements in prices in the coal and iron-steel sectors in response to policy changes provide a strong incentive to speculate on future policy changes.

Further, it creates the potential for moral hazard in the future. Future lenders (and purchasers of WMP) will look back on this policy and conclude that the government may well undertake backdoor bailouts if the companies they have lent to run into difficulties. This is hardly conducive to prudent lending and investment.

This is not foresighted policy. It is extemporizing to fix near-term problems, most of which were created by past measures to fix near-term problems. There is a Three Stooges aspect to the entire endeavor.

Of course, it’s an ill wind that blows no one any good. Glencore is no doubt very grateful for Chairman Xi’s heavy-handed policy intervention. It has probably played a larger role in bringing the company back from the brink than did the company’s prudent efforts to cut debt. But it is probably too late, alas, for Peabody Coal, and Arch Coal, and all those “coal people” whom former empathizer in chief Bill Clinton mocked last week. The ingrates!

The bottom line is that China is the 800 pound gorilla of the commodity markets, and shifts in its policies can lead to huge moves in commodity prices. Given that these policy shifts are driven by the crisis du jour (e.g., commodity producer shakiness threatening to make banks and local governments shaky) rather than good economics, and that these policy shifts are difficult to predict given the opacity and centralization of Chinese decision making, they add substantial additional volatility to commodity prices and commodity markets: who can read the gorilla’s mind (which he changes often)? And woe to those who read it wrong.


September 6, 2016

HKEx: Improving Warehousing in China, or Creating a Shadow Banking Vehicle?

Filed under: Clearing,Commodities,Derivatives,Economics,Regulation — The Professor @ 9:33 pm

I am on my last day in Singapore, where I participated in the rollout of Trafigura’s Commodities Demystified hosted by IE Singapore. The event was very well attended (an overflow crowd) and the presentation and new publication (which builds off the conceptual framework of my 2013 white paper The Economics of Commodity Trading Firms) were well-received. It helps fill a yawning gap in knowledge about what commodity traders are and what they do.

In addition to that event, I spoke as a panelist at the FT’s Commodities Asia Summit. One of the main speakers was Charles Li, CEO of Hong Kong Exchanges and Clearing, who laid out his ambitious plans for mainland China. Things started out well. Whereas expectations were that HKEx would create a modest spot metals trading platform in China (because it doesn’t have and is unlikely to receive a license for trading futures), Li stated that HKEx (which owns the LME) would attempt to create a “lookalike” LME metals warehousing system. In the aftermath of the Qingdao fiasco this could be a very salutary development.

I would suggest caution, however. This may be easier said than done. While Li was describing this, my mind immediately turned to a paper I wrote over 2 decades ago about the successes and failures of commodity exchanges. One of the signal failures occurred when the Chicago Board of Trade attempted to tame the depredations of grain warehouses in the 1860s. Public storage was rife with all sorts of fraud and illicit dealing. The quality and quantity of grain being stored was a mystery, and warehousemen played all sorts of games to exploit their customers. The CBT, acting in the interests of traders who relied on the warehouses, attempted to impose rules and regulations on them, but failed utterly. Eventually the State of Illinois had to pass legislation to rein in some of the warehousemen’s more outrageous actions. Furthermore, larger traders integrated into warehousing, and eventually public storage became primarily ancillary to futures trading (i.e., to facilitate delivery against futures).

The CBT’s problem was that it did not have an adequate stick to beat the warehousemen into compliance. They were kicked out of the exchange, but the gains from being able to trade futures were smaller than the gains from operating warehouses outside the CBT’s rules.

Public warehousing has proved problematic in commodities to the present day. The LME’s travails with aluminum warehousing are just one example, but others abound in commodities including coffee, cocoa, and cotton. In cotton, for instance, even though warehouses are subject to federal regulation, there are chronic complaints that warehousemen do not load out cotton promptly, in order to enhance storage revenues.

So I wish Mr. Li luck. He’ll need it, especially since, lacking the ability to bar those who violate the warehouse rules from futures trading, he won’t even have the stick that proved inadequate for the CBT. Public warehousemen have long proved to be a very recalcitrant group, over time, place, and commodity.

Li specifically criticized the speculative nature of China’s futures exchanges, and claimed that his new venture would be for physical players, and that it would not be “another financial speculation forum.” But his follow-on remarks gave a sense of cognitive dissonance. He said the system would allow banks and hedge funds to participate in the market.

More disconcertingly, he highlighted the effects of financial repression in China (without using the phrase), which leads investors looking for higher returns than are available in the banking sector to turn to alternative investment vehicles. Li specifically mentioned wealth management products, and suggested that metals stored in the warehouses his new venture would oversee could form the basis for such products. I understood him to say that while the warehouses would facilitate the typical function of commodity storage, i.e., filling and emptying in order to accommodate temporary supply and demand shocks, there would also be the possibility that metal would be locked up for long periods to provide the basis for these wealth management products. What I envision is something like physical metal ETFs that have been introduced in the West. These are primarily in precious metals. JP Morgan proposed a similar vehicle for copper, but backed off due to the pressure from Carl Levin and others a couple of summers ago.

In other words, the new warehousing system would be part of the shadow banking system, thereby providing a new speculative vehicle for Chinese investors desperate to circumvent financial repression. Hence my cognitive dissonance.

I would also note that even a purely physical spot exchange can be a speculative venue, through buying and selling and borrowing/carrying warehouse receipts. The New York Gold Exchange of Black Friday infamy was hugely speculative, even though it was purely a spot physical exchange.

I also heard Li say that the venture would guarantee transactions, though I didn’t fully catch what would be guaranteed. Would the exchange be insuring those storing their metal against a Qingdao type event? If so, that’s a pretty audacious plan, and one fraught with risk.

This was just a speech at a conference. It will be interesting to see a fully-fleshed out plan. It will be particularly interesting to see how the enforcement mechanism for the warehouse regulation will work, and it will be especially interesting to see whether this venture is indeed just a mechanism for improving the efficiency of the physical metals market in China, or whether it will be a clever way to tap into the intense interest of investors large and small in China to speculate and find better returns than those on offer in the banking system. That is, will this be another speculative venue, but one masquerading as a staid market for physical players? Given the way China works, I’d bet on the latter. Pun intended.


August 23, 2016

Carl Icahn Rails Against the Evils of RIN City

Filed under: Climate Change,Commodities,Economics,Energy,Politics,Regulation — The Professor @ 12:15 pm

Biofuel Renewable Identification Numbers–“RINs”–are back in the news because of a price spike in June and July (which has abated somewhat). This has led refiners to intensify their complaints about the system. The focus of their efforts at present is to shift the compliance obligation from refiners to blenders. Carl Icahn has been quite outspoken on this. Icahn blames everyone, pretty much, including speculators:

“The RIN market is the quintessential example of a ‘rigged’ market where large gas station chains, big oil companies and large speculators are assured to make windfall profits at the expense of small and midsized independent refineries which have been designated the ‘obligated parties’ to deliver RINs,” Icahn wrote.

“As a result, the RIN market has become ‘the mother of all short squeezes,’” he added. “It is not too late to fix this problem if the EPA acts quickly.”

Refiners are indeed hurt by renewable fuel mandates, because they reduce the derived demand for the gasoline refiners produce. The fact that the compliance burden falls on them is largely irrelevant, however. This is analogous to tax-incidence analysis: the total burden of a tax, and the distribution of that burden, do not depend on who formally pays it. In the case of RINs, the total burden of the biofuels mandate and the distribution of that burden through the marketing chain doesn’t depend crucially on whether the compliance obligation falls on refiners, blenders, or your Aunt Sally.

Warning: There will be math!

A few basic equations describing the equilibrium in the gasoline, ethanol, biodiesel and RINs markets will hopefully help structure the analysis*. First consider the case in which the refiners must acquire RINs:

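(The original shows these equations as an image; rendered in rough notation of my own choosing, inferred from the descriptions below, they read approximately as follows, where \(P_G\) is the retail pump price, \(q\) the retail quantity of fuel, \(P^{R}_{BOB}\) the blendstock price when refiners bear the compliance obligation, \(P_E\) the ethanol price, \(P_{BD}\) the biodiesel price, \(P_{RIN}\) the RIN price, \(MC_i(\cdot)\) marginal cost functions, \(\theta\) the octane value of ethanol relative to BOB, and \(a\) the mandated amount of biofuel per unit of fuel.)

\[
\begin{aligned}
&P_G(q) = 0.9\,P^{R}_{BOB} + 0.1\,P_E &&(1)\\
&P_E + P_{RIN} = MC_E(0.1\,q) &&(2)\\
&P_{BD} + P_{RIN} = MC_{BD}\big((a-0.1)\,q\big) &&(3)\\
&P^{R}_{BOB} = MC_{BOB}(0.9\,q) + a\,P_{RIN} &&(4)\\
&P_E = \theta\,P_{BOB} &&(5)
\end{aligned}
\]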

Equation (1) is the equilibrium in the retail gasoline market. The retail price of gasoline, at the quantity of gasoline consumed, must equal the cost of blendstock (“BOB”) plus the price of the ethanol blended with it. The R superscript on the BOB price reflects that this is the price when refiners must buy a RIN. This equation assumes that one gallon of fuel at the pump is 90 percent BOB, and 10 percent ethanol. (I’m essentially assuming away blending costs and transportation costs, and a competitive blending industry.) The price of a RIN does not appear here because either the blender buys ethanol ex-RIN, or buys it with a RIN and then sells that to a refiner.

Equation (2) is the equilibrium in (an assumed competitive) ethanol market. The price an ethanol producer receives is the price of ethanol plus the price of a RIN (because the buyer of ethanol gets a RIN that it can sell, and hence is willing to pay more than the energy value of ethanol to obtain it). In equilibrium, this price equals the marginal cost of producing ethanol. Crucially, with a binding biofuels mandate, the quantity of ethanol produced is determined by the blendwall, which is 10 percent of the total quantity sold at the pump.

Equation (3) is equilibrium in the biodiesel market. When the blendwall binds, the shortfall between the mandate and the blendwall is met by purchasing RINs generated from the production of biodiesel. Thus, the RIN price is driven to the difference between the cost of producing the marginal gallon of biodiesel, and the price of biodiesel necessary to induce consumption of sufficient biodiesel to sop up the excess production stimulated by the need to obtain RINs. In essence, the price of biodiesel plus the cost of a RIN generated by production of biodiesel must equal the marginal cost of producing it. The amount of biodiesel needed is given by the difference between the mandate quantity and the quantity of ethanol consumed at the blendwall. The parameter a is the amount of biofuel per unit of fuel consumed required by the Renewable Fuel Standard.

Equation (4) is equilibrium in the market for blendstock–this is the price refiners get. The price of BOB equals the marginal cost of producing it, plus the cost of obtaining RINs necessary to meet the compliance obligation. The marginal cost of production depends on the quantity of gasoline produced for domestic consumption (which is 90 percent of the retail quantity of fuel purchased, given a 10 percent blendwall). The price of a RIN is multiplied by a because that is the number of RINs refiners must buy per gallon of BOB they sell.

Equation (5) just says that the value of ethanol qua ethanol is driven by the relative octane values between it and BOB.

The exogenous variables here are the demand curve for retail gasoline; the marginal cost of producing ethanol; the marginal cost of producing BOB (which depends on the price of crude, among other things); the marginal cost of biodiesel production; the demand for biodiesel; and the mandated quantity of RINs (and also the location of the blendwall). Given these variables, prices of BOB, ethanol, RINs, and biodiesel will adjust to determine retail consumption and exports.

Now consider the case when the blender pays for the RINs:

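(In the same rough, inferred notation, with the B superscript marking the blender-obligation case:)

\[
P_G(q) = 0.9\,P^{B}_{BOB} + 0.1\,P_E + a\,P_{RIN} \tag{6}
\]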

Equation (6) says that the retail price of fuel is the sum of the value of the BOB and ethanol blended to create it, plus the cost of RINs required to meet the standard. The blender must pay for the RINs, and must be compensated by the price of the fuel. Note that the BOB price has a “B” superscript, which indicates that the BOB price may differ when the blender pays for the RIN from the case where the refiner does.

Without exports, retail consumption, ethanol production, biodiesel production, and BOB production will be the same regardless of where the compliance burden falls. Note that all relevant prices are determined by the equilibrium retail quantity. It is straightforward to show that the same retail quantity will clear the market in both situations, as long as:

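(In the same rough, inferred notation, the condition shown in the image is approximately:)

\[
P^{R}_{BOB} = P^{B}_{BOB} + a\,P_{RIN} \tag{7}
\]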

That is, when the refiner pays for the RIN, the BOB price will be higher than when the blender does by the cost of the RINs required to meet the mandate.

Intuitively, if the burden is placed on refiners, in equilibrium they will charge a higher price for BOB in order to cover the cost of complying with the mandate. If the burden is placed on blenders, refiners can sell the same quantity at a lower BOB price (because they don’t have to cover the cost of RINs), but blenders have to mark up the fuel by the cost of the RINs to cover their cost of acquiring them. Here the analogy with tax incidence analysis is complete, because in essence the RFS is a tax on the consumption of fossil fuel, and the amount of the tax is the cost of a RIN.

This means that retail prices, consumption, production of ethanol, biodiesel and BOB, refiner margins and blender margins are the same regardless of who has the compliance obligation.
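For the numerically inclined, here is a deliberately stripped-down sketch of that incidence result. It is not the full model above: to keep it short I assume linear retail demand, a linear BOB marginal cost, and hold the ethanol and RIN prices fixed, with all parameter values invented for illustration.

```python
# Stripped-down illustration of RIN compliance incidence (simplified from the
# model in the post): linear retail demand, linear BOB marginal cost, and
# exogenous ethanol and RIN prices. All numbers are made up.

d0, d1 = 4.00, 0.02    # retail demand: P = d0 - d1*q   ($/gal; q in bn gal)
c0, c1 = 1.20, 0.005   # BOB marginal cost: MC = c0 + c1*x  ($/gal)
p_eth  = 1.50          # ethanol price, $/gal (held fixed for simplicity)
p_rin  = 0.80          # RIN price, $/RIN (held fixed for simplicity)
a      = 0.10          # RINs owed per gallon of BOB

def solve(refiner_pays: bool):
    # Retail equilibrium: d0 - d1*q = 0.9*P_BOB + 0.1*p_eth (+ RIN cost if the blender pays)
    # BOB pricing:        P_BOB = c0 + c1*(0.9*q) (+ a*p_rin if the refiner pays)
    # Either way the same RIN cost per retail gallon ends up in the pump price,
    # so the equilibrium quantity solves the same linear equation in q.
    rin_cost_per_retail_gal = 0.9 * a * p_rin
    q = (d0 - 0.9 * c0 - 0.1 * p_eth - rin_cost_per_retail_gal) / (d1 + 0.9 * 0.9 * c1)
    p_retail = d0 - d1 * q
    p_bob = c0 + c1 * 0.9 * q + (a * p_rin if refiner_pays else 0.0)
    return q, p_retail, p_bob

for label, flag in (("refiner pays RINs", True), ("blender pays RINs", False)):
    q, p_retail, p_bob = solve(flag)
    print(f"{label}: q = {q:.2f} bn gal, retail = ${p_retail:.3f}/gal, BOB = ${p_bob:.3f}/gal")
# Output: identical q and retail price in both cases; the BOB price differs by
# exactly a*p_rin, which is the tax-incidence point.
```

Quantity and pump price come out identical in both runs; only the BOB price shifts, and by exactly the per-gallon RIN cost.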

The blenders are complete ciphers here. If refiners have the compliance burden, blenders effectively buy RINs from ethanol producers and sell them to refiners. If the blenders have the burden, they buy RINs from ethanol producers and sell them to consumers. Either way, they break even. The marketing chain is just a little more complicated, and there are additional transactions in the RINs market, when refiners shoulder the compliance obligation.

Under either scenario, the producer surplus (profit, crudely speaking) of the refiners is driven by their marginal cost curves and the quantity of gasoline they produce. In the absence of exports, these things will remain the same regardless of where the burden is placed. Thus, Icahn’s rant is totally off-point.

So what explains the intense opposition of refiners to bearing the compliance obligation? One reason may be fixed administrative costs. If there is a fixed cost of compliance, that will not affect any of the prices or quantities, but will reduce the profit of the party with the obligation by the full amount of the fixed cost. This is likely a relevant concern, but the refiners don’t make it the centerpiece of their argument, probably because shifting the fixed cost around has no efficiency effects, but purely distributive ones, and purely distributive arguments aren’t politically persuasive. (Redistributive motives are major drivers of attempts to change regulations, but naked cost shifting arguments look self-serving, so rent seekers attempt to dress up their efforts in efficiency arguments: this is one reason why political arguments over regulations are typically so dishonest.) So refiners may feel obliged to come up with some alternative story to justify shifting the administrative cost burden to others.

There may also be differences in variable administrative costs. Fixed administrative costs won’t affect prices or output (unless they are so burdensome as to cause exit), but variable administrative costs will. Further, placing the compliance obligation on those with higher variable administrative costs will lead to a deadweight loss: consumers will pay more, and refiners will get less.

Another reason may be the seen-unseen effect. When refiners bear the compliance burden, the cost of buying RINs is a line item in their income statement. They see directly the cost of the biofuels mandate, and from an accounting perspective they bear that cost, even though from an economic perspective the sharing of the burden between consumers, refiners, and blenders doesn’t depend on where the obligation falls. What they don’t see–in accounting statements anyway–is that the price for their product is higher when the obligation is theirs. If the obligation is shifted to blenders, they won’t see their bottom line rise by the amount they currently spend on RINs, because their top line will fall by the same amount.

My guess is that Icahn looks at the income statements, and mistakes accounting for economics.

Regardless of the true motive for refiners’ discontent, the current compliance setup is not a nefarious conspiracy of integrated producers, blenders, and speculators to screw poor independent refiners. With the exception of administrative cost burdens (which speculators couldn’t care less about, since they will not fall on them regardless), shifting the compliance burden will not affect the market prices of RINs or the net-of-RINs price that refiners get for their output.

With respect to speculation, as I wrote some time ago, the main stimulus to speculation is not where the compliance burden falls (because again, this doesn’t affect anything relevant to those speculating on RINs prices). Instead, one main stimulus is uncertainty about EPA policy–which as I’ve written, can lead to some weird and potentially destabilizing feedback effects. The simple model sheds light on other drivers of speculation–the exogenous variables mentioned above. To consider one example, a fall in crude oil prices reduces the marginal cost of BOB production. All else equal, this encourages retail consumption, which increases the need for RINs generated from biodiesel, which increases the RINs price.

The Renewable Fuels Association has also raised a stink about speculation and the volatility of RINs prices in a recent letter to the CFTC and the EPA. The RFA (acronyms are running wild!) claims that the price rise that began in May cannot be explained by fundamentals, and therefore must have been caused by speculation or manipulation. No theory of manipulation is advanced (corner/squeeze? trade-based? fraud?), making the RFA letter another example of the Clayton Definition of Manipulation: “any practice that doesn’t suit the person speaking at the moment.” Regarding speculation, the RFA notes that supplies of RINs have been increasing. However, academic research (some by me, some by people like Brian Wright) has shown that inventories of a storable commodity (which a RIN is) can rise along with prices in a variety of circumstances, including a rise in volatility, or an increase in anticipated future demand. (As an example of the latter case, consider what happened in the corn market when the RFS was passed. Corn prices shot up, and inventories increased too, as consumption of corn was deferred to the future to meet the increased future demand for ethanol. The only way of shifting consumption was to reduce current consumption, which required higher prices.)

In a market like RINs, where there is considerable policy uncertainty, and also (as I’ve noted in past posts) complicated two-way feedbacks between prices and policy, the first potential cause is plausible. Further, since a good deal of the uncertainty relates to future policy, the second cause likely operates too, and indeed, these two causes can reinforce one another.

Unlike in the 2013 episode, there have been no breathless (and clueless) NYT articles about Morgan or Goldman or other banks making bank on RIN speculation. Even if they have profited, that’s not proof of anything nefarious, just an indication that they are better at plumbing the mysteries of EPA policy.

In sum, the recent screeching from Carl Icahn and others about the ramp-up in RIN prices is economically inane, and/or unsupported by evidence. Icahn is particularly misguided: RINs are a tax, and the burden of the tax depends not at all on who formally pays the tax. The costs of the tax are passed downstream to consumers and upstream to producers, regardless of whether consumers pay the tax, producers pay the tax, or someone in the middle pays the tax. As for speculation in RINs, it is the product of government policy. Obviously, there wouldn’t be speculation in RINs if there weren’t RINs in the first place. But on a deeper level, speculation is rooted in a mandate that does not correspond with the realities of the vast stock of existing internal combustion engines; the EPA’s erratic attempt to reconcile those irreconcilable things; the details of the RFS system (e.g., the ability to meet the ethanol mandate using biodiesel credits); and the normal vicissitudes of energy supply and demand. Speculation is largely a creation of government regulation, ironically, so to complain to the government about it (the EPA in particular) is somewhat perverse. But that’s the world we live in now.

* I highly recommend the various analyses of the RINs and ethanol markets in the University of Illinois’ Farm Doc Daily. Here’s one of their posts on the subject, but there are others that can be found by searching the website. Kudos to Scott Irwin and his colleagues.


August 20, 2016

On Net, This Paper Doesn’t Tell Us Much About What We Need to Know About the Effects of Clearing

Filed under: Clearing,Derivatives,Economics,Financial crisis,Politics,Regulation — The Professor @ 4:26 pm

A recent Office of Financial Research paper by Samim Ghamami and Paul Glasserman asks “Does OTC Derivatives Reform Incentivize Central Clearing?” Their answer is, probably not.

My overarching comment is that the paper is a very precise and detailed answer to maybe not the wrong question, exactly, but very much a subsidiary one. The more pressing questions include: (i) Do we want to favor clearing vs. bilateral? Why? What metric tells us that is the right choice? (The paper takes the answer to this question as given, and given as “yes.”) (ii) How do the different mechanisms affect the allocation of risk, including the allocation of risk outside the K banks that are the sole concern in the paper? (iii) How will the rules affect the scale of derivatives trading (the paper takes positions as given) and the allocation across cleared and bilateral instruments? (iv) Following on (ii) and (iii), will the rules affect risk management by end-users, and what is the implication of that for the allocation of risk in the economy?

Item (iv) has received too little attention in the debates over clearing and collateral mandates. To the extent that clearing and collateral mandates make it more expensive for end-users to manage risk, how will the end users respond? Will they adjust capital structures? Investment? The scale of their operations? How will this affect the allocation of risk in the broader economy? How will this affect output and growth?

The paper also largely ignores one of the biggest impediments to central clearing–the leverage ratio. (This regulation receives only a passing mention.) The requirement that even segregated client margins be treated as assets for the purpose of calculating this ratio (even though the bank does not have a claim on these margins) greatly increases the capital costs associated with clearing, and is leading some banks to exit the clearing business or to charge fees that make it too expensive for some firms to trade cleared derivatives. This brings all the issues in (iv) to the fore, and demonstrates that certain aspects of the massive post-crisis regulatory scheme are not well thought out, and inconsistent.

Of course, the paper also focuses on credit risk, and does not address liquidity risk issues at all. Perhaps this is a push between bilateral vs. cleared in a world where variation margin is required for all derivatives transactions, but still. The main concern about clearing and collateral mandates (including variation margin) is that they can cause huge increases in the demand for liquidity precisely at times when liquidity dries up. Another concern is that collateral supply mechanisms that develop in response to the mandates create new interconnections and new sources of instability in the financial system.

The most disappointing part of the paper is that it focuses on netting economies as the driver of cost differences between bilateral and cleared trading, without recognizing that the effects of netting are distributive. To oversimplify only a little, the implication of the paper is that the choice between cleared and bilateral trading is driven by which alternative redistributes the most risk to those not included in the model.

Viewed from that perspective, things look quite different, don’t they? It doesn’t matter whether the answer to that question is “cleared” or “bilateral”–the result will be that if netting drives the answer, the answer will result in the biggest risk transfer to those not considered in the model (who can include, e.g., unsecured creditors and the taxpayers). This brings home hard the point that these types of analyses (including the predecessor of Ghamami-Glasserman, Zhu-Duffie) are profoundly non-systemic because they don’t identify where in the financial system the risk goes. If anything, they distract attention away from the questions about the systemic risks of clearing and collateral mandates. Recognizing that the choice between cleared and bilateral trading is driven by netting, and that netting redistributes risk, the question should be whether that redistribution is desirable or not. But that question is almost never asked, let alone answered.

One narrower, more technical aspect of the paper bothered me. G-G introduce the concept of a concentration ratio, which they define as the ratio of a firm’s contribution to the default fund to the firm’s value at risk used to determine the sizing of the default fund. They argue that the default fund under a cover two standard (in which the default fund can absorb the loss arising from the simultaneous defaults of the two members with the largest exposures) is undersized if the concentration ratio is less than one.

I can see their point, but its main effect is to show that the cover two standard is not joined up closely with the true determinants of the risk exposure of the default fund. Consider a CCP with N identical members, where N is large: in this case, the concentration ratio is small. Further, assume that member defaults are independent, and occur with probability p. The loss to the default fund conditional on the default of a given member is X. Then, the expected loss of the default fund is pNX, and under cover two, the size of the fund is 2X.  There will be some value of N such that for a larger number of members, the default fund will be inadequate. Since the concentration ratio varies inversely with N, this is consistent with the G-G argument.
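To make the arithmetic concrete, here is a small calculation under those same stylized (and, as noted below, extreme) assumptions of independent defaults and identical exposures, with an assumed per-member default probability:

```python
# Stylized example from the text: N identical members, independent defaults with
# probability p, each defaulting member imposing a loss X on the default fund.
# Under cover two the fund holds 2X, so it is exhausted if more than 2 default.
from math import comb

def p_fund_exhausted(n: int, p: float) -> float:
    # P(more than 2 of n independent defaults) = 1 - P(0) - P(1) - P(2)
    return 1.0 - sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(3))

p = 0.01  # assumed per-member default probability (illustrative only)
for n in (10, 25, 50, 100, 200):
    print(f"N = {n:3d}: P(losses exceed the cover-two fund) = {p_fund_exhausted(n, p):.3%}")
# The exceedance probability rises with N even though the cover-two fund (2X)
# does not grow, which is the sense in which a low concentration ratio can flag
# an undersized fund.
```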

But this is a straw man argument, as these assumptions are obviously extreme and unrealistic. The default fund’s exposure is driven by the extreme tail of the joint distribution of member losses. What really matters here is tail dependence, which is devilish hard to measure. Cover two essentially assumes a particular form of tail dependence: if the 1st (2nd) largest exposure defaults, so will the 2nd (1st) largest, but it ignores what happens to the remaining members. The assumption of perfect tail dependence between risks 1 and 2 is conservative: ignoring risks 3 through N is not. Where things come out on balance is impossible to determine. Pace G-G, when N is large ignoring 3-to-N is likely very problematic, but whether this results in an undersized default fund depends on whether this effect is more than offset by the extreme assumption of perfect tail dependence between risks 1 and 2.

Without knowing more about the tail dependence structure, it is impossible to play Goldilocks and say that this default fund is too large,  this default fund is too small, and this one is just right by looking at N (or the concentration ratio) alone. But if we could confidently model the tail dependence, we wouldn’t have to use cover two–and we could also determine individual members’ appropriate contributions more exactly than relying on a pro-rata rule (because we could calculate each member’s marginal contribution to the default fund’s risk).

So cover two is really a confession of our ignorance. A case of sizing the default fund based on what we can measure, rather than what we would like to measure, a la the drunk looking for his keys under the lamppost, because the light is better there. Similarly, the concentration ratio is something that can be measured, and does tell us something about whether the default fund is sized correctly, but it doesn’t tell us very much. It is not a sufficient statistic, and may not even be a very revealing one. And how revealing it is may differ substantially between CCPs, because the tail dependence structures of members may vary across them.

In sum, the G-G paper is very careful, and precisely identifies crucial factors that determine the relative private costs of cleared vs. bilateral trading, and how regulations (e.g., capital requirements) affect these costs. But this is only remotely related to the question that we would like to answer, which is what are the social costs of alternative arrangements? The implicit assumption is that the social costs of clearing are lower, and therefore a regulatory structure which favors bilateral trading is problematic. But this assumes facts not in evidence, and ones that are highly questionable. Further, the paper (inadvertently) points out a troubling reality that should have been more widely recognized long ago (as Mark Roe and I have been arguing for years now): the private benefits of cleared vs. bilateral trading are driven by which offers the greatest netting benefit, which also just so happens to generate the biggest risk transfer to those outside the model. This is a truly systemic effect, but is almost always ignored.

In these models that focus on a subset of the financial system, netting is always a feature. In the financial system at large, it can be a bug. Would that the OFR started to investigate that issue.


August 5, 2016

Bipartisan Stupidity: Restoring Glass-Steagall

Filed under: Economics,Financial crisis,Financial Crisis II,Politics,Regulation — The Professor @ 6:35 pm

Both parties officially favor a restoration of Glass-Steagall, the Depression-era banking regulation that persisted until repealed under the Clinton administration in 1999. When both Parties agree on an issue, they are likely wrong, and that is the case here.

The homage paid to Glass-Steagall is totem worship, not sound economic policy. The reasoning appears to be that the banking system was relatively quiescent when Glass-Steagall was in place, and a financial crisis occurred within a decade after its repeal. Ergo, we can avoid financial crises by restoring G-S. This makes as much sense as blaming the tumult of the 60s on auto companies’ elimination of tail fins.

Glass-Steagall had several parts, some of which are still in existence. The centerpiece of the legislation was deposit insurance, which rural and small town banking interests had been pushing for years. Deposit insurance is still with us, and its effects are mixed, at best.

One of the parts of Glass-Steagall that was abolished was its limitation on bank groups: the 1933 Act made it more difficult to form holding companies of multiple banks as a way of circumventing branch banking restrictions that were predominant at the time. This was perverse because (1) the Act was ostensibly intended to prevent banking crises, and (2) the proliferation of unit banks due to restrictions on branch banking was one of the most important causes of the banking crisis that ushered in the Great Depression.

The contrast between the experiences of Canada and the United States is illuminating in this regard. Both countries were subjected to a huge adverse economic shock, but Canada’s banking system, which was dominated by a handful of banks that operated branches throughout the country, survived, whereas the fragmented US banking system collapsed. In the 1930s, too big to fail was less of a problem than too small to survive. The collapse of literally thousands of banks devastated the US economy, and this banking crisis ushered in the Depression proper. Further, the inability of US banks to branch nationally and thereby diversify liquidity risk (as Canada’s banks were able to do) made the system more dependent on the Fed to manage liquidity shocks. That turned out to be a true systemic risk, when the Fed botched the job (as documented by Friedman and Schwartz). When the system is very dependent on one regulatory body, and that body fails, the effect of the failure is systemic.

The vulnerability of small unit banks was again demonstrated in the S&L fiasco of the 1980s (a crisis in which deposit insurance played a part).

So that part of Glass-Steagall should remain dead and buried.

The part of Glass-Steagall that was repealed, and which its worshippers are most intent on restoring, was the separation of securities underwriting from commercial banking and the limiting of banks’ securities holdings to investment grade instruments.

Senator Glass believed that the combination of commercial and investment banking contributed to the 1930s banking crisis. As is the case with many legislators, his fervent beliefs were untainted by actual evidence. The story told at the time (and featured in the Pecora Hearings) was that commercial banks unloaded their bad loans into securities, which they dumped on an unsuspecting investing public unaware that they were buying toxic waste.

There are only two problems with this story. First, even if true, it would mean that banks were able to get bad assets off their balance sheets, which should have made them more stable! Real money investors, rather than leveraged institutions, were wearing the risk, which should have reduced the likelihood of banking crises.

Second, it wasn’t true. Economists (including Kroszner and Rajan) have shown that securities issued by investment banking arms of commercial banks performed as well as those issued by stand-alone investment banks. This is inconsistent with the asymmetric information story.

Now let’s move forward almost 60 years and try to figure out whether the 2008 crisis would have played out much differently had investment banking and commercial banking been kept completely separate. Almost certainly not. First, the institutions in the US that nearly brought down the system were stand-alone investment banks, namely Lehman, Bear Stearns, and Merrill Lynch. The first failed. The other two were absorbed into commercial banks, Bear Stearns by having the Fed take on most of the bad assets, and Merrill in a shotgun wedding that ironically proved to make the acquiring bank–Bank of America–much weaker. Goldman Sachs and Morgan Stanley were in dire straits, and converted into banks so that they could avail themselves of Fed support denied them as investment banks.

The investment banking arms of major commercial banks like JP Morgan did not imperil their existence. Citi may be something of an exception, but earlier crises (e.g., the Latin American debt crisis) proved that Citi was perfectly capable of courting insolvency even as a pure commercial bank in the pre-Glass-Steagall repeal days.

Second, and relatedly, because they could not take deposits, and therefore had to rely on short term hot money for funding, the stand-alone investment banks were extremely vulnerable to funding runs, whereas deposits are a “stickier,” more stable source of funding. We need to find ways to reduce reliance on hot funding, rather than encourage it.

Third, Glass-Steagall restrictions weren’t even relevant for several of the institutions that wreaked the most havoc–Fannie, Freddie, and AIG.

Fourth, insofar as the issue of limitations on the permissible investments of commercial banks is concerned, it was precisely investment grade instruments–AAA and AAA plus, in fact–that got banks and investment banks into trouble. Capital rules treated such instruments favorably, and voila!, massive quantities of these instruments were engineered to meet the resulting demand. The way they were engineered, however, made them reservoirs of wrong way risk that contributed significantly to the 2008 doom loop.

In sum: the banking structures that Glass-Steagall outlawed didn’t contribute to the banking crisis that was the law’s genesis, and weren’t materially important in causing the 2008 crisis. Therefore, advocating a return to Glass-Steagall as a crisis prevention mechanism is wholly misguided. Glass-Steagall restrictions are largely irrelevant to preventing financial crises, and some of their effects–notably, the creation of an investment banking industry largely reliant on hot, short term money for funding–actually make crises more likely.

This is why I say that Glass-Steagall has a totemic quality. The reverence shown it is based on a fondness for the old gods who were worshipped during a time of relative economic quiet (even though that is the product of folk belief, because it ignores the LatAm, S&L, and Asian crises, among others, that occurred from 1933-1999). We had a crisis in 2008 because we abandoned the old gods, Glass and Steagall! If we only bring them back to the public square, good times will return! It is not based on a sober evaluation of history, economics,  or the facts.

An alternative tack is taken by Luigi Zingales. He advocates a return to Glass-Steagall in part based on political economy considerations, namely, that it will increase competition and reduce the political power of large financial institutions. As I argued in response to him over four years ago, these arguments are unpersuasive. I would add another point, motivated by reading Calomiris and Haber’s Fragile by Design: the political economy of a fragmented financial system can lead to disastrous results too. Indeed, the 1930s banking crisis was caused largely by the ubiquity of small unit banks and the failure of the Fed to provide liquidity in a system that was uniquely dependent on this support. Those small banks, as Calomiris and Haber show, used their political power to stymie the development of national branched banks that would have improved systemic stability. The S&L crisis was also stoked by the political power of many small thrifts.*

But regardless, both the Republican and Democratic Parties have now embraced the idea. I don’t sense a zeal in Congress to do so, so perhaps the agreement of the Parties’ platforms on this issue will not result in a restoration of Glass-Steagall. Nonetheless, the widespread fondness for the 83-year-old Act should give pause to those who look to national politicians to adopt wise economic policies. That fondness is grounded in a variety of religious belief, not reality.

*My reading of Calomiris and Haber leads me to the depressing conclusion that the political economy of banking is almost uniformly dysfunctional, at all times and at all places. In part this is because the state looks to the banking system to facilitate fiscal objectives. In part it is because politicians have viewed the banking system as an indirect way of supporting favored domestic constituencies when direct transfers to these constituencies are either politically impossible or constitutionally barred. In part it is because bankers exploit this symbiotic relationship to get political favors: subsidies, restrictions on competition, etc. Even the apparent successes of banking legislation and regulation are more the result of unique political conditions than of economically enlightened legislators. Canada’s banking system, for instance, was not the product of uniquely Canadian economic insight and political rectitude. Instead, it was the result of a political bargain that was driven by uniquely Canadian political factors, most notably the deep divide between English and French Canada. It was a venal and cynical political deal that just happened to have some favorable economic consequences which were not intended and indeed were not necessarily even understood or foreseen by those who drafted the laws.

Viewed in this light, it is not surprising that the housing finance system in the US, which was the primary culprit for the 2008 crisis, has not been altered substantially. It was the product of a particular set of political coalitions that still largely exist.

The history of federal and state banking regulation in the US also should give pause to those who think a minimalist state in a federal system can’t do much harm. Banking regulation in the small government era was hardly ideal.


July 30, 2016

Say “Sayonara” to Destination Clauses, and “Konnichiwa” to LNG Trading

Filed under: Commodities,Derivatives,Economics,Energy,Politics,Regulation — The Professor @ 11:12 am

The LNG market is undergoing a dramatic change: a couple of years ago, I characterized it as “racing to an inflection point.” The gas glut that has resulted from slow demand growth and the activation of major Australian and US production capacity has not just weighed on prices, but has undermined the contractual structures that underpinned the industry from its beginnings in the mid-1960s: oil linked pricing in long term contracts; take-or-pay arrangements; and destination clauses. Oil linkage was akin to the drunk looking for his keys under the lamppost: the light was good there, but in recent years in particular oil and gas prices have become de-linked, meaning that the light shines in the wrong place. Take-or-pay clauses make sense as a way of addressing opportunism problems that arise in the presence of long-lived, specific assets, but the development of a more liquid short-term trading market reduces asset specificity. Destination clauses were a way that sellers with market power could support price discrimination (by preventing low-price buyers from reselling to those willing to pay higher prices), but the proliferation of new sellers has undermined that market power.

Furthermore, the glut of gas has undermined seller market and bargaining power, and buyers are looking to renegotiate deals done when market conditions were different. They are enlisting the help of regulators, and in Japan (the largest LNG purchaser), their call is being answered. Japan’s antitrust authorities are investigating whether the destination clauses violate fair trade laws, and the likely outcome is that these clauses will be retroactively eliminated, or that sellers will “voluntarily” remove them to preempt antitrust action.

It’s not as if the economics of these clauses have changed overnight: it’s that the changes in market fundamentals have also affected the political economy that drives antitrust enforcement. As contract and spot prices have diverged, and as the pattern of gas consumption and production has diverged from what existed at the time the contracts were formed, the deadweight costs of the clauses have increased, and these costs have fallen heavily on buyers. In a classic illustration of Peltzman-Becker-Stigler theories of regulation, regulators are responding to these efficiency and distributive changes by intervening to challenge contracts that they didn’t object to when conditions were different.

This development will accelerate the process that I wrote about in 2014. More cargoes will be looking for new homes, because the original buyers overbought, and this reallocation will spur short-term trading. This exogenous shock to short term trading will increase market liquidity and the reliability of short term/spot prices, which will spur more short term trading and hasten the demise of oil linking. The virtuous liquidity cycle was already underway as a result of the gas glut, and the emergence of the US as a supplier, but the elimination of destination clauses in legacy Japanese contracts will provide a huge boost to this cycle.

The LNG market may never look exactly like the oil market, but it is becoming more similar all the time. The intervention of Japanese regulators to strike down another barbarous relic of an earlier age will only expedite that process, and substantially so.


July 23, 2016

The Medium is NOT the Message: Hillary’s Scheming Is

Filed under: Politics,Regulation — The Professor @ 12:35 pm

Wikileaks released over 20,000 documents from the Democratic National Committee. As one would expect when such a rock is turned over, this exposed a lot of disgusting wriggling creatures.

Yes, there is a lot of traffic regarding Trump. But the most damning material relates to the fact that the DNC was/is in the tank for Hillary, and schemed continuously and extensively to undermine Bernie Sanders.

The corruption of Hillary and the DNC is hardly surprising. It is her–and their–DNA. But it is illuminating to actually witness evidence of the machinations of this crowd.

One of the more fascinating aspects of this is the reaction of those who are at pains to ignore the content of the emails, and focus on Russia’s supposed  responsibility for the leak. Just a cursory scan of Twitter and the Internet revealed a disparate and rather motley cast of characters pushing this story, including John Schindler (status of pants unknown), BuzzFeed’s Miriam Elder, neocon thinktanker James Kirchick, and Gawker.

To some it is axiomatic. Wikileaks=Russia. At least Kirchick felt obligated to come up with a more elaborate theory. Putin wants Trump to win, and the leaked emails will enrage the Bernie supporters who are also Wikileaks and RT aficionados. These disaffected Berners will either not vote or will go to Trump.

Whatever. In these situations, ALWAYS use Occam’s Razor, and that cuts against such a baroque theory. The far more parsimonious explanation is that an outraged Bernie supporter in the DNC (you don’t think there are Feel the Berners working as IT geeks at DNC?), or an outraged Bernie supporter with hacking skillz, did it. Come on. Look around. A lot of hardcore lefties are outraged at Hillary’s and the DNC’s underhanded and dirty treatment of their guy. That’s a much more straightforward explanation than Putin Did It!

There are other things that cut against the Putin theory. The reflexive attribution of Russian control to anything coming out of Wikileaks undermines the impact of the leak. If the Russians want to hurt Hillary, they would want to use an outlet that is not widely associated with them, if only to deprive Hillary and her flying monkeys and her tribe of acolytes of a way to discredit the leak–which is exactly what they are doing. The Russians aren’t stupid. They wouldn’t rely on an outlet that could be discredited precisely because of its alleged connection to them when there are many other ways of releasing the information. It would be in their interest to use a cutout that is not associated with them.

Further, if Russian hacking is so powerful (and I agree that it is), the DNC emails would not be the most damaging material. Hillary’s server material and Clinton Foundation emails would be far more damning.

As for Schindler’s argument that (unproven and implausible) Russian interference in US elections is beyond the pale: even if Russia is involved, influence by revealing facts is a different thing altogether from attempts to influence by manipulation, lies, disinformation, propaganda, or coercion. What the leak reveals is that the DNC actively manipulated the US primary elections in order to benefit Hillary: that kind of influence is more malign than influencing by making that fact known. Keeping the DNC’s and Hillary’s machinations secret would also influence the upcoming presidential election. It’s better that our elections are influenced by more facts rather than fewer, and to argue that these facts should be ignored because of their (alleged) provenance is to commit two logical fallacies: ad hominem argument (reasoning/facts are judged based on the source) and appeal to motive (arguments/facts are judged based not on their logic/truth, but on the motive of the party making the argument/presenting the facts).

The irony–and hypocrisy–of those rushing to pin this on Russia in order to distract attention from the content is also remarkable. Some (like Miriam Elder) have been big Wikileaks and Bradley Manning supporters in the past. Funny how alleged Russian manipulation of Wikileaks escaped their attention when Assange was leaking things that hurt their political opponents, but all of a sudden becomes THE STORY when one of theirs is targeted.

But the irony and hypocrisy don’t stop there. The DNC emails reveal that the party used the very tactics that today’s critics of the data dump have assailed as Russian in the past: paying people to troll political opponents and their supporters on Twitter and elsewhere, and using employees to participate in Astroturf “demonstrations.”

And there’s more! The Attack the Messenger strategy is exactly the one that the Kremlin has employed in response to leaks about it. Putin’s spokesman Peskov tried to discredit the Panama Papers by claiming that they were a CIA information operation. Those attacking Wikileaks today went ballistic. How are they any different?

No. The medium is not the message, and attempts to make it so are discreditable and fallacious ways to distract attention from the real message in the DNC emails: namely, that the party, and its standard bearer, are corrupt, unethical slugs who have rigged the nomination process to save a wretched candidate who couldn’t win fair-and-square despite her huge advantages. Regardless of who turned over the rock to reveal that, it’s a good thing that the world can see them for what they are.

For All You Pigeons: Musk Has Announced Master Plan II

Filed under: Climate Change,Commodities,Economics,Energy,Politics,Regulation — The Professor @ 11:29 am

Elon Musk just announced his “Master Plan, Part Deux,” AKA boob bait for geeks and posers.

It is just more visionary gasbaggery, and comes at a time when Musk is facing significant headwinds: there is a connection here. What headwinds? The proposed Tesla acquisition of SolarCity was not greeted, shall we say, with universal and rapturous applause. To the contrary, the reaction was overwhelmingly negative, sometimes extremely so (present company included), and the proposed tie-up gave even some fanboyz cause to pause. Production problems continue; Tesla ended the resale price guarantee on the Model S (which strongly suggests financial strains); and the company has cut the price on the Model X SUV in the face of lackluster sales. But the biggest setback was the death of a Tesla driver while he was using the “Autopilot” feature, and the SEC’s announcement of an investigation into whether Tesla violated disclosure regulations by keeping the accident quiet until after it had completed its $1.6 billion secondary offering.

It is not a coincidence, comrades, that Musk tweeted that he was thinking of announcing his new “Master Plan” a few hours before the SEC made its announcement. Like all good con artists, Musk needed to distract from the impending bad news.

And that’s the reason for Master Plan II overall. All cons eventually produce cognitive dissonance in the pigeons, when reality clashes with the grandiose promises that the con man had made before. The typical way that the con artist responds is to entrance the pigeons with even more grandiose promises of future glory and riches. If that’s not what Elon is doing here, he’s giving a damn good impression of it.

All I can say is that if you are fool enough to fall for this, you deserve to be suckered, and look elsewhere for sympathy. Look here, and expect this.

As for the “Master Plan” itself, it makes plain that Musk fails to understand some fundamental economic principles that have been recognized since Adam Smith: specialization, division of labor, and gains from trade among specialists, most notably. A guy whose company cannot deliver on crucial aspects of Master Plan I, which Musk says “wasn’t all that complicated” (most notably, it cannot solve production problems in a narrow line of vehicles), now says that his company will produce every type of vehicle. A guy whose promises about self-driving technology are under tremendous scrutiny promises vast fleets of autonomous vehicles. A guy whose company burns cash like crazy and is now under serious financial strain (with indications that its current capital plans are unaffordable) provides no detail on how this grandiose expansion is going to be financed.

Further, Musk provides no reason to believe that even if each of the pieces of his vision for electric automobiles and autonomous vehicles is eventually realized, it would be efficient for a single company to do all of it. The purported production synergies between electricity generation (via solar), storage, and consumption (in the form of electric automobiles) are particularly unpersuasive.

But reality and economics aren’t the point. Keeping the pigeons’ dreams alive and fighting cognitive dissonance are.

As for the SEC investigation, my initial inclination was to say “it’s about time!” But the Autopilot accident silence is the least of Musk’s disclosure sins. He has a habit of making forward-looking statements on Twitter and elsewhere that almost never pan out. The company’s accounting is a nightmare. I cannot think of another CEO who could get away with, and has gotten away with, such conduct without attracting intense SEC scrutiny.

But Elon is a government golden boy, isn’t he? My interest in him started because he was–and is–a master rent seeker who is the beneficiary of massive government largesse (without which Tesla and SolarCity would have cratered long ago). In many ways, governments–notably the US government and the State of California–are his biggest pigeons.

And rather than ending, the government gravy train looks set to continue. Last week the White House announced that the government will provide $4.5 billion in loan guarantees for investments in electric vehicle charging stations. (If you can read the first paragraph of that statement without puking, you have a stronger stomach than I.) Now Tesla will not be the only beneficiary of this–it is a subsidy to all companies with electric vehicle plans–but it is one of the largest, and one of the neediest. One of Elon’s faded promises was to create a vast network of charging stations stretching from sea to sea. Per usual, the plan was announced with great fanfare, but delivery has fallen well short. Also per usual, it takes forensic sleuthing worthy of Sherlock Holmes to figure out exactly how many stations have been rolled out and how many are in the works.

The rapid spread of the evil internal combustion engine was not impeded by a lack of gas stations: even in a much more primitive economy and a much more primitive financial system, gasoline retailing and wholesaling grew in parallel with the production of autos without government subsidy or central planning. Oil companies saw a profitable investment opportunity, and jumped on it.

Further, even if one argues that there are coordination problems and externalities impeding the expansion of charging networks (which I seriously doubt, but entertain to show that even this would not necessitate subsidies), these can be addressed by private contract without subsidy. For instance, electric car producers could create a joint venture to invest in charging stations. To the extent government has a role, it would be to take a rational approach to the antitrust aspects of such a venture.

So yet again, governments help enable Elon’s con. How long can it go on? With the support of government, and credulous investors, quite a while. But cracks are beginning to show, and it is precisely to paper over those cracks that Musk announced his new Master Plan.


July 17, 2016

Antitrust to Attack Inequality? Fuggedaboutit: It’s Not Where the Money Is

Filed under: Economics,Politics,Regulation — The Professor @ 12:09 pm

There is a boomlet in economics and legal scholarship suggesting that increased market power has contributed to income inequality, and that this can be addressed through more aggressive antitrust enforcement. I find the diagnosis less than compelling, and the proposed treatment even less so.

A recent report by the President’s Council of Economic Advisors lays out a case that there is more concentration in the US economy, and insinuates that this has led to greater market power. The broad statistic cited in the report is the increase in the share of revenue earned by the top 50 firms in broad industry segments. This is almost comical. Fifty firms? Really? Also, a Herfindahl-Hirschman Index would be more appropriate. Furthermore, the industry sectors are broad and correspond not at all to relevant markets–which is the appropriate standard (and the one embedded in antitrust law) for evaluating concentration and competition.
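To make the contrast concrete, here is a minimal sketch, using made-up market shares (no real industry data), of why a “top 50 firms” revenue share tells you almost nothing that an HHI would capture:

```python
# Illustrative only: hypothetical market shares, not data on any real industry.
# HHI = sum of squared market shares (in percent), so it runs from near 0
# (atomistic competition) to 10,000 (pure monopoly).

def hhi(shares_pct):
    """Herfindahl-Hirschman Index from market shares expressed in percent."""
    return sum(s ** 2 for s in shares_pct)

# Two hypothetical "industries" with identical top-50 revenue shares:
fragmented = [2.0] * 50            # 50 equal firms -> top-50 share = 100%
dominated = [51.0] + [1.0] * 49    # one 51% firm plus a fringe -> also 100%

print(hhi(fragmented))  # 200.0  -> unconcentrated under the DOJ/FTC guidelines
print(hhi(dominated))   # 2650.0 -> highly concentrated
```

The top-50 share is identical in both toy cases; the HHI, the measure actually used in merger analysis, differs by more than a factor of ten.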

The report then mentions a few specific industries, namely hospitals and wireless carriers, in which HHIs have increased. Looking at a few industries is hardly a systematic approach.

Airlines is another industry that is widely cited as having experienced greater concentration, and in which prices have increased with concentration. Given that a major driver of consolidation has been the bankruptcy or financial distress of major carriers, and that the industry’s distinctive cost characteristics (namely huge operational leverage and a network structure) create substantial scale and network economies, it’s not at all clear whether the previous lower prices were long run equilibrium prices. So some of the price increases may have pushed fares to supracompetitive levels, but some may just reflect the fact that prices before were unsustainably low.

Looking over the discussion of these issues gives me flashbacks. There is a paleo industrial organization (“PalIO”?) feel to it. It harkens back to the ancient Structure-Conduct-Performance paradigm that was a thing in the 50s-70s. Implicit in the current discussion is the old SCP (LOL–that’s the closest I come to being associated with this view) idea that there is a causal connection between industry structure and market power. More concentrated markets are less competitive, and firms in such more concentrated, less competitive markets are more profitable. Those arguing that greater concentration increases income inequality go from this belief to their conclusion by claiming that the increased market power rents flow disproportionately to higher income/wealth individuals.

The PalIO view was challenged, and largely demolished, in the 70s and 80s, primarily by the Chicago School, which demonstrated alternative non-market power mechanisms that could give rise to correlations (in the cross-section and time series) between concentration and profitability. For instance, firms experiencing favorable “technology” shocks (which could encompass product or process innovations, organizational innovations, or superior management) will expand at the expense of firms not experiencing such shocks, and will be inframarginal and more profitable.

This alternative view forces one to ask why concentration has changed. Implicit in the position of those advocating more aggressive antitrust enforcement is the belief that firms have merged to exploit market power, and that lax antitrust enforcement has facilitated this.

But there are plausibly very different drivers of increased concentration. One is network and information effects, which tend to create economies of scale and result in larger firms and more concentrated markets. Yes, these effects may also give the dominant firms that benefit from the network/information economies market power, and they may charge supracompetitive prices, but these kinds of industries and firms pose thorny challenges to antitrust. First, since being a monopoly per se is not an antitrust violation, a Google can become dominant without merger or collusion, leaving antitrust authorities to nip at the margins (e.g., attacking alleged favoritism in searches). Second, conventional antitrust remedies, such as breaking up dominant firms, may reduce market power, but sacrifice scale efficiencies: this is especially likely to be true in network/information industries.

The CEA report provides some indirect evidence of this. It notes that the distribution of firm profits has become notably more skewed in recent years. If you look at the chart, you will notice that the return on invested capital excluding goodwill for the 90th percentile of firms shot up starting in the late-90s. This is exactly the time the Internet economy took off. This resulted in the rise of some dominant firms with relatively low investments in physical capital. More concentration, more profitability, but driven by a technological shock rather than merger for monopoly.

Another plausible driver of increased concentration in some markets is regulation. Hospitals are often cited as examples of how lax merger policy has led to increased concentration and increased prices. But given the dominant role of the government as a purchaser of hospital services and a regulator of medical markets, whether merger is in part an economizing response to dealing with a dominant customer deserves some attention.

Another industry that has become more concentrated is banking. The implicit and explicit government support for too big to fail enterprises has obviously played a role in this. Furthermore, extensive government regulation of banking, especially post-Crisis, imposes substantial fixed costs on banks. These fixed costs create scale economies that lead to greater scale and concentration. Further, regulation can also serve as an entry barrier.
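To see the mechanics, here is a toy calculation (with assumed numbers, not actual bank data) of how a fixed compliance cost becomes a scale advantage:

```python
# Toy illustration with hypothetical numbers: a fixed regulatory compliance
# cost is spread over more output at a bigger bank, lowering its average cost.

fixed_compliance_cost = 50_000_000   # assumed annual compliance cost, same for any bank
marginal_cost_per_unit = 10.0        # assumed constant per-unit cost of service

def average_cost(units_of_service):
    return marginal_cost_per_unit + fixed_compliance_cost / units_of_service

print(average_cost(1_000_000))    # small bank:  60.0 per unit
print(average_cost(50_000_000))   # large bank:  11.0 per unit
```

The bigger the fixed cost, the bigger the cost gap between large and small players, and the stronger the pressure toward scale, consolidation, and exit by the smaller firms.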

The fixed cost of regulation (interpreted broadly as the cost of responding to government intervention) is a ubiquitous phenomenon. No discussion of the rise of concentration is complete without it, yet it is largely ignored, despite the fact that it has long been known that rent seeking firms secure regulations for their private benefit, and to the detriment of competition.

The CEA study mentions increased concentration in the railroad industry since the mid-80s. But this is another industry that is subject to substantial network economies, and the rise in concentration from that date in particular reflects an artifact of regulation: before the Staggers Act deregulated rail in 1980, that industry was inefficiently fragmented due to regulation. It was also a financial basket case. Much of the increased concentration reflects an efficiency-enhancing rationalization of an industry that was almost wrecked by regulation. Some segments of the rail market have likely seen increased market power, but most segments are subject to competition from non-rail transport (e.g., trucking, ocean shipping, or even pipelines that permit natural gas to compete with coal).

Another example of how regulation can increase concentration and reduce competition in relevant markets: EPA regulation of gasoline. The intricate regional and seasonal variations in gasoline blend standards mean that there is not a single market for gasoline in the United States: fuel that meets EPA standards for one market at one time of year can’t be supplied to another market at another time because it doesn’t meet the requirements there and then. This creates balkanized refinery markets, which, given the large scale economies of refining, tend to be highly concentrated.

Reviewing this makes plain that as in so many things, what we are seeing in the advocacy of more aggressive antitrust is the prescription of treatments based on a woefully incomplete understanding of causes.

There is also an element of political trendiness here. Inequality is a major subject of debate at present, and everyone has their favorite diagnosis and preferred treatment. There is more than a whiff of using the focus on inequality to advance other agendas.

Even if one grants the underlying PalIO concentration-monopoly profit premise, however, antitrust is likely to be an extremely ineffectual means of reducing income inequality.

For one thing, there is no good evidence on how market power rents are distributed. The presumption is that they go to CEOs and shareholders. The evidence behind the first presumption is weak, at best, and some evidence cuts the other way. Moreover, it is also the case that some market power rents are not distributed to shareholders, but accrue to other stakeholders within firms, including labor.

Moreover, the numbers just don’t work out. In 2015, after-tax corporate income represented only about 10 percent of US national income. Market power rents represented only a fraction of those corporate profits. Market power rents that could be affected by more rigorous antitrust enforcement represented only a fraction–and likely a small fraction–of total corporate profits. If we are talking about even 1 percent of US national income whose distribution could be affected by antitrust enforcement, I would be amazed. I wouldn’t be surprised if it’s an order of magnitude less than that.
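For anyone who wants to check that back-of-envelope logic, here is the arithmetic with deliberately generous, purely assumed fractions:

```python
# Back-of-envelope only: the fractions below are assumptions, not estimates.

corporate_share_of_national_income = 0.10  # after-tax corporate income, roughly, 2015
rents_as_share_of_profits = 0.25           # assume a quarter of profits are market power rents
share_reachable_by_antitrust = 0.40        # assume antitrust could touch 40% of those rents

reachable_share_of_national_income = (corporate_share_of_national_income
                                      * rents_as_share_of_profits
                                      * share_reachable_by_antitrust)

print(f"{reachable_share_of_national_income:.1%}")  # 1.0%, even with generous assumptions
```

Halve either of the last two assumed fractions and you are quickly down to a few tenths of a percent of national income.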

With respect to how much of corporate income could be affected by antitrust policy, it’s worthwhile to consider a point mentioned earlier, and which the CEA raised: the distribution of corporate profits is very skewed. Further, if you look at the data more closely, very little of the big corporate profits could be affected by more rigorous antitrust–in particular, more aggressive approaches to mergers.

In 2015, 28 firms earned 50 percent of the earnings of all S&P500 firms. Apple alone earned 6.7 percent of the collective earnings of the S&P500. Many of the other firms represented in this list (Google, Microsoft, Oracle, Intel) are firms that have grown from network effects or intellectual capital rather than through merger for market power. They became big in sectors where the competitive process favors winner-take-most. It’s also hard to see how antitrust matters for other firms, Walt Disney for instance.

Only three industries have multiple firms on the list. Banking is one, and I’ve already discussed that: yes, it has grown through merger, but regulation and government are major drivers of that. There have also been efficiency gains from consolidating an industry that regulation historically made horrifically, inefficiently fragmented, though where current scale stands relative to efficient scale is a matter of intense debate.

Another is airlines. Again, given the route network-driven scale economies, and the previous financial travails of the industry, it’s not clear how much in the way of market power rents the industry is generating, and whether antitrust could reduce those rents without imposing substantial inefficiencies.

Automobiles is on the list. But the automobile industry is now far less concentrated than it used to be in the days of the Big Three, and highly competitive. Oil is represented on the list by one company: ExxonMobil. Crude and gas production is not highly concentrated, when one looks at the relevant market–which is the world. This is another industry which has seen a decline in dominance by major firms over the years.

Looking over this list, it is difficult to find large dollars that could even potentially be redistributed via antitrust. And given that this list represents a very large fraction of corporate profits, the potential impact of antitrust on income distribution is likely to be trivial.

(As an exercise for interested readers: calculate industry profits by a fairly granular level of disaggregation by NAICS code, and see which ones have become more concentrated as a result of merger in recent years.)
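For readers inclined to take that exercise up, a minimal pandas sketch of the mechanics follows. The file name, column names, and the firm-level data it assumes are hypothetical placeholders; and note that attributing any concentration change to merger, rather than organic growth, would require matching in deal data as well.

```python
# Sketch of the suggested exercise, assuming a hypothetical firm-level file
# with columns: naics (granular code), year, firm, revenue, profit.
import pandas as pd

df = pd.read_csv("firm_financials.csv")  # placeholder file name

def hhi(revenues):
    shares = 100 * revenues / revenues.sum()
    return (shares ** 2).sum()

by_industry_year = (
    df.groupby(["naics", "year"])
      .agg(total_profit=("profit", "sum"), hhi=("revenue", hhi))
      .reset_index()
)

# Change in concentration between the first and last year in the sample.
first, last = by_industry_year["year"].min(), by_industry_year["year"].max()
wide = by_industry_year.pivot(index="naics", columns="year", values="hhi")
wide["hhi_change"] = wide[last] - wide[first]

# Latest-year profits, to see whether the industries that got more
# concentrated are the ones where the big dollars actually are.
profits_latest = (by_industry_year[by_industry_year["year"] == last]
                  .set_index("naics")["total_profit"])

candidates = (wide.join(profits_latest)
                  .sort_values("total_profit", ascending=False))
print(candidates[candidates["hhi_change"] > 0].head(20))
```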

In sum, if you want to ameliorate inequality, I would put antitrust on the bottom of your list. It’s not where the money is, because the kind of market power that antitrust could even conceivably address accounts for a small portion of profits, which in turn account for a modest percentage of national income. Market power changes in many profitable industries have almost certainly been driven by major technological changes, and antitrust could reduce them only by gutting the efficiency gains produced by these changes.
