Streetwise Professor

July 29, 2021

Timmy!’s Back!

Former Treasury Secretary Timothy Geithner–better known as Timmy! to loooooongtime readers of this blog–is back, this time as Chair of the Group of 30 Working Group on Treasury Market Liquidity. The Working Group was tasked with addressing periodic seizures in the Treasury securities market, most notoriously during the onset of the Covid crisis in March 2020–something I wrote about here.

This is a tale of two reports in one: the diagnosis is spot on, the prescription pathetic.

The report recognizes that

the root cause of the increasing frequency of episodes of Treasury market dysfunction under stress is that the aggregate amount of capital allocated to market-making by bank-affiliated dealers has not kept pace with the very rapid growth of marketable Treasury debt outstanding

In other words, the supply of bank market-making services has declined, and the demand for market-making services has gone up. What could go wrong, right?

Moreover, the report recognizes the supply side root cause of the root cause: post-Financial Crisis regulations, and in particular the Supplemental Leverage Ratio, or SLR:

Post-global financial crisis reforms have ensured that banks have adequate capital, even under stress, but certain provisions may be discouraging market-making in U.S. Treasury securities and Treasury repos, both in normal times and especially under stress. The most significant of those provisions is the Basel III leverage ratio, which in the United States is called the Supplementary Leverage Ratio (SLR) because all banks in the United States (not just internationally active banks) are subject to an additional “Tier 1” leverage ratio.

Obviously fiscal diarrhea has caused a flood of Treasury issuance that from time to time clogs the Treasury market plumbing, but that’s not something the plumber can fix. The plumber can put in bigger pipes, so of course the report recommends wholesale changes in the constraints on market making, the SLR in particular, right? Right?

Not really. Recommendation 6–SIX, mind you–is “think about doing something about SLR sometime”:

Banking regulators should review how market intermediation is treated in existing regulation, with a view to identifying provisions that could be modified to avoid disincentivizing market intermediation, without weakening overall resilience of the banking system. In particular, U.S. banking regulators should take steps to ensure that risk-insensitive leverage ratios function as backstops to risk-based capital requirements rather than constraints that bind frequently.

Wow. That’s sure a stirring call to action! Review with a view to. Like Scarlett O’Hara.

Rather than addressing either of what it itself acknowledges are the two primary problems, the report recommends . . . wait for it . . . more central clearing of the Treasury market. Timothy Geithner, man with a hammer, looking for nails.

Clearing cash Treasuries will almost certainly have a trivial effect on market making capacity. The settlement cycle in Treasuries is already one day–something that is aspirational (don’t ask me why) in the stock market. That already significantly limits the counterparty credit risk in the market (and it’s not clear that counterparty credit risk is a serious impediment to market making, especially since it existed before the recent dislocations in the Treasury market, and therefore is unlikely to have been a major contributor to them).

The report recognizes this: “Counterparty credit risks on trades in U.S. Treasury securities are not as large as those in other U.S. financial markets, because the contractual settlement cycle for U.S. Treasury securities is shorter (usually one day) and Treasury security prices generally are less volatile than other securities prices.” Geithner (and most of the rest of the policymaking establishment) were wrong about clearing being a panacea in the swap markets: it’s far less likely to make a material difference in the market for cash Treasuries.

The failure to learn over the past decade plus is clear (no pun intended!) from the report’s list of supposed benefits of clearing, which include

reduction of counterparty credit and liquidity risks through netting of counterparty exposures and application of margin requirements and other risk mitigants, the creation of additional market-making capacity at all dealers as a result of recognition of the reduction of exposures achieved through multilateral netting

As I wrote extensively in 2008 and the years following, netting does not reduce counterparty credit risk or exposures: it reallocates them. Moreover, as I’ve also been on about for more than a fifth of my adult life (and I’m not young!), “margin requirements” create their own problems. In particular, as the report notes, the March 2020 Treasury crisis, like most crises, sparked a liquidity crisis–liquidity not in the sense of the depth of Treasury markets (though that was an issue) but in the sense of a large increase in the demand for cash. Margin requirements would likely exacerbate that, although the incremental effect is hard to determine given that existing bilateral exposures may already be margined (something the report does not discuss). As seen in the GameStop fiasco, a big increase in margins, driven in part by the central counterparty (ironically the DTCC, parent of the FICC, which the report wants to be the clearinghouse for its expanded clearing of Treasuries), was a major cause of disruptions. For the report to ignore this issue altogether is inexcusable.
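To make the reallocation point concrete, here is a minimal numerical sketch in Python. The figures (a dealer owing 100 to derivatives counterparties, owed 80 by them, owing 50 to an outside bondholder, with nothing else in the estate) are invented for illustration, not drawn from the report.

```python
# Stylized illustration (hypothetical numbers) of the claim that netting
# reallocates rather than reduces counterparty credit losses.

def losses(netting: bool):
    derivs_payable, derivs_receivable, outside_debt = 100.0, 80.0, 50.0
    if netting:
        # Close-out netting: counterparties set off the 80 receivable
        # against the 100 payable, leaving a 20 net claim; the estate
        # keeps nothing from the receivable.
        deriv_claim, estate_assets = derivs_payable - derivs_receivable, 0.0
    else:
        # No netting: the estate collects the 80 receivable and all
        # creditors share it pro rata.
        deriv_claim, estate_assets = derivs_payable, derivs_receivable
    total_claims = deriv_claim + outside_debt
    recovery_rate = estate_assets / total_claims
    return deriv_claim * (1 - recovery_rate), outside_debt * (1 - recovery_rate)

for netting in (False, True):
    d, o = losses(netting)
    print(f"netting={netting}: counterparty loss={d:.1f}, "
          f"outside creditor loss={o:.1f}, total={d + o:.1f}")
```

Total losses are 70 in either case; netting cuts the derivatives counterparties’ loss (from roughly 47 to 20) by pushing the difference onto the outside creditor (whose loss rises from roughly 23 to 50). The risk is moved, not destroyed.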

Relatedly, the report touches only briefly on the role of basis trades in the events of March 2020. As I showed in the article linked above, these were a major contributor to the dislocations. And why? Precisely because of margin calls on futures.

Thus, the report fails to analyze completely its main recommendation, and in fact its recommendation is based on not just an incomplete but a faulty understanding of the implications of clearing (notably its mistaken beliefs about the benefits of netting). That is, just like in the aftermath of 2008, supposed solutions to systemic risk are based on decidedly non-systemic analyses.

Instead, shrinking from the core issue, the report focuses on a peripheral issue, and does not analyze that properly. Clearing! Yeah, that’s the ticket! Good for whatever ails ya!

In sum, meet the new Timmy! Same as the old Timmy!


June 29, 2021

Betting on Time Inconsistency: Glencore Will Profit When Reality Intrudes on Renewables Reveries

Filed under: Climate Change,Economics,Energy,Politics,Regulation — cpirrong @ 6:01 pm

In his swan song at Glencore, the soon-to-retire Ivan Glasenberg doubled down on coal:

In what’s likely to be the final deal announced by outgoing Chief Executive Officer Ivan Glasenberg, Glencore agreed to buy stakes owned by BHP Group and Anglo American Plc in the Cerrejon thermal coal mine for about $588 million, subject to purchase price adjustments.

Glencore is filling a void left by two mining giants:

The sale completes Anglo’s retreat from thermal coal and extends similar efforts by BHP, amid investor pressure. However, Glencore has committed to run its coal mines for another 30 years, potentially allowing it to profit as rivals retreat. It’s already the biggest shipper of the fuel, and gaining full control of Cerrejon gives the company even more exposure just as prices trade at the highest level in years, buoyed by strong demand as the global economy rebounds.

In my opinion, this is a very canny contrarian bet. The panicked flight from coal by the Anglos and BHPs and others of the world is directly attributable to political and policy pressure. Hydrocarbons bad. Renewables good. Hydrocarbon companies are evil. You will be punished, you carbon-spewing bastards! Your CEOs will be snubbed by righteous people. Oh Noes!

But these policies are predicated on a collective delusion about renewables. Bloomberg can preach all it wants about how renewables are as efficient as conventional generation, but the fact is that dispatchable, reliable, continuous conventional generation, producing power from cheaply stored chemical energy, will remain much cheaper than non-dispatchable, intermittent, unreliable renewables that have to rely on expensive battery storage. Bloomberg’s “levelized cost” metric is total bullshit because it leaves out all of the costs associated with reliability, transmission, and intermittency–details, details!
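To see why the omission matters, here is a back-of-the-envelope levelized-cost sketch. Every number in it (capital costs, capacity factors, the $30/MWh firming adder) is a hypothetical placeholder, not Bloomberg’s or anyone else’s data; the point is only that the headline LCOE excludes the cost of making intermittent output dependable.

```python
# Back-of-the-envelope sketch (all numbers hypothetical) of why a bare
# "levelized cost" comparison can mislead: it prices energy produced,
# not dependable capacity delivered.

def lcoe(capex_per_kw, fixed_om_per_kw_yr, capacity_factor,
         crf=0.08, variable_cost_per_mwh=0.0):
    """Levelized cost in $/MWh: annualized fixed costs over annual output."""
    annual_fixed = capex_per_kw * crf + fixed_om_per_kw_yr      # $/kW-yr
    annual_mwh_per_kw = 8760 * capacity_factor / 1000           # MWh per kW-yr
    return annual_fixed / annual_mwh_per_kw + variable_cost_per_mwh

gas = lcoe(capex_per_kw=1000, fixed_om_per_kw_yr=15,
           capacity_factor=0.60, variable_cost_per_mwh=25)
wind = lcoe(capex_per_kw=1100, fixed_om_per_kw_yr=30, capacity_factor=0.38)

# Hypothetical adder for firming an intermittent resource: storage or
# backup capacity, extra transmission, curtailment. It is excluded from
# the headline LCOE figure.
firming_adder = 30.0   # $/MWh, illustrative
print(f"gas  LCOE ~ ${gas:.0f}/MWh")
print(f"wind LCOE ~ ${wind:.0f}/MWh bare, ~ ${wind + firming_adder:.0f}/MWh firmed")
```

With these made-up inputs the intermittent resource looks cheaper on a bare LCOE basis and dearer once firming is priced in, which is precisely the game the headline metric plays.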

Renewables will never be able to handle current electricity demand at reasonable cost, but policymakers in the grip of the delusion are adding to electricity demand by forcing the electrification of other energy consumption, including transportation and home heating and cooking.

And it is almost certain that Glasenberg recognizes these delusions for what they are, and knows that in five to ten years’ time reality will rear its ugly head–recognition of reality can be postponed, but not forever. And Glasenberg recognizes that when that reckoning comes, and electricity costs spike and reliability plunges, countries around the world will come begging for dependable electricity sources. And thus, they will come begging to Glencore for its coal.

The payoff will be all the bigger because Anglo, BHP, and others will not invest, leaving a capacity void. Price will rise to ration the limited supply.

Current government energy policies around the world are not time consistent. Political coercion to achieve a utopian outcome will result in more costly and less reliable energy that will not be politically sustainable. Ivan Glasenberg recognizes that time inconsistency, and his parting gift to Glencore’s shareholders–and, frankly, to the world, when it comes to its senses–is an investment that will pay off handsomely when reality intrudes on renewables reveries.


June 23, 2021

I Never Did Acid in the 70s, But I’m Experiencing Flashbacks Anyways

Filed under: Commodities,Economics,Financial crisis,Politics,Regulation — cpirrong @ 7:18 pm

I grew up in the 70s. I never did acid then (or ever!), but man am I experiencing flashbacks. Feckless progressive Democrat presidents. (Though Carter, while an idiot, was at least compos mentis, which is more than can be said of Señor Senile Joe Biden.) Crime. (I’m betting on a comeback of the Charles Bronson revenge and Clint Eastwood Dirty Harry genres.) All in all, the 70s sucked, and I am not nostalgically hoping for a reprise–I’m dreading it actually.

One of the things that sucked worst was inflation. The 1970s were the inflation decade (although it peaked in 1980-1981). In recent months, the price level measured by the CPI, PPI, and GDP deflator has been up substantially. CPI, for example, is up about 4.5 percent on a year-on-year basis. This has raised concerns about a return of 70s-style inflation. Are these concerns justified?

The jury is out, but there is reason for concern.

First, it is important to distinguish between one-time changes in the price level and inflation. Inflation is a long-term upward trend in the price level, rather than a single stair-step jump.

The impact of the pandemic (or, more accurately, the draconian policy response to the pandemic) has created the conditions for a one-time step up in the price level. The economic recovery from the pandemic is a positive aggregate demand shock. Moreover, it has occurred against the backdrop of constrained supply conditions that resulted from the pandemic. An outward shift in demand combined with a contraction in supply leads to a higher price level, ceteris paribus.

One would think that these are effectively one-time shocks–hopefully the pandemic is a one-time thing, and therefore the recovery from it is too. Furthermore, supply conditions should ease. (We are already seeing that in some sectors, such as lumber, though not in others, such as semiconductors. Policy, namely paying people not to work in some states, may impede the easing of supply conditions.) Thus, one would expect this to be a one-time, and at least partially transitory, jump in the price level rather than inflation qua inflation.
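The distinction is easy to see in a toy example (illustrative index paths, not forecasts): a one-time 5 percent step and a sustained 5 percent trend look identical in the year-over-year numbers at first, then diverge.

```python
# Toy illustration: a one-time 5% step in the price level vs. a sustained
# 5%/yr inflation trend. Year-over-year "inflation" looks identical in the
# first year, then the step's measured rate falls back to zero.

step  = [100, 105, 105, 105, 105]          # one-time jump, then flat
trend = [100 * 1.05**t for t in range(5)]  # 5% per year, compounding

def yoy(path):
    return [round(100 * (p1 / p0 - 1), 1) for p0, p1 in zip(path, path[1:])]

print("step  YoY %:", yoy(step))    # [5.0, 0.0, 0.0, 0.0]
print("trend YoY %:", yoy(trend))   # [5.0, 5.0, 5.0, 5.0]
```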

That said, there are reasons for concern. Most notably, the fiscal diarrhea in the US, and the willingness of the Fed to finance (i.e., monetize) that spending, are freighted with inflationary potential.

In the post-Financial Crisis era, the Fed mitigated the inflationary impact of QE and other expansive monetary policies by paying interest on reserves. So the inflationary threat that I worried about in 2009 (and asked Ben Bernanke about) never materialized. But that’s no reason for complacency. We dodged a bullet once, but that doesn’t mean we will always do so. Massive deficit spending accommodated by the monetary authority is highly likely to result in inflation, sooner or later. (I am inclined to favor Thomas Sargent’s fiscal theory of the price level.)

Part of the reason that inflation didn’t occur post-2008 was that money velocity plunged. Part of this was due to the Fed paying interest on reserves, which led banks to hold them (lend them to the Fed in effect) rather than lend them to private individuals and firms. But expectations, and the self-fulfilling nature thereof with respect to inflation, likely played a role too. In the gloomy aftermath of 2008 people expected low inflation (or even deflation), which made them more willing to hold rather than spend money balances–which results in low inflation, thereby validating the expectations and perpetuating the equilibrium.
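A bit of stylized quantity-equation arithmetic shows how that worked, and why it is fragile. These are round, hypothetical numbers chosen only to make the offset visible, not actual data.

```python
# Stylized quantity-equation arithmetic: P = M * V / Y.
# Round, hypothetical numbers; the point is the offset, not the data.

Y = 100.0            # real output, held fixed
M0, V0 = 100.0, 2.0  # money stock and velocity before the expansion
M1, V1 = 200.0, 1.0  # money doubles, velocity halves (banks sit on reserves)
M2, V2 = 200.0, 1.5  # ...and what happens if velocity later recovers

for M, V in [(M0, V0), (M1, V1), (M2, V2)]:
    print(f"M={M:.0f}, V={V:.1f} -> price level P={M * V / Y:.1f}")
# P stays at 2.0 when the velocity fall offsets the money growth,
# then jumps 50% if expectations shift and velocity rebounds.
```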

But expectations are fickle things, and as a result there can be multiple equilibria. Fed board members have strenuously argued that the recent spurt in prices is a one-time stair step phenomenon, not the harbinger of inflation. But if the spurt results in an upward shift in inflationary expectations by the hoi polloi, people will be less willing to hold money balances at the existing price level, so they will try to reduce (i.e., spend) them, which leads to inflation–thereby validating the expectations.

Thus, it’s not so much what the Fed believes that matters. It’s what you and I and other individuals and firms believe. Combine a negative fiscal picture with a surge in prices and it’s quite possible that inflation expectations will soon no longer be “anchored” at low levels, but will surge higher–with the result that inflation itself is no longer anchored at low levels.

So although I think that the recent surge in the price level is of the one-time variety, that doesn’t mean everyone will think the same way. And if everyone doesn’t think the same way we may see a 70s rerun. The dire fiscal picture contributes to such worries.

When the subject of inflation comes up, as Dr. Commodities I’m often asked whether commodities are a good hedge. Intuitively it makes sense that they should be, but historically, they have not been. Commodity prices are much more volatile than the price level, and not that highly correlated. That is, relative prices move around a lot even when the price level trends upwards.
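The arithmetic behind that is simple. With the minimum-variance hedge ratio, the fraction of the hedged exposure’s variance you can eliminate is the squared correlation, so a volatile but weakly correlated hedge does very little. The correlations below are illustrative, not estimates.

```python
# Why a volatile, weakly correlated asset is a poor inflation hedge:
# with the minimum-variance hedge ratio, the fraction of the exposure's
# variance eliminated is rho**2. Illustrative correlations only.

for rho in (0.9, 0.5, 0.2):
    print(f"correlation {rho:.1f} -> at most {rho**2:.0%} of variance hedged away")
# 0.9 -> 81%, 0.5 -> 25%, 0.2 -> 4%
```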

I think that availability bias is a big reason why people focus on commodity prices–they are readily observable, on a second-by-second basis, because they are actively traded on liquid markets. Other goods and services, not so much. But just because we can see them easily doesn’t mean that they are reliable beacons for the price level overall, or changes therein.

This brings to mind why we should really fear a return of 70s-style inflation (or worse, heaven forfend).

When sitting in (the great) Sherwin Rosen’s Econ 302 course at Chicago on a cold morning in February, 1982, I was startled when Sherwin’s normal rather droning delivery was interrupted by him shouting and pounding his right fist into his left palm: “And that’s the problem with inflation. IT FUCKS UP RELATIVE PRICES!!!!”

Some prices are stickier than others, meaning that inflation pressures can impact some goods and services more and sooner than others–thereby causing changes in relative prices.

This is a bad thing–and why Sherwin dropped the F-bomb about it–because relative prices guide resource allocation. If you fuck up relative prices, as inflation does, you interfere with resource allocation, leading to lower incomes and growth. Inflation has adverse real consequences.

So we should definitely fear an acid flashback to 70s inflation. And although I do not believe the recent surge in prices is a harbinger thereof, I think that there is a material risk that we may all experience such a flashback–even if you didn’t grow up in the 70s.


June 10, 2021

Bad Day At BlackRock?

Filed under: Economics,Financial crisis,Politics,Regulation — cpirrong @ 6:16 pm

There has been something of a kerfuffle recently over the large-scale purchases of single family homes by the likes of BlackRock and other institutional investors like pension funds. The criticism is somewhat redolent of the Occupy days, because it unites many on the left with some on the populist right, like J.D. Vance.

Understanding should come before judgment. So let’s try to figure out what is going on here. I don’t have a definitive answer, but my strong sense is that this phenomenon is ultimately a consequence of the 2008-2009 Financial Crisis, and the various policy responses to it.

One thing that is clear is that the initial foray of institutional investors was a response to the Crisis. And no wonder. Massive numbers of single family homes were in foreclosure, and the biggest fire sale in American real estate history was underway. And in fire sales, those with “dry powder”–cash-rich investors relatively undamaged by the crisis that sparks the sales–go bargain hunting. In 2009-2010, the bargains were in residential real estate, especially single family houses. And the “real money” investors like BlackRock and pension funds were best positioned to grab those bargains.

Here it is almost certain that the activities of BlackRock et al did elevate real estate prices. And a good thing, too, for the problem at the time was not that housing prices were too high, but that they were too low. Without bargain hunters (or vultures, if you wish) housing prices would have been even lower, more homeowners would have been underwater, more of them would have been foreclosed, etc. Of course BlackRock et al were not doing this out of charity, but to make a buck. But they were responding to price signals and their actions almost certainly mitigated a horrible situation.

But as the WSJ article linked above notes, institutional investment in the housing sector has persisted after the fire sales ended–especially in places like Houston, Atlanta, and Nashville. This is characterized as a reach for yield strategy on the part of the institutional investors. The yield on rental property is apparently attractive relative to alternative investments. And no surprise: have you looked at bond yields recently? Like in the last 12 years? Is it any wonder that investors like pension funds (especially government funds that are hugely under water) are desperate for assets that generate a stream of cash flows at attractive rates?

But high yield suggests that prices are low in some sense, rather than high. (Price is in the denominator of the return calculation.) “Bubble” real estate markets are characterized by extremely low rental yields, not high ones.
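The arithmetic is just gross-yield arithmetic. The numbers below are hypothetical, not data on any actual market, but they show what “high yield means low price relative to rent” looks like.

```python
# Gross rental yield = annual rent / purchase price. Hypothetical numbers.
# "Bubble" markets show low yields (high prices relative to rent); the
# markets institutions are reportedly buying into show higher yields.

homes = {
    "hypothetical Sun Belt rental": (24_000, 300_000),        # (annual rent, price)
    "hypothetical coastal 'bubble' market": (36_000, 1_200_000),
}
for name, (rent, price) in homes.items():
    print(f"{name}: gross yield = {rent / price:.1%}")
# 8.0% vs 3.0%: the high-yield market is the "cheap" one in this sense.
```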

Look at this another way. People are choosing to pay rent, rather than buy, make mortgage payments, and forgo the income they could earn on the funds tied up in a down payment. Why? Why are they paying rents that generate a high return for the housing owner, rather than buying homes and capturing that return themselves?

My answers will be somewhat speculative, but for now the question is the important thing. Many individuals are choosing not to buy, and to pay rent instead. The rents that they are willing to pay are driven by the stream of benefits that they get from living in a single family home. Why don’t they outbid BlackRock or some state pension fund and pay a price that capitalizes that stream of benefits?

Note that there are clear advantages to occupiers owning. The Atlantic article linked earlier discusses the frictions associated with renting. Well, renter-landlord relations have been fraught always and everywhere. Rental contracts are not “complete”–they leave a lot of grey areas that give rise to conflict between owner and renter, and to opportunism by both. Those wasteful activities can be eliminated by having those who live in a home own it. That in and of itself should give individuals a bidding advantage over institutions when buying homes. Cut out the middleman and you cut out the transaction costs inherent in the landlord-tenant relationship.

So then what gives? Now for the speculation, which again revolves around the fallout from the Financial Crisis.

First, the leading diagnosis of the cause of the Financial Crisis was that it was too easy to get a mortgage. In response to this, post-Crisis legislation and regulation tightened up the home financing market. A lot. You can argue that the tightening was justified. You can argue that it went too far. But regardless, restrictions on the ability of individuals to finance a home purchase, or regulations that made it more expensive to do this, shifted the balance away from purchasing towards renting.

Indeed, if the likes of Elizabeth Warren were intellectually consistent (yeah I’m a comedian, I know), they should see the increased presence of Wall Street on Elm Street as a good thing, because it means that their endeavors to prevent another housing “bubble” have worked.

Second, the Financial Crisis took a severe toll on the balance sheets and creditworthiness of many individuals. Although these problems have dissipated, they haven’t disappeared. Combined with the more restrictive access to credit, these creditworthiness/balance sheet effects impede the ability of individuals to capture the high returns of home ownership, and they cannot compete on price with institutional investors who do not face such impediments.

Third–and this is perhaps the most speculative point of all–the Financial Crisis and the follow on Foreclosure Crisis arguably had an impact on the preferences of individuals, especially Millennials and Gen-Zs. Post-Crisis home ownership seemed less like a dream–it had a potential dark side. So many in those cohorts prefer to pay rent and give a high return to institutional investors and deal with the hassles of a landlord rather than buy and face the risk of financial ruin.

Fed policy may also play a role. It clearly has depressed returns on conventional fixed income investments–and has done so by design. That has made institutional investors look at non-traditional investments. But Fed policy alone can’t explain why yields on housing investments apparently haven’t fallen to the level of the low yields on bonds. There must be some other factor impeding housing prices from rising enough to reduce the yields that the institutional investors are apparently capturing by buying and renting out single family homes. That brings us back to a search for factors (like those just discussed) that prevent individuals from outbidding institutional investors to capture the stream of returns from housing ownership (and to eliminate the costs that arise when the home occupier is not the owner).

In turn, this means that inquiry into this issue should focus on whether post-Crisis, there are excessive restrictions and costs imposed on individuals looking to finance home purchases. That is, are the post-2008 laws and regulations designed to prevent a recurrence of the housing boom too restrictive?

I don’t have an answer to that question, but again, posing the right question is where you have to start.

My provisional conclusion now is that institutional investors are doing what they do: responding to price signals in order to maximize risk adjusted returns. They are responding to incentives. To evaluate what is going on, it is necessary to evaluate whether those incentives have been distorted by ill-conceived policies.

Of course, these policies were not created in a vacuum. They are the result of a political process that includes lobbying and rent seeking by institutional investors, among others. They have an incentive to harm potential competitors in the housing market. So any inquiry should also focus on whether these institutional investors have helped rig the game against individuals by pressing for the imposition of unwarranted restrictions on home financing. If so, censorious judgment would be warranted.

So is burgeoning institutional ownership of single family housing a 2020s version of Bad Day at Black Rock? A 2020s film noir? I don’t know. But I have the questions and some provisional answers.


June 9, 2021

GiGi’s Back!: plus ça change, plus c’est la même chose

Filed under: Clearing,Economics,Exchanges,HFT,Regulation — cpirrong @ 2:45 pm

One of the few compensations I get from a Biden administration is that I have an opportunity to kick around Gary Gensler–“GiGi” to those in the know–again. Apparently feeling his way in his first few months as Chairman of the SEC, Gensler has been relatively quiet, but today he unburdened himself with deep thoughts about stock market structure. If you didn’t notice, “deep” was sarcasm. His opinions are actually trite and shallow, and betray a failure to ask penetrating questions. Plus ça change, plus c’est la même chose.

Not that he doesn’t have questions. About payment for order flow (“PFOF”) for instance:

Payment for order flow raises a number of important questions. Do broker-dealers have inherent conflicts of interest? If so, are customers getting best execution in the context of that conflict? Are broker-dealers incentivized to encourage customers to trade more frequently than is in those customers’ best interest?

But he misses the big question: why is payment for order flow such a big deal in the first place?

Relatedly, Gensler expresses concern about what traders do in the dark:

First, as evidenced in January, nearly half of the trading interest in the equity market either is in dark pools or is internalized by wholesalers. Dark pools and wholesalers are not reflected in the NBBO. Moreover, the NBBO is also only as good as the market itself. Thus, under the segmentation of the current market, nearly half of trading along with a significant portion of retail market orders happens away from the lit markets. I believe this may affect the width of the bid-ask spread.

Which begs the question: why is nearly half of the trading interest in the equity market either in dark pools or internalized by wholesalers?

Until you answer these big questions, studying ancillary ones like his questions regarding PFOF and the NBBO is a waste of time.

The economics are actually very straightforward. In competitive markets, customers who impose different costs on suppliers will pay different prices. This is “price discrimination” of a sort, but not price discrimination based on an exploitation of market power and differences in customer demand elasticities: it is price differentiation based on differences in cost.

Retail order flow is cheaper to intermediate than institutional order flow. Some institutional order flow is cheaper to intermediate than other such flows. Competitive pressures will find ways to ensure flows that are cheaper to intermediate pay lower prices. PFOF, dark pools, etc., are all means of segmenting order flow based on cost.
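A stylized Glosten-Milgrom-style calculation makes the logic concrete. The probabilities and price moves below are invented for illustration: a competitive market maker’s break-even half-spread is roughly the probability the next order is informed times the informed trader’s edge, so flow that is rarely informed can be executed more cheaply, and a single pooled price makes the uninformed pay for the informed.

```python
# Stylized adverse-selection arithmetic (Glosten-Milgrom flavor).
# A zero-profit market maker's half-spread is roughly alpha * sigma, where
# alpha = probability the next order is informed and sigma = the price move
# the informed trader knows about. All numbers are hypothetical.

sigma = 0.50          # $ per share move known to informed traders
alpha_retail = 0.02   # retail flow: rarely informed
alpha_inst   = 0.20   # institutional flow: more often informed

def half_spread(alpha, sigma):
    """Break-even half-spread for a competitive market maker."""
    return alpha * sigma

print(f"retail half-spread:        ${half_spread(alpha_retail, sigma):.3f}")
print(f"institutional half-spread: ${half_spread(alpha_inst, sigma):.3f}")

# A single pooled price must cover the blended adverse-selection cost:
share_retail = 0.5
alpha_pool = share_retail * alpha_retail + (1 - share_retail) * alpha_inst
print(f"pooled half-spread:        ${half_spread(alpha_pool, sigma):.3f}")
# Retail pays more under pooling than under segmented, cost-based pricing:
# a uniform price is a cross-subsidy from uninformed to informed flow.
```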

Trying to restrict cost-based price differences by banning or restricting certain practices will lead clever intermediaries to find other ways to differentiate based on cost. This has always been so, since time immemorial.

In essence, Gensler and many other critics of US market structure want to impose uniform pricing that doesn’t reflect cost differences. This would be, in essence, a massive scheme of cross subsidies. Ironically, the retail traders for whom Gensler exhibits such touching concern would actually be the losers here.

Cross subsidy schemes are inherently unstable. There are tremendous competitive pressures to circumvent them, as the history of virtually every regulated sector (e.g., transportation, communications) has demonstrated for decades, and even centuries.

From a positive political economy perspective, the appeal of such cross subsidy schemes to regulators is great. As Sam Peltzman pointed out in his amazing 1976 JLE piece “Toward a More General Theory of Regulation,” regulators systematically attempt to suppress cost-based price differences in order to redistribute rents to gain political support. The main impetus for deregulation is innovation that exploits gains from trade from circumventing cross subsidy schemes–deregulation in banking (Regulation Q) and telecoms are great examples of this.

So who would the beneficiaries of this cross-subsidization scheme be? Two major SEC constituencies–exchanges, and large institutional traders.

In other words, all this chin pulling about PFOF and dark markets is politics as usual. Furthermore, it is politics as usual in the cynical sense that the supposed beneficiaries of regulatory concern (retail traders) are the ones who will be shtupped.

Gensler also expressed dismay at the concentration in the PFOF market: yeah, he’s looking at you, Kenneth. Getting the frequency?

Although Gensler’s systemic risk concern might have some justification, he still fails to ask the foundational question: why is it concentrated? He doesn’t ask, so he doesn’t answer, instead saying: “Market concentration can deter healthy competition and limit innovation.”

Well, concentration can also be the result of healthy competition and innovation (h/t the great Harold Demsetz). Until we understand the existing concentration we can’t understand whether it’s a bug or feature, and hence what the appropriate policy response is.

Gensler implicitly analogizes, say, Citadel to Facebook or Google, which harvest customer data and can exploit network effects, which drives concentration. The analogy seems very strained here. Retail order flow is cheap to service because it is uninformed. Citadel (or other purchasers of order flow) isn’t learning something about consumers that it can use to target ads at them or the like. The main thing it is learning is which sources of order flow are uninformed, and which are informed–so it can avoid paying to service the latter.

Again, before plunging ahead, it’s best to understand what are the potential agglomeration economies of servicing order flow.

Gensler returns to one of his favorite subjects–clearing–at the end of his talk. He advocates reducing settlement time from T+2: “I believe shortening the standard settlement cycle could reduce costs and risks in our markets.”

This is a conventional–and superficial–view that suggests that when it comes to clearing, Gensler is like the Bourbons: he’s learned nothing, and forgotten nothing.

As I wrote at the peak of the GameStop frenzy (which may repeat with AMC or some other meme stock), shortening the settlement cycle involves serious trade-offs. Moreover, it is by no means clear that it would reduce costs or reduce risks. The main impact would be to shift costs, and transform risks in ways that are not necessarily beneficial. Again, shortening the settlement cycle involves a substitution of liquidity risk for credit risk–just as central clearing does generally, a point which Gensler was clueless about in 2010 and is evidently equally clueless about a decade later.

So GiGi hasn’t really changed. He is still offering nostrums based on superficial diagnoses. He fails to ask the most fundamental questions–the Chesterton’s Fence questions. That is, understand why things are the way they are before proposing to change them.


April 24, 2021

Why Is Proof of Efficacy Required for Pharmaceutical Interventions, But NOT Non-Pharmaceutical Ones?

Filed under: China,CoronaCrisis,Economics,Politics,Regulation — cpirrong @ 11:43 am

Under Federal law, a pharmaceutical intervention must be proven safe and effective before it is marketed to the public. If after introduction it proves unsafe or ineffective, the Food and Drug Administration can rescind its approval.

Note the burden of proof: the manufacturer must prove safety and efficacy. Safety and efficacy are not rebuttable presumptions.

Would that the same were true of non-pharmaceutical interventions (NPIs). This neologism (neoanacronym?) is used to describe the policies that have been imposed during the Covid Era–most particularly, lockdowns and masks.

Neither had been proven safe or effective prior to their wholesale–and I daresay, indiscriminate–use. Lockdowns in particular had never been subjected to any clinical experiment or trial. Indeed, the idea had been evaluated by epidemiologists and others, and soundly rejected. But a policy first introduced in a police state–China–spread just as rapidly as the virus to supposedly non-police states despite it never having been proven efficacious or safe.

A year’s experience has produced the evidence. Greetings, fellow lab rats!

And the evidence shows decisively that lockdowns are NOT effective at affecting any medically meaningful metric about Covid. This American Institute of Economic Research piece provides an overview of the evidence through December: subsequent studies have provided additional evidence.

Furthermore, lockdowns have been proven to be unsafe. Unsafe to incomes, especially for those whose jobs do not permit working from home. Unsafe for physical health, in the form of, inter alia, deferred cancer diagnoses, deferred treatment for heart attacks and strokes, and greater substance abuse (with a higher incidence of overdoses), as well as delayed “elective” surgeries that improve life quality. Unsafe for mental health. Unsafe for children, in particular, who have experienced debilitating social isolation and profound disruption in their educations. (Although given the trajectory of American public education, especially post-George Floyd/Derek Chauvin, feral children might be better off than those subjected to the tortures of a CRT-infused curriculum and CRT-Kool-Aid-drinking “educators.”)

Masks are not as devastating as lockdowns, but they have also been shown to be ineffective and also unsafe, especially for those who must wear them for extended stretches–which includes in particular children at school.

(Remember “For the children”? Ah, good times. Good times.)

Drug regulation was one of the first major initiatives of the Progressive Era, and the 1962 FDA Amendments that imposed the efficacy requirement were also driven by progressives. My assessment of the economic evidence (especially the literature spawned by my thesis advisor, the great Sam Peltzman) is that the efficacy requirement in particular has been harmful, on net, because it delayed and in some cases prevented the introduction of beneficial therapies.

But even if–especially if–you accept the progressive-inspired conventional wisdom regarding pharmaceutical intervention regulation, you should be dismayed and even furious that the same logic has NOT been applied to NPIs. The underlying principle of drug regulation has been “show me”: show me something works. The underlying principle of Covid Era ukases has been: “Evidence? Evidence? I don’t have to show any stinkin’ evidence.” Indeed, it’s been worse than that: those who demand evidence, or even politely point out the lack of evidence, are branded as heretics by the very same “progressives” who believe religiously that requiring proof of efficacy of drugs is a good thing.

How to square this circle? How to explain this seeming contradiction?

I think it is as plain as the nose on your face. Power. In particular, power exercised by progressive technocratic elites. The FDA acts empower a progressive technocratic elite. Lockdowns and mask mandates empower a progressive technocratic elite–far beyond the wildest dreams of the most zealous FDA bureaucrat. (They also empower idiot politicians who imagine themselves to be part of some elite.) They are both premised on the belief that individuals are incompetent to choose wisely, and must be coerced into making the right choice. Coerced by credentialed elites who are better than you proles.

So an apparent logical inconsistency–proof of efficacy for thee, but not for me–is in fact no inconsistency at all. They are both who, whom. A soi disant elite (ha!) always pushes the alternative that gives them the most power, and deprives you of the most choice. Who (the progressives): Whom (you).


April 5, 2021

Justice Thomas Echoes SWP, But Alas Our Proposals Regarding Tech Companies Are Futile In Today’s Corporatist State

Filed under: Economics,Politics,Regulation — cpirrong @ 7:17 pm

Over four years ago, to address social media platforms’ exclusion on the basis of viewpoint (i.e., censorship) I advocated treating them as common carriers subject to a non-discrimination requirement. The thrust of my argument was that these platforms have substantial market power and are subject to weak competitive discipline due to network effects and other technological factors.

In a concurrence to the Supreme Court’s decision vacating as moot a lower court ruling that found Donald Trump violated First Amendment rights by blocking users on Twitter, Justice Clarence Thomas came out strongly in favor of the common carrier approach to regulating Twitter, Facebook, and Google.

Justice Thomas’ reasoning follows mine quite closely:

It changes nothing that these platforms are not the sole means for distributing speech or information. A person always could choose to avoid the toll bridge or train and instead swim the Charles River or hike the Oregon Trail. But in assessing whether a company exercises substantial market power, what matters is whether the alternatives are comparable. For many of today’s digital platforms, nothing is.

Justice Thomas also notes, as I did, that limiting common carriers’ right to exclude is a longstanding element of the American and British legal systems: “our legal system and its British predecessor have long subjected certain businesses, known as common carriers, to special regulations, including a general requirement to serve all comers.” To this, somewhat more perfunctorily, Justice Thomas adds more modern public accommodation laws as a restriction on businesses’ ability to exclude. Common carriage is a narrower conception because it generally requires some market power on the part of the company, and for this reason I find it a superior basis for regulating social media companies. But regardless, this is hardly a radical proposal, and is in fact deeply embedded in law dating from a classical liberal–i.e., laissez faire–period.

Thomas notes that imposing such a restriction is up to the legislature. Alas, that’s not likely, especially given the influence the social media and tech companies have on the legislature, and more ominously, the clearly expressed interest of the party in power to use the social media and tech companies to exclude and censor speech by their political opponents–whom I daresay they consider political enemies, and indeed, beyond the pale and deserving of banishment from the public sphere.

The leftist party in power cannot restrict speech directly–that would violate the First Amendment. And this is where Twitter, Facebook, Google, Amazon etc. can be quite useful to the leftist party in power. As private entities, their exclusion of speech from their platforms does not facially violate 1A. So note with care the pressure that leftist legislators are putting on these companies to police speech even more than they do already. These members of the party in power are outsourcing censorship to ostensibly private entities as a way of circumventing the Constitution.

As their previous behavior indicates, moreover, these companies do not necessarily need much prompting. They are ideologically aligned with the party in power, and are implementing much politically-slanted censorship of their own volition.

This symbiosis between the private businesses and the governing party is the essence of the political-economic model of fascism. At times, the relationship looks like an Escher etching–in particular, Drawing Hands, in which each hand draws the other.

Which hand is the Democratic Party, and which one is Twitter et al? That is, is the Democratic Party driving social media companies, or are social media companies pulling the strings of the Democratic Party?

The answer is both–like in the Escher. And that is the essence of the political-economic model of fascism. Corporations are acting as political actors, and politicians and those in government are using corporations to advance their political agenda. This is true in any political system, but the symbiosis is far, far stronger in fascist ones, and the antagonisms far weaker than in more liberal polities.

And as we’ve seen in recent months, it’s not just social media and tech companies that are involved. Corporate America generally has adopted a leftist political agenda, is advancing this agenda, and is attempting to pressure governments–especially state governments–to do so as well.

The injection of companies like the major airlines–all of them–and Coca Cola into the Georgia (and now Texas) voting law controversies is the most recent example. But entertainment companies–including professional sports as well as Hollywood, music businesses, etc.–are also exerting substantial political muscle.

Corporatism–a strong symbiotic relationship between government and powerful economic entities, especially corporations–is the essence of fascist economic systems. That is exactly what “capitalism” in the United States is today.

In such a system, the public-private dichotomy does not exist, and libertarians/classical liberals who act as if it does are useful idiots for the corporatists.

This model is also a good characterization of the Chinese system, which, although ostensibly communist, has become clearly corporatist/fascist in the post-Deng era. Interestingly, the main struggle today in China is between the state/Party and large corporations that Xi and his minions believe are too powerful and hence too independent of the state. Even in symbiotic relationships, there is a struggle for power–and for control over the rents.

So while I applaud Justice Thomas for advocating legislation to impose common carrier status on tech behemoths, it must be acknowledged that this proposal is naive in the current environment. The mutual interest between the current party in power and corporate interests in advancing political agendas generally, and suppressing speech in particular (in part because it also helps advance those agendas), is so great that such legislation cannot come to pass today. It is doubtful that it would have come to pass even had Trump won reelection. The slide into corporatism/economic fascism has progressed too far to hold out much hope that it can be reversed, absent some social convulsion.


March 15, 2021

Deliver Me From Evil: Platts’ Brent Travails

Filed under: Commodities,Derivatives,Economics,Exchanges,Politics,Regulation — cpirrong @ 6:41 pm

In its decision to speedily change the Dated “Brent” crude oil assessment to include US crude and to move it to a CIF basis, Platts hit a hornets’ nest with a stick and is now running away from the angry hive.

Platts’ attempt to change the contract makes sense. Dated “Brent” is an increasingly, well, dated benchmark due to the inexorable decline in North Sea production volumes, something I’ve written about periodically for the last 10 years or so. At present, only about one cargo per day is eligible, and this is insufficient to prevent squeezes (some of which have apparently occurred in recent months). The only real solution is to add more supply. But what supply?

Two realistic alternatives were on offer: to add oil from Norway’s Johan Sverdrup field, or to add non-North Sea oil (such as West African or US). Each presents difficulties. The Sverdrup field’s production is in the North Sea, but it is heavier and more sour than other oil currently in the eligible basket. West African or US oil is comparable in quality to the current Brent basket, but it is far from the North Sea.

Since derivatives prices converge to the cheapest-to-deliver, just adding either Sverdrup or US oil on a free on board basis to the basket would effectively turn Dated Brent into Dated Sverdrup or Dated US: Sverdrup oil would be cheaper than other Brent-eligible production because of its lower quality, and US oil would be cheaper due to its greater distance from consumption locations. So to avoid creating a US oil or Sverdrup oil contract masquerading as a Brent contract, Platts needs to establish pricing differentials to put these on an even footing with legacy North Sea grades.

In the event, Platts decided to add US oil. In order to address the price differential issue, it decided to move the pricing basis from free on board (FOB) North Sea, to a cost, insurance, and freight (CIF) Rotterdam basis. It also announced that it would continue to assess Brent FOB, but this would be done on a netback basis by subtracting shipping costs from the CIF Rotterdam price.
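A toy sketch of the cheapest-to-deliver logic and the netback adjustment, with invented prices and freight rates (placeholders, not market quotes), shows why the CIF basis puts the grades on a more even footing:

```python
# Toy cheapest-to-deliver arithmetic with invented prices and freight rates.
# An assessment keyed to the cheapest eligible barrel gravitates to whichever
# grade is cheapest once location differences are ignored.

freight_to_rotterdam = {"North Sea grade": 0.40, "US (WTI Midland)": 1.80}  # $/bbl, hypothetical
fob_price            = {"North Sea grade": 64.00, "US (WTI Midland)": 62.50}

# FOB comparison with no adjustment: the distant barrel looks $1.50 cheaper
# and would set the assessment, turning "Dated Brent" into "Dated US".
print("cheapest FOB:", min(fob_price, key=fob_price.get))

# CIF Rotterdam basis: compare barrels where they actually compete, delivered.
cif = {g: fob_price[g] + freight_to_rotterdam[g] for g in fob_price}
print("CIF Rotterdam:", {g: round(p, 2) for g, p in cif.items()})  # gap shrinks to $0.10
print("cheapest CIF:", min(cif, key=cif.get))

# A netback FOB assessment then subtracts freight from the CIF benchmark.
benchmark_cif = min(cif.values())
netback_fob = {g: round(benchmark_cif - freight_to_rotterdam[g], 2) for g in cif}
print("netback FOB:", netback_fob)
```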

The proposal makes good economic sense. And I surmise that’s exactly why it is so controversial.

This cynical assessment is based on a near decade of experience (from 1989 to 1997) in redesigning legacy futures contracts. From ’89-’91, in the aftermath of the Ferruzzi soybean corner, I researched and authored a report (published here–cheap! only one left in stock!) commissioned by the CBOT that recommended adding St. Louis as a corn and soybean delivery point at a premium to Chicago; in ’95-’96, in the aftermath of a corner of canola, I advised the Winnipeg Commodity Exchange about a redesign of its contract; in ’97, I was on the Grain Delivery Task Force at the CBOT which radically redesigned the corn and beans contracts–a design that remains in use today.

What did I learn from these experiences? Well, a WCE board member put it best: “Why would I want a more efficient contract? I make lots of money exploiting the inefficiencies in the contract we have.”

In more academic terms: rent seeking generates opposition to changes that make contracts more efficient, and in particular, more resistant to market power (squeezes, corners and the like).

Some anecdotes. In the first experience, many members of the committee assigned to consider contract changes–including the chairman (I can name names, but I won’t!)–were not pleased with my proposal to expand the “economic par” delivery playground beyond Chicago. During the meeting where I presented my results, the committee chairman and I literally almost came to blows–the reps from Cargill and ADM bodily removed the chairman from the room. (True!)

The GDTF was formed only because a previous committee formed to address the continued decline of the Chicago market was deadlocked on a solution. The CBOT had followed the tried-and-true method of getting all the big players into the room, but their interests were so opposed that they could not come to agreement. Eventually the committee proposed some Frankenstein’s monster that attempted to stitch together pieces from all of the proposals of the members, which nobody liked. (It was the classic example of a giraffe being a horse designed by committee.) It was not approved by the CBOT, and when the last Chicago delivery elevator closed shortly thereafter, the CFTC ordered the exchange to change the contract design, or risk losing its contract market designation.

Faced with this dire prospect, CBOT chairman Pat Arbor (a colorful figure!) decided to form a committee that included none of the major players like Cargill or ADM. Instead, it consisted of Bill Evans from Iowa Grain, Neal Kottke of Kottke Associates (an independent FCM), independent grain trader Tom Neal, and some outsider named Craig Pirrong. (They were clearly desperate.)

In relatively short order we hashed out a proposal for delivery on the Illinois River, at price differentials reflecting transportation costs, and a shipping certificate (as opposed to warehouse receipt) delivery instrument. After a few changes demanded by the CFTC (namely extending soybean delivery all the way down the River to St. Louis, rather than stopping at Peoria–or was it Pekin?), the design was approved by the CBOT membership and went into effect in 1998.

One thing that we did that caused a lot of problems–including in Congress, where the representative from Toledo (Marcy Kaptur) raised hell–was to drop Toledo as a delivery point. This made economic sense, but it did not go over well with certain entities on the shores of Lake Erie. Again–the distributive effects raised their ugly heads.

The change in the WCE contract–which was also eminently sensible (of course, since it was largely my idea!)–also generated a lot of heat within the exchange, and politically within Alberta, Manitoba, and Saskatchewan.

So what did I learn? In exchange politics, as in politics politics, efficiency takes a back seat to distributive considerations. This insight inspired and informed a couple of academic papers.

I would bet dimes to donuts that’s exactly what is going on with Platts and Brent. Platts’ proposal for a more efficient pricing mechanism gores some very powerful interests’ oxen.

Indeed, the rents at stake in Brent are far larger than those even in CBOT corn and beans, let alone tiny canola. The Brent market is vastly bigger. The players are bigger–Shell or BP or Glencore make even 1997 era Cargill look like a piker. Crucially, open interest in Brent-based instruments extends out until 2029: open interest in the ags went out only a couple of years.

My surmise is that the addition of a big new source of deliverable supply (the US) would undercut the potential for delivery games exploiting “technical factors” as they are sometimes euphemistically called in the North Sea. This would tend to reduce the rents of those who have a comparative advantage in playing these games.

Moreover, adding more deliverable supply than people had anticipated would be available when they entered into contracts–last year, or the year before, or the year before that–contracts that extend out for years, would tend to cause the prices of these longer-dated contracts to fall. This would transfer wealth from the longs to the shorts, and there is no compensation mechanism. There would be big winners and losers from this.

It is these things that stirred up the hornets, I am almost sure. I don’t envy Platts, because Dated Brent clearly needs to be fixed, and fast (which no doubt is why Platts acted so precipitously). But any alternative that fixes the problems will redistribute rents and stir up the hornets again.

In 1997 the CBOT got off its keister because the CFTC ordered it to do so, and had the cudgel (revoking contract designation) to back up its demand. There’s no comparable agency with respect to Brent, and in any event, any such agency would be pitted against international behemoths, making it doubtful it could prevail.

As a result, I expect this to be an extended saga. Big incumbent players lose too much from a meaningful change, so change will be slow in coming, if it comes at all.


February 22, 2021

GameStop: Round Up the Usual Suspects

Filed under: Clearing,Derivatives,Economics,Politics,Regulation — cpirrong @ 7:52 pm

Shuttling between FUBARs, it’s back to GameStop!

Last week there were House hearings regarding the GameStop saga. As is usual with these things, they were more a melange of rampant narcissism and political posing and outright stupidity than a source of information. Everyone had an opportunity to identify and then flog their favorite villains and push their favorite “solutions.” All in all, very few constructive observations or remedies came out of the exercise. I’m sure you’re shocked.

Here are a few of the main issues that came up.

Shortening the securities settlement cycle. The proximate cause of Robinhood’s distress was a huge margin call. Market participants post margins to mitigate the credit risk inherent in a two day settlement cycle. Therefore, to reduce margins and big margin calls, let’s reduce the settlement cycle! Problem solved!

No, problem moved. Going to T+0 settlement would require buyers to stump up the cash and sellers to secure the stock on the same day of the transaction. Almost certainly, this wouldn’t result in a reduction of credit in the system, but just cause buyers to borrow money to meet their payment obligations. Presumably the lenders would not extend credit on an unsecured basis, but would require collateral with haircuts, where the haircuts will vary with risk: bigger haircuts would require the buyers to put up more of their own cash.

I would predict that to a first approximation the amount of credit risk and the amount of cash buyers would have to stump up would be pretty much the same as in the current system. That is, market participants would try to replicate the economic substance of the way the market works now, but use different contracting arrangements to obtain this result.

I note that when payment systems moved to real-time gross settlement to reduce the credit risk that participants faced under deferred net settlement, central banks stepped in to offer intraday credit to keep the system working.

It’s also interesting to note that what DTCC did with GameStop is essentially move to T+0 settlement by requiring buyers to post margin equal to the purchase price:

Robinhood made “optimistic assumptions,” Admati said, and on Jan. 28, Tenev woke up at 3:30 a.m. and faced a public crisis. With a demand from a clearinghouse to deposit money as a safety measure hedging against risky trades, he had to get $1 billion from investors. Normally, Robinhood only has to put up $2 for every $100 to vouch for their clients, but now, the whole $100 was required. Thus, trading had to be slowed down until the money could be collected.

That is, T+0 settlement is more liquidity/cash intensive. As a result, a movement to such a system would lead to different credit arrangements to provide the liquidity.
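Some back-of-the-envelope arithmetic, using an invented notional and the roughly $2-per-$100 deposit figure from the quote above, shows the scale of the liquidity demand:

```python
# Back-of-the-envelope arithmetic (invented notional) for the cash demands
# of different settlement arrangements, per the quoted ~$2 of clearing
# deposit per $100 of customer purchases versus posting the full price.

gross_buys = 2_000_000_000          # hypothetical one day of customer buys, $
deposit_rate_normal = 0.02          # ~$2 per $100 under normal margining
deposit_rate_stress = 1.00          # full purchase price (economically T+0)

for label, rate in [("normal", deposit_rate_normal),
                    ("stressed/T+0", deposit_rate_stress)]:
    print(f"{label:>13}: broker must front ${gross_buys * rate:,.0f}")
# The clearinghouse's credit risk shrinks, but the cash the broker must
# raise, on short notice, balloons 50x: credit risk is traded for
# liquidity risk, not eliminated.
```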

As always, you have to look at how market participants will respond to proposed changes. If you require them to pay cash sooner by changing the settlement cycle, you have to ask: where is the cash going to come from? The likely answer: the credit extended through the clearing system will be replaced with some other form of credit. And this form is not necessarily preferable to the current form.

Payment for order flow (“PFOF”). There is widespread suspicion of payment for order flow. Since Robinhood is a major seller of order flow, and since Citadel is a major buyer, there have been allegations that this practice is implicated in the fiasco:

Reddit users questioned whether Citadel used its power as the largest market maker in the U.S. equities market to pressure Robinhood to limit trading for the benefit of other hedge funds. The theory, which both Robinhood and Citadel criticized as a conspiracy, is that Citadel Securities gave deference to short sellers over retail investors to help short sellers stop the bleeding. The market maker also drew scrutiny because Citadel, the hedge fund, together with its partners, invested $2 billion into Melvin Capital Management, which had taken a short position in GameStop.

To summarize the argument, Citadel buys order flow from Robinhood, Citadel wanted to help out its hedge fund bros, something, something, something, so PFOF is to blame. Association masquerading as causation at its worst.

PFOF exists because when some types of customers are cheaper to service than others, competitive forces will lead to the design of contracting and pricing mechanisms under which the low cost customers pay lower prices than the high cost customers.

In stock trading, uninformed traders (and going out on a limb here, but I’m guessing many Robinhood clients are uninformed!) are cheaper to intermediate than better informed traders. Specifically, market makers incur lower adverse selection costs in dealing with the uninformed. PFOF effectively charges lower spreads for executing uninformed orders.

This makes order flow on lit exchange markets more “toxic” (i.e., it has a higher proportion of informed order flow because some of the uninformed flow has been siphoned off), so spreads on those markets go up.
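A little break-even-spread arithmetic (hypothetical volumes, an invented informed-trader edge) illustrates the mechanism:

```python
# Hypothetical numbers illustrating why siphoning off uninformed (retail)
# flow makes the residual lit-market flow more "toxic" and widens spreads.
# Break-even half-spread ~= (share of flow that is informed) * (informed edge).

informed_edge = 0.50          # $/share, hypothetical
informed_volume = 20.0        # arbitrary volume units
uninformed_volume = 80.0

def lit_half_spread(uninformed_on_lit):
    alpha = informed_volume / (informed_volume + uninformed_on_lit)
    return alpha * informed_edge

print(f"all flow on lit markets:     ${lit_half_spread(uninformed_volume):.3f}")
print(f"half of retail internalized: ${lit_half_spread(uninformed_volume / 2):.3f}")
# 0.100 -> 0.167: the remaining, mostly informed, flow pays wider spreads,
# which is why it would rather see retail forced back onto lit venues.
```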

And I think this is what really drives the hostility to PFOF. The smarter order flow that has to trade on lit markets doesn't like the two-tiered pricing structure. It would prefer that all order flow be forced onto lit markets (by restricting PFOF). This would cause the uninformed order flow to cross-subsidize the more informed order flow.

The segmentation of order flow may make prices on lit markets less informative. Although the default response among finance academics is to argue that more informative is better, this is not generally correct. The social benefits of more accurate prices (e.g., do they lead to better investment decisions?) have not been quantified. Moreover, informed trading (except perhaps, ironically, for true insider trading) involves the use of real resources (on research, and the like). Much of the profit of informed trading is a transfer from the uninformed, and to that extent it is a form of rent seeking. So the social ills of less informative prices arising from the segmentation of order flow are not clearcut: less investment in information may actually be a social benefit.

There is a question of how much of the benefit of PFOF gets passed on to retail traders, and how much the broker pockets. Given the competitiveness of the brokerage market–especially due to the entry of the likes of Robinhood–it is likely a large portion gets passed on to the ultimate customer.

In sum, don’t pose as a defender of the little guy when attacking PFOF. They are the beneficiaries. Those attacking PFOF are actually doing the bidding of large sophisticated and likely better informed investors.

HFT. This one I really don’t get. There is HFT in the stock market. Something bad happened in the stock market. Therefore, HFT caused the bad thing to happen.

The Underpants Gnomes would be proud. I have not seen a remotely plausible causal chain linking HFT to Robinhood’s travails, or the sequence of events that led up to them.

But politicians gonna politician, so we can’t expect high order logical thinking. The disturbing thing is that the high order illogical thinking might actually result in policy changes.


February 21, 2021

Touching the Third Rail: The Dangers of Electricity Market Design

In the aftermath of the Texas Freeze-ageddon much ink and many pixels have been spilled about its causes. Much–most?–of the blame focuses on Texas’s allegedly laissez faire electricity market design.

I have been intensely involved (primarily in a litigation context) in the forensic analysis of previous extreme electricity market shocks, including the first major one (the Midwest price spike of June 1998) and the California crisis. As an academic I have also written extensively about electricity pricing and electricity market design. Based on decades of study and close observation, I can say that electricity market design is one of the most complex subjects in economics, and that one should step extremely gingerly when speaking about the topic, especially as it relates to an event for which many facts remain to be established.

Why is electricity market design so difficult? Primarily because it requires structuring incentives that affect behavior over both very long horizons (many decades, because investments in generation and transmission are very long lived) and extremely short horizons (literally seconds, because the grid must balance at every instant in time). Moreover, there is an intimate connection between these extremely disparate horizons: the mechanisms designed to handle the real time operation of the system affect the incentives to invest for the long run, and the long run investments affect the operation of the system in real time.

Around the world many market designs have been implemented in the approximately 25 year history of electricity liberalization. All have been found wanting, in one way or another. They are like Tolstoy’s unhappy families: all are unhappy in their own way. This unhappiness is a reflection of the complexity of the problem.

Some were predictably wretched: California’s “reforms” in the 1990s being the best example. Some were reasonably designed, but had their flaws revealed in trying conditions that inevitably arise in complex systems that are always–always–subject to “normal accidents.”

From a 30,000 foot perspective, all liberalized market designs attempt to replace centralization of resource allocation decisions (as occurs in the traditional integrated regulated utility model) with allocation by price. The various systems differ primarily in what they leave to the price system, and what they do not.

As I wrote in a chapter in Andrew Kleit’s Energy Choices (published in 2006) the necessity of coordinating the operation of a network in real time almost certainly requires a “visible hand” at some level: transactions costs preclude the coordination via contract and prices of hundreds of disparate actors across an interconnected grid in real time under certain conditions, and such coordination is required to ensure the stability of that grid. Hence, a system operator–like ERCOT, or MISO, or PJM–must have residual rights of control to avoid failure of the grid. ERCOT exercised those residual rights by imposing blackouts. As bad as that was, the alternative would have been worse.

Beyond this core level of non-price allocation, however, the myriad of services (generation, transmission, consumption) and the myriad of potential conditions create a myriad of possible combinations of price and non-price allocation mechanisms. Look around the world, and you will see just how diverse those choices can be. And those actual choices are just a small subset of the possible choices.

As always with price driven allocation mechanisms, the key thing is getting the prices right. And due to the nature of electricity, this involves getting prices right at very high frequency (e.g., the next five minutes, the next hour, the next day) and at very low frequency (over years and decades). This is not easy. That is why electricity market design is devilish hard.

One crucial thing to recognize is that constraints on prices in some time frames can interfere with decisions made over other horizons. For example, most of the United States (outside the Southeast) operates under some system in which day-ahead or real-time prices are the primary mechanism for scheduling and dispatching generation over short horizons, but restrictions on these prices (e.g., price caps) mean that they do not always reflect the scarcity value of generating or transmission capacity. (Much of the rest of the world does this too.) As a result, these prices provide too little incentive to invest in capacity, and the right kinds of capacity. The kludge solution to this is to create a new market, a capacity market, in which regulators decide how much capacity of what type is needed, and mandate that load-serving entities acquire the rights to such capacity through capacity auctions. The revenues from these auctions provide an additional incentive for generators to invest in the capacity they supply.

The alternative is a pure energy market, in which prices are allowed to reflect scarcity value. In electricity markets, due to extremely inelastic demand and periodically extreme inelasticity of supply in the short run, that scarcity value can sometimes reach thousands of dollars per megawatt-hour.
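To make the "missing money" logic concrete, here is a stylized sketch. Every number is hypothetical and the calculation is a simplification, not a model of any actual market: a peaking generator must recover its fixed costs from the margin it earns during a handful of scarcity hours per year, and a price cap below scarcity value leaves a shortfall that a capacity market is meant to fill.

```python
# Illustrative "missing money" calculation (all figures hypothetical): compare
# the annual scarcity rent a peaker earns under an uncapped energy-only price
# versus a $1,000/MWh cap, against the fixed cost it must recover.

fixed_cost_per_mw_year = 90_000   # $/MW-year, hypothetical
marginal_cost = 80                # $/MWh, hypothetical fuel and variable cost
scarcity_hours = 30               # hypothetical hours/year of scarcity pricing

def scarcity_rent(price):
    """Annual margin per MW earned during scarcity hours at a given price."""
    return scarcity_hours * max(price - marginal_cost, 0)

rent_energy_only = scarcity_rent(9_000)   # uncapped scarcity price near $9,000/MWh
rent_capped = scarcity_rent(1_000)        # a $1,000/MWh price cap

print(f"Energy-only scarcity rent:   ${rent_energy_only:,.0f}/MW-yr")
print(f"Capped scarcity rent:        ${rent_capped:,.0f}/MW-yr")
print(f"Missing money vs fixed cost: ${fixed_cost_per_mw_year - rent_capped:,.0f}/MW-yr")
```

With these illustrative numbers the capped market leaves a shortfall of roughly $62,000 per MW-year that some other payment, such as a capacity auction, would have to cover.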

Texas opted for the energy market model. However, other factors intervened to prevent prices from being right. In particular, heavy subsidies for renewables have systematically depressed prices, thereby undercutting the incentives to invest in thermal generation, and the right kind of thermal generation. This can lead to much bigger price spikes than would have occurred otherwise–especially when intermittent renewables output plunges.

Thus, a systematic downward price distortion can greatly exacerbate upward price spikes in a pure energy model. That, in a nutshell, is the reason for Texas’s recent (extreme) unhappiness.

As more information becomes available, it is clear that the initiator of the chain of events that left almost half the state in the dark for hours was a plunge in wind generation due to the freezing of wind turbines. Initially, combined cycle gas generation ramped up output dramatically to replace the lost wind output. But these resources could not sustain this effort because the cold-related disruptions in gas production, transmission, and distribution turned the gas generators into fuel limited resources. The generators hadn’t broken down, but couldn’t obtain the fuel necessary to operate.

It is certainly arguable that Texas should have recognized that the distortion in prices that arose from subsidization of wind (primarily at the federal level) that bore no relationship whatsoever to the social cost of carbon made it necessary to implement the kapacity market kludge, or some other counterbalance to the subsidy-driven wrong prices. It didn’t, and that will be the subject of intense debate for months and years to come.

It is essential to recognize, however, that the underlying reason why a kludge may be necessary is that the price wasn't right due to government intervention. When deciding how to change the system going forward, those interventions, and their elimination, should be front and center in the analysis and debate, rather than treated as sacrosanct.

There is also the issue of state contingent capacity. That is, the availability of certain kinds of capacity in certain states of the world. In electricity, the states of the world that matter are disproportionately weather-related. Usually in Texas you think of hot weather as being the state that matters, but obviously cold weather matters too.

It appears that the weatherization of power plants per se was less of an issue last week than the weatherization of fuel supplies upstream from the power plants. It is an interesting question whether the authority of ERCOT (the operator of the Texas grid) extends to mandating the technology utilized by gas producers. My (superficial) understanding is that it is unlikely to, and that any attempt to do so would lead to a regulatory turf battle (with the Texas Railroad Commission, which regulates gas and oil wells in Texas, and maybe FERC).

There is also the question of whether in an energy only market generators would have the right incentive to secure fuel supplies from sources that are more immune to temperature shocks than Texas's proved to be last week. Since such immunity does not come for free, generator contracts with fuel suppliers would require a price premium to obtain less weather-vulnerable supplies, and presumably a liability mechanism to penalize non-performance. The price premium is likely to be non-trivial. I have seen estimates that weatherizing Texas wells would cost on the order of $6-$9 million per well, which would at least double the cost of a well. Further, it would be necessary to incur additional costs to protect pipelines and gas processing facilities.

In an energy only market, the ability to sell at high prices during supply shortfalls would provide the incentive to secure supplies that allow producing during extreme weather events. The question then becomes whether this benefit times the probability of an extreme event is larger or smaller than the (non-trivial) cost of weatherizing fuel supply.

We have a pretty good idea, based on last week’s events, of what the benefit is. We have a pretty good idea of the cost of hardening fuel supplies and generators. The most imprecise input to the calculation is the probability of such an extreme event.
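To illustrate how the calculation turns on that probability, here is a rough sketch. The per-well cost range is the one quoted above; every other figure, including the annualization factor and the payoff per event, is an assumption made purely for illustration.

```python
# Sketch of the weatherization cost-benefit comparison (figures hypothetical
# apart from the quoted $6-9M per-well estimate): weatherize if the probability
# of an extreme freeze times the scarcity revenue a hardened supply would have
# earned exceeds the annualized cost of hardening.

weatherization_cost = 7_500_000   # midpoint of the quoted $6-9M per well
annualization_factor = 0.10       # crude annual capital charge, assumed
annual_cost = weatherization_cost * annualization_factor

mwh_saved_per_event = 2_000       # hypothetical MWh a hardened supply keeps online
margin_per_mwh = 8_900            # ~$9,000/MWh scarcity price less fuel cost, assumed

def weatherize_pays(prob_extreme_year):
    """Return (decision, expected annual benefit) for a given event probability."""
    expected_benefit = prob_extreme_year * mwh_saved_per_event * margin_per_mwh
    return expected_benefit > annual_cost, expected_benefit

for p in (0.01, 0.05, 0.10):
    pays, benefit = weatherize_pays(p)
    print(f"p={p:.2f}: benefit ${benefit:,.0f}/yr vs cost ${annual_cost:,.0f}/yr -> weatherize? {pays}")
```

With these made-up numbers the decision flips somewhere between a 1-in-100 and a 1-in-20 year event, which is exactly why the probability estimate is the crux.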

Then the question of market design–and specifically, whether weatherization should be mandated by regulation or law, and what form that mandate should take–becomes whether generation operators or regulators can estimate that probability more accurately.

In full awareness of the knowledge problem, my priors are that multiple actors responding to profit incentives will do a better job than a single actor (a regulator) operating under low-powered incentives and subject to political pressure (exerted not just by generators, but by those producing, processing, and transporting gas, industrial consumers, consumer lobbyists, etc., etc., etc.). Put differently, as Hayek noted almost 75 years ago, the competitive process and the price system are a way of generating information and using it productively, and have proved far more effective in most circumstances than centralized planning.

I understand that this opinion will be met with considerable skepticism. But note a few things. For one, a regulator's mistakes have systematic effects. Conversely, some private parties may overestimate the risk and others underestimate it: the composite signal is likely to be more accurate, and less vulnerable to the miscalculation of a single entity. For another, skeptics excoriate a regulator for its past failures, yet confidently predict that some future regulator will get it right. I'm the skeptic on that.
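The composite-signal point can be illustrated with a toy simulation. It is purely illustrative: the "true" probability, the noise level, and the number of firms are all assumptions, and it gives a single regulator exactly the same individual accuracy as each firm.

```python
# Toy simulation (illustrative assumptions only): many independent estimators,
# each as accurate as a single regulator, produce an average estimate with a
# much smaller error than any one estimate alone.

import random

random.seed(42)
true_prob = 0.05    # "true" annual probability of an extreme freeze, assumed
noise_sd = 0.03     # each estimator's error standard deviation, assumed identical
n_firms = 50
n_trials = 10_000

def estimate():
    """One noisy estimate of the true probability, truncated at zero."""
    return max(true_prob + random.gauss(0, noise_sd), 0.0)

regulator_err = sum(abs(estimate() - true_prob) for _ in range(n_trials)) / n_trials
composite_err = sum(
    abs(sum(estimate() for _ in range(n_firms)) / n_firms - true_prob)
    for _ in range(n_trials)
) / n_trials

print(f"Single-regulator mean abs error: {regulator_err:.4f}")
print(f"{n_firms}-firm composite mean abs error: {composite_err:.4f}")
```

The sketch leans on the assumption that the firms' errors are independent; correlated errors would shrink the advantage of the composite signal.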

Recent events also raise another issue that could undermine reliance on the price system. Many very unfortunate people entered into contracts in which their electricity bills were tied to wholesale prices. As a result, they are facing bills for a few days of electricity running into many thousands of dollars because wholesale prices spiked. This is indeed tragic for these people.

That spike, by the way, was up to $10,000/MWh, i.e., $10/kWh: orders of magnitude more than you usually pay.
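The arithmetic behind those bills is simple. The household usage and the number of days below are assumptions for illustration, not data from any actual customer.

```python
# Rough arithmetic behind the "many thousands of dollars" bills: a wholesale-
# indexed retail contract during a sustained stretch at the price cap.
# Usage and duration are hypothetical.

price_per_kwh = 10.0   # $10,000/MWh = $10/kWh at the cap
daily_usage_kwh = 60   # hypothetical household running electric heat in a freeze
days_at_cap = 4        # hypothetical days prices sat at or near the cap

bill = price_per_kwh * daily_usage_kwh * days_at_cap
print(f"Bill for {days_at_cap} days: ${bill:,.0f}")
# The same usage at a typical retail rate of ~12 cents/kWh would be roughly $29.
```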

It is clear that the individuals who entered these contracts did not understand the risks. And this is totally understandable: if you are going to argue that regulators or generators underplayed the risks, you can't expect the typical consumer to have appreciated them either. I am sure there will be lawsuits relating in particular to the adequacy of disclosure by the energy retailers who sold these contracts. But even if the fine print in the contracts disclosed the risks, many consumers may not have understood them even if they read it.

One of the difficulties that has long plagued electricity market design is getting consumers to see price signals so that they can limit use when supply is scarce. But exposing consumers to those signals will periodically involve paying stratospheric prices.

From a risk bearing perspective this is clearly inefficient. The risk should be transferred to the broader financial markets (through hedging mechanisms, for instance) because the risk can be diversified and pooled in those markets. But this is at odds with the efficient consumption perspective. This is not a circle that anyone has been able to square heretofore.

Moreover, the likely regulatory response to the extreme misfortune experienced by some consumers will be to restrict wholesale prices so that they do not reflect scarcity value. That is, an energy only market has a serious time consistency problem: regulators cannot credibly commit to allow prices to reflect scarcity value, come what may. This means that an energy only market may not be politically sustainable, regardless of its economic merits. I strongly suspect that this will happen in Texas.

In sum, as the title of the book I mentioned earlier indicates, electricity market design is about choices. Moreover, those choices are often of the pick-your-poison variety. This means that avoiding one kind of problem–like what Texas experienced–just opens the door to other problems. Evaluation of electricity market design should not over-focus on the most recent catastrophe while being blind to the potential catastrophes lurking in alternative designs. But I realize that’s not the way politics work, and this will be an intensely political process going forward. So we are likely to learn the wrong lessons, or grasp at “solutions” that pose their own dangers.

As a starting point, I would undo the most clearcut cause of wrong prices in Texas: subsidization of wind and other renewables. Alas, even if stopped tomorrow, the baleful effect of those subsidies will persist long into the future, because they have impacted decisions (investment decisions) on the long horizon I mentioned earlier. But other measures, such as mandated reserve margins and capacity markets, or hardening fuel supplies, will also only have effects over long horizons. For better or worse, and mainly worse, Texas will operate under the shadow of political decisions made long ago. And made primarily in DC, rather than Austin.
