Streetwise Professor

October 22, 2020

VOLT Redux

Filed under: Clearing,Derivatives,Economics,Exchanges,Regulation — cpirrong @ 6:44 pm

The very first substantive post on this blog, almost 15 years ago, was about a failure of the electronic trading system at the Tokyo Stock Exchange.

Whoops, they did it again!

Apparently believing that misery loves company, Euronext has also experienced failures.

Euronext’s problems seem rather more frightening, because they involve the out-trade from hell: reversing the polarity on transactions:

“It has been identified that some of the 19/10 trades sent yesterday to the CCPs (central counterparty clearing house) had the wrong buy/sell direction”, Euronext said.

Thought you were long? Hahahahahaha. You’re short, sucker!

I hate it when that happens! (Yes, Euronext reversed the trades after it realized the problem.)

The lessons of my “Value of Lost Trade” (“VOLT”) piece still hold. It is inefficiently costly to drive the probability of a failure to zero. Whether exchanges have the efficient probability of failure (or really, the efficient vector of failure probabilities, because there are multiple types of failure) depends on the value of foregone trades when a system is down (or the cost of other types of errors, such as reversing trade direction).
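The trade-off can be put in back-of-envelope terms. Here is a minimal sketch in Python, with all numbers purely hypothetical: hardening a system gets ever more expensive as the failure probability approaches zero, while the expected value of lost trade falls with that probability, so total cost is minimized at an interior failure probability, not at zero.

```python
# Illustrative sketch of the VOLT trade-off. All numbers are made up;
# only the shape of the argument matters.

def hardening_cost(p):
    # Cost of engineering the failure probability down to p. Driving p
    # toward zero is assumed increasingly (here, hyperbolically) costly.
    return 1e4 / p

def expected_failure_cost(p, value_of_lost_trade=5e9):
    # Expected cost of outages: probability of failure times the
    # (assumed) value of trades foregone while the system is down.
    return p * value_of_lost_trade

# Search a grid of failure probabilities for the total-cost minimizer.
candidates = [i / 100000 for i in range(1, 1001)]  # p from 0.00001 to 0.01
best_p = min(candidates,
             key=lambda p: hardening_cost(p) + expected_failure_cost(p))
```

With these invented numbers the minimizer is a failure probability of roughly 0.14 percent: small, but emphatically not zero.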

Meaning that system failures will continue to occur, long after this blog fades away.


“It’s Easy to Win an Argument With Milton, When He Isn’t There”

Filed under: CoronaCrisis,Economics,Politics,Regulation — cpirrong @ 6:30 pm

Raghuram Rajan is a smart guy who has done excellent and rigorous work, and is a gentleman to boot. But even such as he are capable of saying dodgy things, as in his comments to the FT on Friedman’s social responsibility article, in which he said the covid pandemic has exposed flaws in Friedman’s argument:

First, Covid-19 has threatened some companies with the extinction of shareholder value, subjecting businesses to a shock that, despite government intervention, has put their existence in question. “At this point,” Prof Rajan told me, “the best thing [a company with thin resources] could do is focus those resources on survival, because in surviving, it provides a decent job for its workers, it continues making that widget which people buy. It lives for the future.”

Not all companies came into the crisis with thin resources. For the tech companies, nursing war chests replenished by tech-hungry consumers in lockdown, this should be a chance to go beyond bare Friedmanite requirements

Amazon, for instance, could “do more for its various suppliers, some of whom may be struggling small and medium business units”, said Prof Rajan. “It could find ways to provide them more credit to last through the pandemic that will get it more loyalty, because people will know it can be a source of insurance, rather than just a platform.”

. . .

This sort of action exposes the “missing part” of Friedman’s thesis, said Prof Rajan. He failed to recognise that “implicit equity stakes” — such as the commitment of a company to the partnership with its workers, suppliers or customers — are “as important, sometimes, as the explicit equity stake”

These things are missing how, exactly? Essentially Rajan is arguing that there are gains from trade to be realized to a corporation from adjusting explicit and implicit contractual terms with “stakeholders” such as workers, suppliers, and customers, in response to an economic shock like covid. But note: such adjustments would enhance the corporation’s profits, by allowing it to capture some of those gains from trade.

Indeed, according to the Friedman norm, such companies, acting as profit maximizers, would benefit not just themselves, but their workers, suppliers, and customers. Thus, rather than being some lacuna in Friedman’s framework, what Rajan emphasizes is precisely why profit maximization in the price system should be encouraged, as Friedman did. It provides an incentive for corporations to engage in mutually beneficial transactions, regardless of the underlying circumstances. That is, profit maximization guides optimal responses to circumstances, even crappy circumstances. Nay, especially crappy circumstances.

Or perhaps I should say “in the contractual system.” For what is involved here is negotiating contracts that maximize joint surplus. As Coase tells us, absent transactions costs, firms and their counterparties will do just that, and profit maximization (or utility maximization by workers, say) is exactly the engine that powers that result.

So the only way to make this critique coherent is to argue that transactions costs could somehow be reduced by reshuffling organizational forms or control rights. This Rajan does not do. Nor has anyone who burps up the term “stakeholders” and proclaims “QED!” Not that I have seen anyways.

As I said in my earlier post: if you are so smart, why aren’t you rich? Why haven’t you–or anyone else–come up with an alternative organizational form that allows the creation and capture of gains from trade that corporations leave on the table?

Indeed, the most coherent restatement of the “stakeholder” argument is that corporations are failing to maximize profits, because they are failing to structure transactions with stakeholders that exhaust all gains from trade.

I’m tempted to cut Raghuram some slack because his remarks are impromptu statements made to a reporter, rather than in an academic article–or even a blog post. But the fact that something in an FT article is far more likely to resonate than a weighty academic tome or even a not-so-weighty academic blog post arguably cuts the other way: one should be on particular guard against expressing flabby thoughts, when said thoughts may be read by millions–and hence mislead millions. And, to be honest, Raghuram’s thoughts about the errors of Friedman’s thought during times of pandemic are very flabby indeed.

In reading all these critiques of Friedman, 50 years on, I’m reminded of something George Stigler said. “It’s easy to win an argument against Milton when he isn’t there.”


October 11, 2020

Facebook (and Twitter) Delenda Est

Filed under: Economics,Politics,Regulation — cpirrong @ 5:42 pm

I sent to a friend an article describing how WHO–yes, that WHO–is telling governments not to utilize lockdowns as their primary means of combatting Covid-19. I sent it via Facebook Messenger, because my friend lives in a rural area and that is often the only reliable way of transmitting messages: text and email often don’t work. The friend replied that the link didn’t work. I sent it via email, and it did work. Said friend then tried to post it to FB–but FB refused to post it.

So obviously, FB is censoring this information: it is a non-link as far as FB is concerned, consigned to the memory hole.

I’m so old that I remember when Facebook (and Twitter) censored articles that contradicted WHO. Now Facebook is censoring articles that contradict Facebook. Specifically, Facebook’s smelly pro-lockdown orthodoxy–even when that contradiction comes from WHO.

Facebook obviously loves lockdowns, and is going to do its damndest to prevent you from learning anything that might contradict that position.

This episode–and myriad others over the past couple of years–demonstrates that social media as it exists, and specifically as embodied by Facebook and Twitter, needs to be subjected to common carrier non-discrimination regulations along the lines of what I advocated over 3.5 years ago.

There are the simplistic-minded who claim that since these are private corporations, they should not be regulated. Thereby totally ignoring what I point out in my 2017 post: even in the halcyon days of classical liberalism, market power was understood to provide, under some circumstances, an exception to the general rule that private entities should be permitted to operate without restriction from government.

Some non-simplistic people–notably Richard Epstein, whose writings triggered my idea of applying common carrier regulation to social media–argue that the conditions for the exception do not hold. Even if Facebook and Twitter (and Google/YouTube, etc.) have dominant positions now, those positions are contestable. History suggests that market dominance is ephemeral, and a company that abuses its dominance will be displaced. More broadly, Schumpeterian creative destruction will, before long, consign current social media behemoths to the ash heap of history.

But how long “before long” is matters. It could be that in the long run, we are not just dead, but unfree. Or at least have suffered a grievous blow to our liberties, lost election by election.

In my opinion, Epstein underestimates the enduring impacts of network effects. I have studied exchanges–a classic beneficiary of network effects–for decades. I know how resilient they can be. Maybe Facebook (and Twitter, and Google/YouTube) will indeed be supplanted in 10 years. Hell, even 5. Hell, even 23 days.

What damage can they do in the meantime?

The suppression of information and opinion for days, let alone months or years, can have devastating effects. When the stakes in elections are so high, the distortion of the exchange of ideas and information that results from Facebook’s and Twitter’s and Google/YouTube’s censorship has very real consequences, even if someday, somehow, they will become historical curiosities.

Let’s just do some basic cost-benefit analysis. Lockdowns have caused trillions of dollars (and euros and yen and rubles and lira and what have you) of economic loss. Actions (such as Facebook’s censorship) that increase the likelihood of re-imposition of these lockdowns by even a small percentage can cause tens of billions, and perhaps trillions, in economic harm. (I recall Ronald Coase’s statement that an economist can pay for his lifetime salary by delaying the imposition of a bad regulation by even a day.)
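The arithmetic here is trivial but worth making explicit. A sketch, with all figures purely hypothetical:

```python
# Back-of-envelope version of the cost-benefit point (figures invented):
# if lockdowns cost on the order of trillions, even a tiny increase in
# the probability that they are re-imposed carries an enormous expected cost.

lockdown_cost = 2e12       # assumed economic loss from a re-imposed lockdown, $
delta_probability = 0.01   # assumed 1-point rise in re-imposition probability

expected_harm = delta_probability * lockdown_cost
# A one-percentage-point nudge to the probability is worth $20 billion
# in expected harm--"tens of billions" from a seemingly small distortion.
```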

What is the cost of requiring social media platforms to operate on a principle of non-discrimination, and therefore allow supposedly sentient beings to sift through competing claims, rather than substituting their own judgments? Judgments, I might add, that are hardly disinterested. Do you think for a moment that Facebook and the other social media giants have not benefited from having people stuck at home, with little to do?

This issue also speaks to my post from a few hours ago. Yes, Zuckerberg (and other decision makers at Facebook) and Jack (Chase the Dragon) Dorsey and Sundar Pichai are arguably enhancing profits through their censorship policies (by creating a bored group of consumers with too much time on their hands), but they are also indulging their own personal preferences: to the extent that the latter is true, they are violating Friedman’s injunction. Moreover, since they have largely made the state their creatures, they can enhance their power (and wealth) by exercising huge influence over the transmission of information, and hence over public debate, and do so in a way that enhances their power, profits, and the achievement of their ideological goals. (Unpacking all these things is not easy.)

Meaning that actual policy and regulation are likely to deviate grotesquely from any “public interest” standard. Public interest would dictate, at the very least, subjecting Facebook et al to very limited restrictions, such as non-discrimination requirements. Requirements that they operate as open platforms (which could benefit from network effects, btw) and not discriminate or censor on the basis of viewpoint. But for myriad reasons, these social media entities view such restrictions as anathema, and political economy therefore suggests that such restrictions will never be imposed.

Which makes it tragic, to say the least, that those who claim to advocate liberty shrink from constraining its most deadly enemies.


Milton Friedman vs. 21st Century Lilliputians on Corporate Responsibility

Filed under: Economics,Exchanges,Politics,Regulation — cpirrong @ 3:57 pm

This year marks the 50th anniversary of Milton Friedman’s article on the “social responsibility of business,” in which he argued that business has one responsibility: to maximize profit. The anniversary has unleashed numerous retrospectives, most of them negative, and most of the negative treatments being given by people who have to crane their necks to see the soles of Friedman’s intellectual shoes.

The criticisms can be grouped into two basic categories: (a) the need for “stakeholder capitalism” as opposed to shareholder capitalism, and (b) the need for corporations to work to achieve social goals, such as environmental objectives or racial justice.

Both criticisms are unavailing and unpersuasive. The stakeholder capitalism critique founders on the is-ought fallacy. The social goals criticism founders on the Knowledge Problem.

Stakeholder capitalism advocates elide the “is” and jump right to “ought.” This is a fatal intellectual error, well described by Chesterton’s Fence.

Put differently, before advocating stakeholder capitalism as a superior substitute to shareholder capitalism, it is wise to ask: if stakeholder capitalism is so great, why doesn’t it already exist?

After all, there are numerous alternative ways of organizing and governing the cooperation between and coordination of suppliers of inputs (one subset of “the stakeholders”) to produce output that is sold to consumers (another subset of “the stakeholders”). There are many ways of allocating control rights and cash-flow rights. The corporate form, which makes the shareholders the residual claimants with residual control rights, is just one. You can have sole proprietorships, partnerships, worker cooperatives, consumer cooperatives, mutual companies, and even anarcho-syndicalist worker communes.

Yet the corporate form that Friedman focuses on dominated then, and dominates today. It evidently conforms to the “survivorship principle” (a concept elucidated by Friedman’s partner in crime, George Stigler). That is, its dominance is consistent with its efficiency–its maximizing the size of the pie. (I recall a quote, which I thought was attributable to Bertrand Russell but which I cannot track down: “Efficiency is the highest form of altruism.”)

Moreover, this increasing the size of the pie effect must outweigh any distributive inequities “inherent in the system”: its survival means that no coalition of “stakeholders” (e.g., workers, or workers and customers, or workers and suppliers of capital) can make themselves better off by setting up an organization with different control and cash-flow rights than the shareholder corporation. Maybe the distribution of benefits within a corporation is inequitable, according to some theory of justice, but efficiency apparently trumps equity.

Henry Hansmann’s excellent book “The Ownership of Enterprise” examines various alternative organizational forms, and finds that the efficiency of different forms of organization (e.g., producer cooperative, mutual) depends on the fine details of the nature of the production and marketing processes, and in particular the effects of these on the costs of contracting. My paper on the organization of financial exchanges provides a very interesting example. Exchanges organized as non-profit mutuals (pretty close to what Dennis advocated) were efficient under one set of technological conditions (floor trading) but not another (electronic trading): when technology changed, organization changed. Almost immediately. (Cf. the wave of exchange demutualizations in the early-2000s.)

Put differently, if “stakeholder capitalism” (or anarcho-syndicalist communes) were so great, either on efficiency or distributive grounds, we would see it in an even moderately competitive environment. Its absence makes it clear that it ain’t so great.

Sorry, Dennis. With the exception of the exchange thing. For a while, anyways.

So what about broader “social” goals? In this regard, it’s well to remember Hayek’s injunction that the addition of “social” as a prefix to any concept, e.g., social justice, usually renders the concept meaningless, or at best confuses rather than clarifies. Moreover, it’s imperative to remember another Hayekian concept: the Knowledge Problem.

The very existence of costs (e.g., pollution) that are not amenable to contract, and hence supposedly require unilateral corporate action to address, demonstrates the enduring legacy of one of Friedman’s colleagues, Coase. The transactions costs of some corporate activities are clearly too high to mitigate efficiently via contract. So, apparently, CEOs are supposed to take an Olympian perspective and address these problems unilaterally.

That’s where the Knowledge Problem kicks in. Pray tell: where are CEOs supposed to get the information to lead them to make the appropriate trade-offs? The virtue of contract is that it provides a means of generating information about costs and benefits in order to make the altruistic–i.e., efficient–choice. But with “externalities,” contracting is a prohibitively expensive means of acquiring this information, and acting on it in an efficient way. So where does this information come from?

CEOs–and heaven forfend, their HR departments–are usually sufficiently arrogant to believe they know.

They’re wrong: pride goeth before the fall.

Meaning that corporate decisions made pursuant to environmental or “social justice” goals are certain to be wrong. Very wrong.

Funny, isn’t it, that those who fault corporations for decisions affecting those with whom they have contractual privity blithely assert that they should make decisions affecting those with whom they do not? This is intellectual incoherence of the highest order.

There is another allegedly anti-Friedman argument, raised by the likes of the FT’s Martin Wolf: Friedman argued that corporations should maximize profits subject to the rules of the game, but since corporations make the rules of the game, this argument has no force and indeed cuts the other direction.

For one thing, Wolf’s argument is superficial and conclusory: “I also increasingly realize that I have changed my mind because I no longer believe in the contractarian view of the firm: that it is merely an aggregate of voluntary contracts which reflect the freedom of individuals to choose.” OK, Marty, I guess you are such an Olympian figure that your opinion, unsupported by argument or evidence, should suffice to justify disregarding the contractarian perspective and proceeding to other considerations.

But more importantly, one of Wolf’s more substantive arguments has, well, more substance: corporations have undue influence over the political system, and therefore exert influence that results in the adoption of inefficient, and arguably inequitable, policies.

Well, yes. And Friedman would agree. Wolf’s criticism (and those of others making a similar point) of one Friedman article focused on one particular issue overlooks altogether other major–and indeed primary–streams of Friedman’s thought.

The reason that Friedman (and Stigler) opposed regulation and advocated small government was precisely that governments almost always advance special interests at the expense of efficiency and equity. Friedman always–always–asserted (justifiably) that he was NOT pro-big business. He was pro-market (which makes it particularly perverse that the Pro Market blog–which clearly steals from Friedman–repeatedly distributes garbage that traduces Friedman and others of his ilk, such as Aaron Director).

In this, Friedman was merely echoing Adam Smith–who never had a kind word to say about businessmen (a point that Stigler, the eminent Smith scholar of his era, made repeatedly).

Meaning that if your problem (and yeah, I’m looking at you Marty) is with undue corporate influence, rather than reshaping corporations in some way, maybe you should see that they are just responding rationally to the incentives inherent in a political system that gives the government almost unlimited authority to create rules that distribute rents.

That is, don’t limit corporations, limit governments. Corporations are just maze-bright rats. If you don’t want them gaming the maze, take away the cheese.

So to tar Friedman (and by extension other old-school Chicago types) with the brush of enabling corporations to write the rules of the game in their favor, is to ignore a major element of Friedman’s (and other old-school Chicago types’) worldview.

I’m also at a loss to figure out what particular changes to corporate organization and governance will miraculously transform them into more broad-minded entities that eschew exploiting the political system for their benefit. Look at Germany, or Japan, which have more “inclusive” models of governance, and which include other “stakeholders” in the formal governance process. Do you think they don’t influence the government to advance their interests? As. Fucking. If.

And if your response is: “give governments more power,” you are totally hopeless. Due to their comparative advantage in exercising influence over government–something that Wolf et al, in agreement with Friedman, believe–that will just give corporations more power to do harm, not less.

In sum, Friedman’s latter-day critics, conveniently arguing when he is in the grave, and therefore unable to demolish them (as he surely would), totally fail to come to grips with his arguments, and in particular the arguments of his entire body of work, not just one article. “Stakeholder capitalism” is a vapid, vaporous concept that fails to address Deirdre McCloskey’s pithy phrase: “if you’re so smart, why aren’t you rich?” Claims that corporations should adopt a new objective function that encompasses “social” objectives, and not just profit, founder on the Knowledge Problem. Defensible criticisms that corporations exploit the political system are not arguing against Friedman–they are agreeing with him, yet arriving at wrongheaded conclusions.

We should all wish that our current thoughts, when evaluated from the perspective of 50 years, hold up so well as Milton Friedman’s. I guarantee that that will not be said of anyone carping on him today.

Friedman was small in stature, but a giant Gulliver in thought. His Lilliputian critics today prove the point.


September 26, 2020

Water, Water, Not Everywhere and Still Not a Drop to Drink, Or, The Very Natural State

Filed under: Climate Change,Derivatives,Economics,Exchanges,Politics,Regulation — cpirrong @ 2:53 pm

The WSJ reports that the CME Group is launching a cash-settled futures contract on California water, with Nasdaq providing the cash price index. I predict, with a high degree of confidence, that this will not be a commercial success. That is, it will not generate substantial trading volume.

Why not? For the same reason that listed weather derivatives hardly ever trade. Information flow is a necessary (but not sufficient) condition to make people want to trade. For weather derivatives, there is very little information flow until shortly prior to the pricing month. For example, what information arrives between today and tomorrow that leads to updates in forecasts about what the weather in Chicago will be in December 2020, let alone December 2021? Virtually none. Given the nature of weather dynamics, information flow occurs almost exclusively quite close to the contract date (e.g., in late-November 2020 or 2021, if not in December itself). There is little information that arrives today that would motivate people to trade today contracts with payoffs contingent on future weather, even for a future only months away.

So they don’t.

I predict a similar phenomenon for water derivatives. Most of the fundamental shocks are weather-driven, and those will be concentrated close to the pricing month, leading to little demand to trade prior thereto.

Moreover, successful futures contracts rest on functional physical markets. As this recent article from The American Spectator summarizes, it is a travesty to characterize the means of allocating water in California as “a market.” Instead, it is an intensely politicized process.

If you don’t consider the AmSpec reliable, do a little digging into the scholarly literature about water allocation in the West, notably things written by my friend Gary Libecap. The conclusions are depressingly similar.

The politicization of water allocation is not new. It has existed since the beginning not just in California, but the West generally. Control of water confers enormous political power. You think politicians are going to give that up?

Again, this is not a new thing. Read up on the “California Water Wars.” Or, for a more entertaining take, watch Chinatown, which is a fictionalization/mythologization of the conflict of visions between William Mulholland and Frederick Eaton over water in Los Angeles. Spoiler: the romantic vision died (literally drowned), and the corrupt vision prevailed.

California politicians will become charismatic Catholics before they give up control over water. In a way, it reminds me of the effect of sanctions in, say, Saddam’s Iraq. Restrictions on supply resulting from sanctions empowered the regime. It could use its power to grant access to a vital resource in order to obtain obeisance. Similarly, California politicians can use their power to grant access to the vital resource of water to obtain political support, and exercise political power.

In a way, this is the quintessence of something I used to write about in regard to Russia: “the natural state.” Here, the analogy is even more trenchant, given that it relates to a natural resource.

The natural state operates by creating artificial scarcity, which in turn creates rents. The natural state allocates those rents in exchange for political patronage.

To do things that would undermine the rents–that is, to alleviate the scarcity–would undermine political power. That will NOT happen voluntarily. Markets for water would be a good thing–which is precisely why they don’t exist, and are unlikely to exist, especially in places like California where water is scarce and hence real markets would be most beneficial.

So CME/Nasdaq California water futures face two huge obstacles. First, even if even a simulacrum of a cash market for water existed, the nature of information flows is not conducive to active trading of water futures. Second, there is not even a simulacrum of a water market in California. What exists in place of a market is a political, and highly politicized, mechanism. That is also inimical to building a successful futures contract on top of it.

PS. Riffing off the Rime of the Ancient Mariner in the title provides an opportunity for another Python reference!



August 18, 2020

California: Boom, Boom, Out Go the Lights

Filed under: Climate Change,Economics,Energy,Houston,Politics,Regulation — cpirrong @ 6:44 pm

Twenty years ago, California experienced its Electricity Crisis. Or, given current events (which will be the subject of what follows), may be known as the First Electricity Crisis. The problem in 2000-2001 was, in the main, a problem of insufficient generation, caused by a variety of factors. The ramifications of the supply shortage and resulting high prices for California utilities, ratepayers, and state finances were greatly exacerbated by a dysfunctional market design implemented only a few years before, in the mid-1990s. (When I gave talks about the subject, I used to quip: “California wanted to deregulate its power markets in the worst way. And it succeeded!”)

The lore of the crisis is that it was caused by Enron and other Houston bandits and their manipulative schemes. But these schemes were not the cause of the crisis: they were the effect–an effect of the dysfunctional market design, which created massive arbitrage opportunities that will always be exploited.

California is experiencing another crisis. It cannot yet rival the first, which went on week after week, whereas the current one has lasted about a week. But for the first time since Crisis I, the state is experiencing rolling blackouts due to a shortage in generating capacity.

The proximate cause of the problem is a massive heatwave which is causing high demand. A contributing proximate cause is low hydroelectric supply driven by a lower than average snowpack. But the underlying cause–and the cause that should get the attention of most Americans, including those who experience schadenfreude at the Insufferable State’s misery–is the Green Mania that has taken root in California which has made it impossible for the state to respond to demand spikes in the way power systems have done around the world for nigh onto a century.

In particular, California has adopted policies intended to increase substantially the share of power generated by renewables. This has indeed resulted in massive investments in renewables, especially solar power, which alone now accounts for some 12,338 MW of capacity.

But this capacity number is deceiving, because unlike a nuclear or coal or combined cycle natural gas plant, this is not available 24/7. It’s available, wouldn’t you know, when the sun shines. Thus, during the mid-morning to late afternoon hours, this capacity is heavily utilized, but during the evening, night, and early morning contributes nothing to generation. At those times, California draws upon the old reliables.

But that creates two problems, a short term one (which California is experiencing now) and a long term one (which contributed to the current situation and will make recurrences a near certainty).

The short term problem is that during hot weather, demand does not set with the sun. Indeed, as this chart from the California Independent System Operator shows, today (as on prior days) demand has continued to grow while solar generation ebbs. This figure illustrates “net demand,” which is total demand net of renewables generation. Notice the large and steady increase in net demand during the late afternoon hours. This reflects a rise in consumption that is not matched by a rise in solar generation, which plateaus before 1400 and falls thereafter.

Go figure, right? Who knew that the hottest time of day wasn’t when the sun is at its height, or that people tend to come home (and crank up the AC) when the sun is going down?

Here’s the plot of renewables generation:

Note the plateau from around 1000-1400, and the decline from 1400 onwards–during which time load increased by about 10,000 MW.
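The net demand arithmetic behind those charts is simple to sketch. Here is a hypothetical hourly profile (all numbers invented, merely shaped like a hot California day, and not actual CAISO data):

```python
# Hypothetical hourly profile illustrating "net demand": total load minus
# renewable (here, solar) generation. Load peaks in the early evening;
# solar plateaus midday and collapses to zero by nightfall.

hours = list(range(24))
load = [28, 27, 26, 26, 27, 29, 32, 35, 38, 40, 42, 44,
        45, 46, 47, 48, 49, 50, 49, 47, 43, 38, 33, 30]   # GW, invented
solar = [0, 0, 0, 0, 0, 0, 1, 4, 8, 10, 11, 12,
         12, 12, 11, 9, 6, 3, 1, 0, 0, 0, 0, 0]           # GW, invented

# Net demand is what conventional (dispatchable) plants must cover.
net_demand = [l - s for l, s in zip(load, solar)]

# Net demand peaks in the early evening, after solar output has already
# fallen away--the "duck curve" ramp that dispatchable capacity must meet.
peak_net_hour = max(hours, key=lambda h: net_demand[h])
```

With this made-up profile, gross load peaks at 1700 but net demand peaks at 1800, after solar has nearly vanished, which is exactly the mismatch described above.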

So gas, nuclear, and (heaven forfend!) coal have to fill the growing gap between load and non-dispatchable renewable generation. They have to supply the net demand. Which brings us to the longer term problem.

The growth in solar generation means that conventional and nuclear plants aren’t generating much power, and prices are low, during the hours when solar generation is large. Thus, these plants earn relatively little revenue (and may even operate at negative margins) during these hours. This deterioration in the economics of operating conventional plants, combined with regulatory and political disdain for nuclear and coal has led to the exit of substantial capacity in California. A large nuke plant shut down in 2015, all 10 coal plants in the state have shut down (though three have converted to the environmental disaster that is biomass), as have many gas plants. In 2018 alone, there was a net loss of around 1500 MW of gas capacity, and from 2013 the net loss is about 5000 MW–over 10 percent of the 2013 level. (NB: the shortfall in capacity the last few days has been around 5000MW. Just sayin’.)

And note–demand has been rising over this period.

Notionally, the loss in nuclear and conventional capacity has been roughly matched by the increase in solar capacity. But again–that solar capacity is not available under conditions like the state has experienced over recent days, with hot weather contributing to high and rising demand in the late afternoon when solar output is declining. That is, these forms of capacity are very imperfect substitutes. They are most imperfect in the afternoons on very hot days. Like the last week.

In a nutshell, at the same time it massively incentivized investment in renewables, California has not incentivized the necessary investment in (or retention of capacity in) conventional generation. That mismatch in incentives, and the behavior that results from those incentives, means that from time to time California will have inadequate generation. That is, California has not incentivized the proper mix of generation.

So how do you incentivize the retention of/investment in conventional capacity that will remain idle or highly underutilized most of the time, in order to accommodate the desire to increase renewables generation? There are basically two ways.

The first way is to have really, really high prices during times like this. Generators will make little money (or lose money) most of the time, and pay for themselves by making YUGE amounts of money during a few days or hours. This is the theory behind “energy only” markets (like ERCOT).
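The arithmetic here is stark. A back-of-the-envelope Python sketch (cost figures invented for illustration) shows the scarcity-hour price a peaker needs if it must recover its entire fixed cost in a handful of hours per year:

```python
# Back-of-the-envelope "energy only" market economics: a peaker that runs
# only in scarcity hours must recover all of its fixed costs from those
# few hours. All cost figures below are illustrative assumptions.

def breakeven_scarcity_price(fixed_cost_per_mw_year, scarcity_hours, marginal_cost):
    """Price ($/MWh) needed in scarcity hours to cover annual fixed costs."""
    return marginal_cost + fixed_cost_per_mw_year / scarcity_hours

# Assume $90,000/MW-year fixed cost and $40/MWh marginal (fuel) cost.
for hours in (300, 100, 30):
    p = breakeven_scarcity_price(90_000, hours, 40)
    print(f"{hours:4d} scarcity hours/year -> ${p:,.0f}/MWh")
# The fewer the scarcity hours, the more stratospheric the price must be,
# which is exactly the price regulators cannot credibly commit to allow.
```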

The problem is that it is not credible for regulators to commit to allowing stratospheric prices to occur. There will be screams of price gouging, monopoly, etc., and massive political pressures to claw back the high revenues. This happened after Crisis I, as more than a decade of litigation, and the payment of billions by generators, shows. Once burned, twice shy: generators will be leery indeed about relying on government promises. (A David Allan Coe song comes to mind, but I'll leave that to your imagination, memory, or Googling skills.)

Relatedly, who pays the high prices? Having retail customers see the actual price creates some operational problems, but the main problem is again political. So the high prices have to be recovered through regulated retail pricing mechanisms that give rise to the credible commitment problem: how can generators be sure that regulators will actually permit them to reap the high prices during tight times that are necessary to make it worthwhile to maintain the capacity?

That is, for a variety of reasons energy only pricing faces a time consistency problem, and as a result there will be underinvestment in generation, especially when renewables are heavily supported/subsidized, thereby reducing the number of hours that generators can pay for themselves.

The other way is the Klassic Kludge: Kapacity markets. Regulators attempt to forecast into the future how much capacity will be needed, and mandate investment in that amount of capacity. Those with load serving obligations must pay to buy the capacity, usually through an auction mechanism. The idea being that the market clearing price in this market will incentivize investment in the capacity level mandated by the regulators.
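A uniform-price capacity auction works roughly like this Python sketch (offers and target are hypothetical): stack offers cheapest-first until the mandated capacity target is met, and pay every cleared megawatt the marginal offer's price.

```python
# Sketch of a uniform-price capacity auction: regulators mandate a capacity
# target; offers are stacked cheapest-first until the target is met, and
# every cleared MW is paid the marginal (last-accepted) offer's price.
# Offers and target below are hypothetical.

def clear_capacity_auction(offers, target_mw):
    """offers: list of (price_per_mw_year, mw). Returns (clearing_price, cleared_mw)."""
    cleared = 0
    for price, mw in sorted(offers):  # merit order: cheapest capacity first
        cleared += mw
        if cleared >= target_mw:
            return price, target_mw
    raise ValueError("insufficient capacity offered to meet the mandate")

offers = [(50_000, 2000), (30_000, 3000), (80_000, 2500), (60_000, 1500)]
print(clear_capacity_auction(offers, 6000))
# The 60,000 $/MW-year offer is the marginal one, so it sets the price
# paid to all 6,000 MW of cleared capacity.
```

The kludginess lies not in the clearing mechanics, which are trivial, but in the regulators' forecast of `target_mw`.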

A Kalifornia Kapacity Kludge was proposed a few years back, but the Federal Energy Regulatory Commission shot it down.

All meaning that California leapt headlong into the Brave New Green World without the market mechanisms (either relatively pure, like an energy only market with unfettered prices, or a kludge like a capacity market) necessary to bridge the gap between demand and renewables supply.

So what happens? This happens:

California’s political dysfunction makes it a near certainty that it will not implement reasonable market solutions that will provide the right incentives, even conditional on its support for renewables. Indeed, it is almost certain that it will do something that will make things worse.

Milton Friedman once said that inflation is always and everywhere a monetary phenomenon. Given that the major power crises in recent years–in California, in Australia, and a near miss in Texas last year–have involved renewables in one way or another, I have an analog to Friedman’s statement: in the future, always and everywhere power crises will be a renewables phenomenon.

And this is why Americans should pay heed. Whatever ventriloquist has his hand up the back of Biden's shirt has him promising a massive transition towards renewable electricity generation, beyond the already swollen levels (swollen by years and billions of subsidies). A vision which, if realized, would result in California's problems being all of our problem.

So look at California like Scrooge did the Ghost of Christmas Future. And be afraid. Be very afraid.


May 20, 2020

Whoops! WTI Didn’t Do It Again, or, Lightning Strikes Once

The June 2020 WTI contract expired with a whimper rather than a bang yesterday, thereby not repeating the cluster of the May contract expiry. In contrast to the back-to-back 40 standard deviation moves in April, June prices exhibited little volatility Monday or Tuesday. Moreover, calendar spreads were in a modest contango–in contrast to the galactangos experienced in April, and prices never got within miles of negative territory.

Stronger fundamentals certainly played a role in this uneventful expiry. Glimmers of rebounding demand, and sharp supply reductions, both in the US and internationally, caused a substantial rally in flat prices and a tightening of spreads in the first weeks of May. This alleviated fears about exhaustion of storage capacity. Indeed, the last EIA storage number for Cushing showed a draw, and today's API number suggests an even bigger draw this week. (Though I must say I am skeptical about the forecasting power of API numbers.) Also, the number of crude carriers chartered for storage has dropped. (H/T my daughter's market commentary from yesterday.) So the dire fundamental conditions that set the stage for that storm of negativity were not nearly so dire this week.

But remember that fundamentals only set the stage. As I pointed out in my posts in the immediate aftermath of the April chaos, technical factors related to the liquidation of the May contract, arguably manipulative in nature, were the ultimate cause of the huge price drop on the penultimate trading day, and the almost equally large rebound on the expiry day.

The CFTC read the riot act in a letter to exchanges, clearinghouses, and FCMs last week. No doubt the CME, despite its Frank Drebin-like "move on, nothing to see here" response to the May expiry, monitored the June expiration closely, and put a lot of pressure on those with open short positions to bid the market aggressively (e.g., bid at reasonable differentials to Brent futures and cash market prices). A combination of that pressure, plus the self-protective measures of market participants who didn't want to get caught in another catastrophe, clearly led to earlier liquidations: open interest going into the last couple of days was well below the level at a comparable date in the May cycle.

So fundamentals, plus everyone being on their best behavior, prevented a recurrence of the May fiasco.

It should be noted that as bad as April 20 was (and April 21, too), the carnage was not confined to those days, or to the May contract alone. The negative price shock, and its potentially disastrous consequences for "fully collateralized" long-only funds, like the USO, led to substantial early rolls of long positions in the June contract during the last days of April. Given the already thin liquidity in the market, these rolls caused big movements in calendar spreads–movements that have been completely reversed. On 27 April, the MN0 spread was -$14.45: it went off the board at a 54 cent backwardation. Yes, fundamentals were a major driver of that tightening, but the early roll by the USO (and some other funds), triggered by the May expiration, clearly exacerbated the contango. Collateral damage, as it were.

What is the takeaway from all this? Well, I think the major takeaway is not to overgeneralize from what happened on 20-21 April. The underlying fundamentals were truly exceptional (unprecedented, really)–and hopefully the likelihood of a repeat of those is vanishingly small. Moreover, the CME should be on alert for any future liquidation-related game playing, and market players will no doubt be more cautious in their approach to expiration. It would definitely be overlearning from the episode to draw expansive conclusions about the overall viability of the WTI contract, or its basic delivery mechanism.

That mechanism is supported by abundant physical supplies and connections to diverse production and consumption regions. Indeed, this was a situation where the problem was extremely abundant supply–which is an extreme rarity in physical commodity futures markets. Other contracts (Brent in particular) have chronic problems with inadequate and declining supply. As for WTI being “landlocked,” er, there are pipelines connecting Cushing to the Gulf, and WTI from Cushing has been exported around the world in recent years. With the marginal barrel going for export, seaborne crude prices drive WTI. With a better-monitored and managed liquidation process, especially in extraordinary circumstances, the WTI delivery mechanism is pretty good. And I say that as someone who has studied delivery mechanisms for around 30 years, and has designed or consulted on the design of these contracts.


May 14, 2020

Strange New Respect

Filed under: Climate Change,CoronaCrisis,Economics,Energy,Politics,Regulation,Tesla — cpirrong @ 5:50 pm

The past few weeks have brought pleasant surprises from people whom I usually disagree with and/or dislike.

For one, Michael Moore, the executive producer of Planet of the Humans. Moore does not appear on camera: that falls to Jeff Gibbs and (producer) Ozzie Zehner. The main virtue of the film is its evisceration of "green energy," including wind and solar. It notes repeatedly that the unreliability of these sources of power makes them dependent on fossil fuel generation, and in some cases results in the consumption of more fossil fuels than would be the case if the renewables did not exist at all. Further, it points out–vividly–the dirty processes involved in creating wind and solar, most notably mining. The issues of disposing of derelict wind and solar facilities are touched on too, though that could have been beefed up some.

If you know about wind and solar, these things are hardly news to you. But for environmentalists to acknowledge that reality, and criticize green icons for perpetrating frauds in promoting these wildly inefficient forms of energy, is news.

The most important part of the film is its brutal look at biomass. It makes two points. First, that although green power advocates usually talk about wind and solar, much of the actual "renewable" energy is produced by biomass, e.g., burning woodchips. In other words, it exposes the bait-and-switch hucksterism behind a lot of green energy promotion. You thought you were getting windmills? Sucker: you're getting plants that burn down forests. You fucked up! You trusted us!

Second, that biomass is hardly renewable (hence the quote marks above), and results in huge environmental damage. Yes, trees can regrow, but not as fast as biomass plants burn them. Moreover, the destruction of forests is truly devastating to wildlife and to irreplaceable habitats, and to the ostensible purpose of renewables–reduction of CO2.

The film also points out the massive corporate involvement in green energy, and this represents its weakest point. Corporations, like bank robbers, go where the money is. But that begs the question: Why is there money in horribly inefficient renewables? Answer: Because of government subsidies.

Alas, the movie only touches briefly on this reality. Perhaps that is a bridge too far for socialists like Moore. But if he (and Gibbs and Zehner) really want to stop what they rightly view as the environmental and economic folly of renewables, they have to turn off the money tap. That requires attacking the government-corporate-environmentalist iron triangle on all three sides, not just two.

I am not a believer in the underlying premise of the movie, viz., that there are too many people consuming too much stuff, and if we don’t reduce people and how much they consume, the planet will collapse. That’s a dubious neo-Malthusian mindset. But put that aside. It’s a great thing that even hard core environmentalists call bull on the monstrosity that is green/renewable energy, and point out the hypocrisy and fundamental dishonesty of those who hype it.

My second candidate is long-time target Elon Musk. He has come out as a vocal opponent to lockdowns, and a vocal advocate for liberty.

Now I know that Elon is talking his book. Especially with competitors starting up their plants in the Midwest, the lockdown in California that has idled Musk's Fremont manufacturing facility is costing Tesla money. But whatever. The point is that his forceful criticism of the huge economic costs of lockdowns, and of their immense detrimental impact on personal liberty, earns him some newfound respect, strange or otherwise.

Lastly, Angela Merkel. She has taken a much more balanced approach to Covid-19 than most other national leaders. Perhaps most importantly, she has clearly been trying to navigate the tradeoff between health, economic well-being, and liberty. Rather than moving the goalposts once previous criteria for evaluating lockdowns had been met, when it became clear that the epidemic was not as severe in Germany as had been feared, that the economic consequences were huge, and that children were neither likely sufferers nor spreaders, she pivoted to reopening quickly and pretty rationally.

The same cannot be said in other major countries, including the UK and France as notable examples. She comes off well in comparison to Trump, although the comparison is not completely fair. Trump only has the bully pulpit to work with, for one thing: actual power is wielded by governors. But Trump’s use of the bully pulpit has been poor. Moreover, he has deferred far too much to execrable “experts,” most notably the slippery Dr. Fauci, who has been on the opposite sides of every policy decision (Masks? Yes! Masks? No! Crisis? Yes! Crisis? No!), is utterly incapable of and in fact disdainful of balancing health vs. economics and liberty, and who brings to the table a record of failure that Neil Ferguson could envy, for its duration if nothing else. The Peter Principle personified: he is clearly at the level of his incompetence, and due to the perversity of government, has remained at that level for decades.

Merkel’s performance is particularly outstanding when compared to those who wield the real power in the current crisis, American governors, especially those like Whitmer, Pritzker, Evers, Walz, Brown, Wolf, Cuomo, Murphy, Northam, and Newsom. These people are goalpost movers par excellence, and quite clearly find the unfettered exercise of power to be orgasmic.

It is embarrassing in the extreme to see the Germans–the Germans–be far more solicitous of freedom and choice than elected American officials, who seem to treat freedom–including the freedom to earn a livelihood–as an outrageous intrusion on their power and amour-propre.

Will this represent the new normal? Will SWP props for Moore, Merkel, and Musk become routine in the post- (hopefully) Covid era? I doubt it, but for today, I’m happy to give credit where credit is due.


May 11, 2020

Imperial Should Have Called Winston Wolf

Filed under: CoronaCrisis,Economics,Politics,Regulation — cpirrong @ 3:09 pm

In the film Pulp Fiction, moronic hoodlums Jules (Samuel L. Jackson) and Vincent (John Travolta) pick up a guy who had stolen a briefcase from the back of their boss Marcellus Wallace’s car. While driving him away, Vincent accidentally shoots him, leaving the back of the car splattered with blood and brains. In a panic, they drive to friend Jimmy Dimmick’s (Quentin Tarantino’s) house. Dimmick tells them his wife will be home in an hour and they can’t stay. In a panic they call Wallace, who calls in Winston Wolf. Wolf says: “It’s an hour away. I’ll be there in 10 minutes.” In 9 minutes and 37 seconds, Wolf’s car squeals to a halt in front of Jimmy’s house. Wolf rings the doorbell, and when Jimmy answers, Wolf says: “I’m Winston Wolf. I solve problems.” Within 40 minutes, Wolf solves Jules’ and Vincent’s problem. The car is cleaned up, with the body in the trunk, ready to be driven to the wrecking yard to be crushed.

The Imperial team that relied on Microsoft/Github to fix its code should have called Winston Wolf instead, because MS/Github left behind some rather messy evidence. “Sue Denim,” who wrote the code analysis I linked to yesterday, has a follow up describing what Not Winston Wolf left behind:

The hidden history. Someone realised they could unexpectedly recover parts of the deleted history from GitHub, meaning we now have an audit log of changes dating back to April 1st. This is still not exactly the original code Ferguson ran, but it’s significantly closer.

Sadly it shows that Imperial have been making some false statements.

I don’t quite know what to make of this. Originally I thought these claims were a result of the academics not understanding the tools they’re working with, but the Microsoft employees helping them are actually employees of a recently acquired company: GitHub. GitHub is the service they’re using to distribute the source code and files. To defend this I’d have to argue that GitHub employees don’t understand how to use GitHub, which is implausible.

I don’t think anyone involved here has any ill intent, but it seems via a chain of innocent yet compounding errors – likely trying to avoid exactly the kind of peer review they’re now getting – they have ended up making false claims in public about their work.

My favorite one is “a fix for a critical error in the random number generator.” In 2020? WTF? I remember reading in 1987 in the book Numerical Recipes by William H. Press, Saul A. Teukolsky, William T. Vetterling and Brian P. Flannery a statement to the effect that libraries could be filled with papers based on faulty random number generation. (I’d give you the exact quote, but the first edition that I used is in my office, which I cannot access right now. Why is that, I wonder?) And they were using a defective RNG 33 years later? Really?
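For the young ’uns: the canonical cautionary tale here is IBM’s infamous RANDU generator, exactly the sort of thing Numerical Recipes was warning about. A short Python sketch exposes the defect: every triple of successive RANDU outputs satisfies an exact linear relation, so the “random” points all lie on a handful of planes in 3-D space.

```python
# IBM's RANDU: x_{n+1} = 65539 * x_n mod 2^31. Because 65539 = 2^16 + 3,
# squaring the multiplier mod 2^31 gives x_{n+2} = 6*x_{n+1} - 9*x_n
# (mod 2^31) exactly, so all output triples fall on 15 planes in 3-D.

def randu(seed, n):
    """Generate n RANDU outputs from the given (odd) seed."""
    out, x = [], seed
    for _ in range(n):
        x = (65539 * x) % 2**31
        out.append(x)
    return out

xs = randu(seed=1, n=1000)
residuals = [(xs[i + 2] - 6 * xs[i + 1] + 9 * xs[i]) % 2**31
             for i in range(len(xs) - 2)]
print(all(r == 0 for r in residuals))
# True: every triple obeys the linear relation, i.e., no randomness in 3-D.
```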

“Algorithmic errors” is another eye popper. The algorithms weren’t doing what they were supposed to?

Read the rest. And maybe you’ll conclude that this was a mess that even Winston Wolf could have cleaned up in 40 days, let alone 40 minutes.


May 10, 2020

Code Violation: Other Than That, How Was the Play, Mrs. Lincoln?

Filed under: CoronaCrisis,Economics,Politics,Regulation — cpirrong @ 3:03 pm

By far the most important model in the world has been the Imperial College epidemiological model. Largely on the basis of the predictions of this model, nations have been locked down. The UK had been planning to follow a strategy very similar to Sweden’s until the Imperial model stampeded the media, and then the government, into a panic. Imperial predictions regarding the US also contributed to the panicdemic in the US.

These predictions have proved to be farcically wrong, with death tolls exaggerated by one, and perhaps two, orders of magnitude.

Models only become science when tested against data/experiment. By that standard, the Imperial College model failed spectacularly.

Whoops! What’s a few trillions of dollars, right?

I was suspicious of this model from the first. Not only because of its doomsday predictions and the failures of previous models produced by Imperial and the leader of its team, Neil Ferguson. But because of my general skepticism about big models (as @soncharm used to say, “all large calculations are wrong”), and most importantly, because Imperial failed to disclose its code. That is a HUGE red flag. Why were they hiding?

And how right that was. A version of the code has been released, and it is a hot mess. It has more bugs than East Africa does right now.

This is one code review. Biggest take away: due to bugs in the code, the model results are not reproducible. The code itself introduces random variation in the model. That means that runs with the same inputs generate different outputs.

Are you fucking kidding me?

Reproducibility is the essence of science. A model whose predictions cannot be reproduced, let alone empirical results based on that model, is so much crap. It is the antithesis of science.
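And reproducibility in stochastic simulation is trivially achievable. A toy Python sketch (a made-up epidemic recursion, emphatically not the Imperial model): seed the generator, keep it private to the run, and two runs with the same inputs are bit-for-bit identical. That is the bar the Imperial code failed to clear.

```python
# A properly written stochastic simulation is deterministic given its seed:
# same seed, same trajectory, every time. Toy model, invented parameters.

import random

def toy_epidemic(seed, days=30, infected=100):
    rng = random.Random(seed)  # private, seeded RNG: no shared global state
    history = []
    for _ in range(days):
        # Each infected person infects someone with probability 0.1;
        # 5% of the infected recover each day. Purely illustrative.
        new = sum(1 for _ in range(infected) if rng.random() < 0.1)
        infected = infected + new - int(infected * 0.05)
        history.append(infected)
    return history

run1 = toy_epidemic(seed=42)
run2 = toy_epidemic(seed=42)
print(run1 == run2)  # True: identical seeds yield identical output
```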

After tweeting about the code review article linked above, I received feedback from other individuals with domain expertise who had reviewed the code. They concur, and if anything, the article understates the problems.

Here’s one article by an interlocutor:

The Covid-19 function variations aren’t stochastic. They’re a bug caused by poor management of threads in the code. This causes a random variation, so multiple runs give different results. The response from the team at Imperial is that they run it multiple times and take an average. But this is wrong. Because the results should be identical each time. Including the buggy results as well as the correct ones means that the results are an average of the correct and the buggy ones. And so wouldn’t match the expected results if you did the same calculation by hand.

As an aside, we can’t even do the calculations by hand, because there is no specification for the function, so whether the code is even doing what it is supposed to do is impossible to tell. We should be able to take the specification and write our own tests and check the results. Without that, the code is worthless.

I repeat: “the code is worthless.”

Another correspondent confirmed the evaluations of the bugginess of the code, and added an important detail about the underlying model itself:

I spent 3 days reviewing his code last week. It’s an ugly mess of thousands of lines of C (not C++). There are hundreds of input parameters (not counting the fact it models population density to 1km x 1km cells) and 4 different infection mechanisms. It made me feel quite ill.

Hundreds of input parameters–another huge red flag. I replied:

How do you estimate 100s of parameters? Sounds like a climate model . . . .

The response:

Yes. It shares the exact same philosophy as a GCM – model everything, but badly.

I recalled a saying of von Neumann: “With four parameters I can fit an elephant, with five I can make him wiggle his trunk.” Any highly parameterized model is IMMEDIATELY suspect. With so many parameters–hundreds!–overfitting is a massive problem. Moreover, you are highly unlikely to have the data to estimate these parameters, so some are inevitably set a priori. This high dimensionality means that you have no clue whatsoever what is driving your results.
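To see how cheap a “perfect fit” is, here is a toy Python illustration (data invented): Lagrange interpolation threads a five-parameter polynomial exactly through five observations, then extrapolates to nonsense. Scale that up to hundreds of parameters and you have von Neumann’s elephant, trunk and all.

```python
# Von Neumann's elephant in miniature: with as many free parameters as
# data points, any dataset is "fit" perfectly, and out-of-sample
# predictions are garbage. Data points below are made up.

def lagrange(xs, ys, x):
    """Value at x of the unique polynomial through the points (xs, ys)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 0.5, 1.5, 1.0, 2.0]   # observations bounded between 0.5 and 2

perfect = all(abs(lagrange(xs, ys, x) - y) < 1e-9 for x, y in zip(xs, ys))
print(perfect)                   # True: zero in-sample error...
print(lagrange(xs, ys, 8.0))     # ...and a wildly implausible extrapolation
```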

This relates to another comment:

No discussion of comparative statics.

So again, you have no idea what is driving the results, and how changes in the inputs or parameters will change predictions. So how do you use such a model to devise policies, which is inherently an exercise in comparative statics? So as not to leave you in suspense: YOU CAN’T.

This is particularly damning:

And also the time resolution. The infection model time steps are 6 hours. I think these models are designed more for CYA. It’s bottom-up micro-modelling which is easier to explain and justify to politicos than a more physically realistic macro level model with fewer parameters.

To summarize: these models are absolute crap. Bad code. Bad methodology. Farcical results.

Other than that, how was the play, Mrs. Lincoln?

But it gets better!

The code that was reviewed in the first-linked article . . . had been cleaned up! It’s not the actual code used to make the original predictions. Instead, people from Microsoft spent a month trying to fix it–and it was still as buggy as Kenya. (I note in passing that Bill Gates is a major encourager of panic and lockdown, so the participation of a Microsoft team here is quite telling.)

The code was originally in C, and then upgraded to C++. Well, it could be worse. It could have been Cobol or Fortran–though one of those reviewing the code suggested: “Much of the code consists of formulas for which no purpose is given. John Carmack (a legendary video-game programmer) surmised that some of the code might have been automatically translated from FORTRAN some years ago.”

All in all, this appears to be the epitome of bad modeling and coding practice. Code that grew like weeds over years. Code lacking adequate documentation and version control. Code based on overcomplicated and essentially untestable models.

But it gets even better! The leader of the Imperial team, the aforementioned Ferguson, was caught with his pants down–literally–canoodling with his (married) girlfriend in violation of the lockdown rules for which HE was largely responsible. This story gave verisimilitude to my tweet of several days before that story broke:

It would be funny, if the cost–in lives and livelihoods irreparably damaged, and in lives lost–weren’t so huge.

And on such completely defective foundations policy castles have been built. Policies that have turned the world upside down.

Of course I blame Ferguson and Imperial. But the UK government also deserves severe criticism. How could they spend vast sums on a model, and base policies on a model, that was fundamentally and irretrievably flawed? How could they permit Imperial to make its Wizard of Oz pronouncements without requiring a release of the code that would allow knowledgeable people to look behind the curtain? They should have had experienced coders and software engineers and modelers go over this with a fine-tooth comb. But they didn’t. They accepted the authority of the Pants-less Wizard.

And how could American policymakers base any decision–even in the slightest–on the basis of a pig in a poke? (And saying that it is as ugly as a pig is a grave insult to pigs.)

If this doesn’t make you angry, you are incapable of anger. Or you are an idiot. There is no third choice.

