Streetwise Professor

May 20, 2020

Whoops! WTI Didn’t Do It Again, or, Lightning Strikes Once

The June 2020 WTI contract expired with a whimper rather than a bang yesterday, thereby not repeating the cluster of the May contract expiry. In contrast to the back-to-back 40 standard deviation moves in April, June prices exhibited little volatility Monday or Tuesday. Moreover, calendar spreads were in a modest contango–in contrast to the galactangos experienced in April–and prices never got within miles of negative territory.

Stronger fundamentals certainly played a role in this uneventful expiry. Glimmers of rebounding demand, and sharp supply reductions, both in the US and internationally, caused a substantial rally in flat prices and tightening of spreads in the first weeks of May. This alleviated fears about exhaustion of storage capacity. Indeed, the last EIA storage number for Cushing showed a draw, and today’s API number suggests an even bigger draw this week. (Though I must say I am skeptical about the forecasting power of API numbers.) Also, the number of crude carriers chartered for storage has dropped. (H/T my daughter’s market commentary from yesterday.) So the dire fundamental conditions that set the stage for that storm of negativity were not nearly so dire this week.

But remember that fundamentals only set the stage. As I pointed out in my posts in the immediate aftermath of the April chaos, technical factors related to the liquidation of the May contract, arguably manipulative in nature, were the ultimate cause of the huge price drop on the penultimate trading day, and of the almost equally large rebound on the expiry day.

The CFTC read the riot act in a letter to exchanges, clearinghouses, and FCMs last week. No doubt the CME, despite its Frank Drebin-like “move on, nothing to see here” response to the May expiry, monitored the June expiration closely, and put a lot of pressure on those with open short positions to bid the market aggressively (e.g., bid at reasonable differentials to Brent futures and cash market prices). A combination of that pressure, plus the self-protective measures of market participants who didn’t want to get caught in another catastrophe, clearly led to earlier liquidations: open interest going into the last couple of days was well below the level at a comparable date in the May cycle.

So fundamentals, plus everyone being on their best behavior, prevented a recurrence of the May fiasco.

It should be noted that as bad as April 20 was (and April 21, too), the carnage was not confined to those days, or to the May contract alone. The negative price shock, and its potentially disastrous consequences for “fully collateralized” long-only funds like the USO, led to substantial early rolls of long positions in the June contract during the last days of April. Given the already thin liquidity in the market, these rolls caused big movements in calendar spreads–movements that have since been completely reversed. On 27 April, the MN0 spread was -$14.45: it went off the board at a 54 cent backwardation. Yes, fundamentals were a major driver of that tightening, but the early roll by the USO (and some other funds), triggered by the May expiration, clearly exacerbated the contango. Collateral damage, as it were.

What is the takeaway from all this? Well, I think the major takeaway is not to overgeneralize from what happened on 20-21 April. The underlying fundamentals were truly exceptional (unprecedented, really)–and hopefully the likelihood of a repeat of those is vanishingly small. Moreover, the CME should be on alert for any future liquidation-related game playing, and market players will no doubt be more cautious in their approach to expiration. It would definitely be overlearning from the episode to draw expansive conclusions about the overall viability of the WTI contract, or its basic delivery mechanism.

That mechanism is supported by abundant physical supplies and connections to diverse production and consumption regions. Indeed, this was a situation where the problem was extremely abundant supply–which is an extreme rarity in physical commodity futures markets. Other contracts (Brent in particular) have chronic problems with inadequate and declining supply. As for WTI being “landlocked,” er, there are pipelines connecting Cushing to the Gulf, and WTI from Cushing has been exported around the world in recent years. With the marginal barrel going for export, seaborne crude prices drive WTI. With a better-monitored and managed liquidation process, especially in extraordinary circumstances, the WTI delivery mechanism is pretty good. And I say that as someone who has studied delivery mechanisms for around 30 years, and has designed or consulted on the design of these contracts.


May 14, 2020

Strange New Respect

Filed under: Climate Change,CoronaCrisis,Economics,Energy,Politics,Regulation,Tesla — cpirrong @ 5:50 pm

The past few weeks have brought pleasant surprises from people whom I usually disagree with and/or dislike.

For one, Michael Moore, the executive producer of Planet of the Humans. Moore does not appear on camera: that falls to Jeff Gibbs and (producer) Ozzie Zehner. The main virtue of the film is its evisceration of “green energy,” including wind and solar. It notes repeatedly that the unreliability of these sources of power makes them dependent on fossil fuel generation, and in some cases results in the consumption of more fossil fuels than would be the case if the renewables did not exist at all. Further, it points out–vividly–the dirty processes involved in creating wind and solar, most notably mining. The issues of disposing of derelict wind and solar facilities are touched on too, though that could have been beefed up some.

If you know about wind and solar, these things are hardly news to you. But for environmentalists to acknowledge that reality, and criticize green icons for perpetrating frauds in promoting these wildly inefficient forms of energy, is news.

The most important part of the film is its brutal look at biomass. It makes two points. First, that although green power advocates usually talk about wind and solar, much of the actual “renewable” energy is produced by biomass, e.g., burning woodchips. In other words, it exposes the bait-and-switch hucksterism behind a lot of green energy promotion. You thought you were getting windmills? Sucker: you’re getting plants that burn down forests. You fucked up! You trusted us!

Second, that biomass is hardly renewable (hence the quote marks above), and results in huge environmental damage. Yes, trees can regrow, but not as fast as biomass plants burn them. Moreover, the destruction of forests is truly devastating to wildlife and to irreplaceable habitats, and to the ostensible purpose of renewables–reduction of CO2.

The film also points out the massive corporate involvement in green energy, and this represents its weakest point. Corporations, like bank robbers, go where the money is. But that begs the question: Why is there money in horribly inefficient renewables? Answer: Because of government subsidies.

Alas, the movie only touches briefly on this reality. Perhaps that is a bridge too far for socialists like Moore. But if he (and Gibbs and Zehner) really want to stop what they rightly view as the environmental and economic folly of renewables, they have to turn off the money tap. That requires attacking the government-corporate-environmentalist iron triangle on all three sides, not just two.

I am not a believer in the underlying premise of the movie, viz., that there are too many people consuming too much stuff, and if we don’t reduce people and how much they consume, the planet will collapse. That’s a dubious neo-Malthusian mindset. But put that aside. It’s a great thing that even hard core environmentalists call bull on the monstrosity that is green/renewable energy, and point out the hypocrisy and fundamental dishonesty of those who hype it.

My second candidate is long-time target Elon Musk. He has come out as a vocal opponent of lockdowns, and a vocal advocate for liberty.

Now I know that Elon is talking his book. Especially with competitors starting up their plants in the Midwest, the lockdown in California that has idled Musk’s Fremont manufacturing facility is costing Tesla money. But whatever. The point is that he is forcefully pointing out the huge economic costs of lockdowns and their immense detrimental impact on personal liberty, and that earns him some newfound respect, strange or otherwise.

Lastly, Angela Merkel. She has taken a much more balanced approach to Covid-19 than most other national leaders. Perhaps most importantly, she has clearly been trying to navigate the tradeoff between health, economic well-being, and liberty. Rather than moving the goalposts once previous criteria for ending lockdowns had been met, when it became clear that the epidemic was not as severe in Germany as had been feared, that the economic consequences were huge, and that children were neither potential sufferers nor spreaders, she pivoted to reopening quickly and pretty rationally.

The same cannot be said in other major countries, including the UK and France as notable examples. She comes off well in comparison to Trump, although the comparison is not completely fair. Trump only has the bully pulpit to work with, for one thing: actual power is wielded by governors. But Trump’s use of the bully pulpit has been poor. Moreover, he has deferred far too much to execrable “experts,” most notably the slippery Dr. Fauci, who has been on the opposite sides of every policy decision (Masks? Yes! Masks? No! Crisis? Yes! Crisis? No!), is utterly incapable of and in fact disdainful of balancing health vs. economics and liberty, and who brings to the table a record of failure that Neil Ferguson could envy, for its duration if nothing else. The Peter Principle personified: he is clearly at the level of his incompetence, and due to the perversity of government, has remained at that level for decades.

Merkel’s performance is particularly outstanding when compared to those who wield the real power in the current crisis, American governors, especially those like Whitmer, Pritzker, Evers, Walz, Brown, Wolf, Cuomo, Murphy, Northam, and Newsom. These people are goalpost movers par excellence, and quite clearly find the unfettered exercise of power to be orgasmic.

It is embarrassing in the extreme to see the Germans–the Germans–be far more solicitous of freedom and choice than elected American officials, who seem to treat freedom–including the freedom to earn a livelihood–as an outrageous intrusion on their power and amour-propre.

Will this represent the new normal? Will SWP props for Moore, Merkel, and Musk become routine in the post- (hopefully) Covid era? I doubt it, but for today, I’m happy to give credit where credit is due.


May 11, 2020

Imperial Should Have Called Winston Wolf

Filed under: CoronaCrisis,Economics,Politics,Regulation — cpirrong @ 3:09 pm

In the film Pulp Fiction, moronic hoodlums Jules (Samuel L. Jackson) and Vincent (John Travolta) pick up a guy who had stolen a briefcase from the back of their boss Marcellus Wallace’s car. While driving him away, Vincent accidentally shoots him, leaving the back of the car splattered with blood and brains. In a panic, they drive to friend Jimmy Dimmick’s (Quentin Tarantino’s) house. Dimmick tells them his wife will be home in an hour and they can’t stay. So they call Wallace, who calls in Winston Wolf. Wolf says: “It’s an hour away. I’ll be there in 10 minutes.” In 9 minutes and 37 seconds, Wolf’s car squeals to a halt in front of Jimmy’s house. Wolf rings the doorbell, and when Jimmy answers, Wolf says: “I’m Winston Wolf. I solve problems.” Within 40 minutes, Wolf solves Jules’ and Vincent’s problem. The car is cleaned up, with the body in the trunk, ready to be driven to the wrecking yard to be crushed.

The Imperial team that relied on Microsoft/Github to fix its code should have called Winston Wolf instead, because MS/Github left behind some rather messy evidence. “Sue Denim,” who wrote the code analysis I linked to yesterday, has a follow-up describing what Not Winston Wolf left behind:

The hidden history. Someone realised they could unexpectedly recover parts of the deleted history from GitHub, meaning we now have an audit log of changes dating back to April 1st. This is still not exactly the original code Ferguson ran, but it’s significantly closer.

Sadly it shows that Imperial have been making some false statements.

I don’t quite know what to make of this. Originally I thought these claims were a result of the academics not understanding the tools they’re working with, but the Microsoft employees helping them are actually employees of a recently acquired company: GitHub. GitHub is the service they’re using to distribute the source code and files. To defend this I’d have to argue that GitHub employees don’t understand how to use GitHub, which is implausible.

I don’t think anyone involved here has any ill intent, but it seems via a chain of innocent yet compounding errors – likely trying to avoid exactly the kind of peer review they’re now getting – they have ended up making false claims in public about their work.

My favorite one is “a fix for a critical error in the random number generator.” In 2020? WTF? I remember reading in 1987, in the book Numerical Recipes by William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery, a statement to the effect that libraries could be filled with papers based on faulty random number generation. (I’d give you the exact quote, but the first edition that I used is in my office, which I cannot access right now. Why is that, I wonder?) And they were using a defective RNG 33 years later? Really?
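For flavor, here is a minimal Python sketch of the sort of long-known defect Press et al. warned about: a linear congruential generator with a power-of-two modulus. The parameters below are the old C-library rand()–my illustration, not anything taken from the Imperial code–and the low-order bits it produces are nearly worthless:

```python
def bad_lcg(seed, n, a=1103515245, c=12345, m=2**31):
    """Classic power-of-two-modulus LCG (the old C-library rand())."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

# The lowest bit of successive outputs just alternates -- zero randomness there.
low_bits = [x & 1 for x in bad_lcg(seed=1, n=12)]
print(low_bits)  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

Any simulation that consumes those low bits (say, via `x % 2` to flip a coin) is sampling a coin that alternates heads and tails with certainty.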

“Algorithmic errors” is another eye popper. The algorithms weren’t doing what they were supposed to?

Read the rest. And maybe you’ll conclude that this was a mess that even Winston Wolf couldn’t have cleaned up in 40 days, let alone 40 minutes.


May 10, 2020

Code Violation: Other Than That, How Was the Play, Mrs. Lincoln?

Filed under: CoronaCrisis,Economics,Politics,Regulation — cpirrong @ 3:03 pm

By far the most important model in the world has been the Imperial College epidemiological model. Largely on the basis of the predictions of this model, nations have been locked down. The UK had been planning to follow a strategy very similar to Sweden’s until the Imperial model stampeded the media, and then the government, into a panic. Imperial predictions regarding the US also contributed to the panicdemic in the US.

These predictions have proved to be farcically wrong, with death tolls exaggerated by one, and perhaps two, orders of magnitude.

Models only become science when tested against data/experiment. By that standard, the Imperial College model failed spectacularly.

Whoops! What’s a few trillions of dollars, right?

I was suspicious of this model from the first. Not only because of its doomsday predictions and the failures of previous models produced by Imperial and the leader of its team, Neil Ferguson, but because of my general skepticism about big models (as @soncharm used to say, “all large calculations are wrong”), and most importantly, because Imperial failed to disclose its code. That is a HUGE red flag. Why were they hiding?

And how right that was. A version of the code has been released, and it is a hot mess. It has more bugs than east Africa does right now.

This is one code review. Biggest takeaway: due to bugs in the code, the model results are not reproducible. The code itself introduces random variation in the model. That means that runs with the same inputs generate different outputs.

Are you fucking kidding me?

Reproducibility is the essence of science. A model whose predictions cannot be reproduced, let alone empirical results based on that model, is so much crap. It is the antithesis of science.

After tweeting about the code review article linked above, I received feedback from other individuals with domain expertise who had reviewed the code. They concur, and if anything, the article understates the problems.

Here’s one article by an interlocutor:

The Covid-19 function variations aren’t stochastic. They’re a bug caused by poor management of threads in the code. This causes a random variation, so multiple runs give different results. The response from the team at Imperial is that they run it multiple times and take an average. But this is wrong. Because the results should be identical each time. Including the buggy results as well as the correct ones means that the results are an average of the correct and the buggy ones. And so wouldn’t match the expected results if you did the same calculation by hand.

As an aside, we can’t even do the calculations by hand, because there is no specification for the function, so whether the code is even doing what it is supposed to do is impossible to tell. We should be able to take the specification and write our own tests and check the results. Without that, the code is worthless.

I repeat: “the code is worthless.”
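To see concretely why “run it multiple times and take an average” doesn’t wash, here is a toy Python sketch. The “model” and the “race condition” are stand-ins I invented for illustration; the point is just the arithmetic of averaging correct and corrupted runs:

```python
import random

def model_run(seed):
    """A stand-in 'simulation': deterministic given its seed, as it should be."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1000))

# Reproducibility -- the property the Imperial code reportedly lacked:
assert model_run(42) == model_run(42)

def buggy_run(seed, race=False):
    """Same model, but a 'race condition' silently corrupts some runs."""
    result = model_run(seed)
    return result - 1.0 if race else result

correct = buggy_run(42)
runs = [buggy_run(42, race=(i % 2 == 0)) for i in range(10)]  # half the runs hit the race
average = sum(runs) / len(runs)
print(abs(average - correct))  # ~0.5: the average matches neither the correct nor the buggy value
```

Averaging doesn’t recover the right answer; it just blends the bug into every reported number.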

Another correspondent confirmed the evaluations of the bugginess of the code, and added an important detail about the underlying model itself:

I spent 3 days reviewing his code last week. It’s an ugly mess of thousands of lines of C (not C++). There are hundreds of input parameters (not counting the fact it models population density to 1km x 1km cells) and 4 different infection mechanisms. It made me feel quite ill.

Hundreds of input parameters–another huge red flag. I replied:

How do you estimate 100s of parameters? Sounds like a climate model . . . .

The response:

Yes. It shares the exact same philosophy as a GCM – model everything, but badly.

I recalled a saying of von Neumann: “With four parameters I can fit an elephant, with five I can make him wiggle his trunk.” Any highly parameterized model is IMMEDIATELY suspect. With so many parameters–hundreds!–overfitting is a massive problem. Moreover, you are highly unlikely to have the data to estimate these parameters, so some are inevitably set a priori. This high dimensionality means that you have no clue whatsoever what is driving your results.
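The elephant is easy to demonstrate. A toy Python sketch (my own illustration, not the Imperial model): fit five noisy observations of a flat line with a five-parameter polynomial, and the in-sample fit is perfect while the out-of-sample prediction is garbage:

```python
def lagrange_fit(xs, ys):
    """Return the exact interpolating polynomial through (xs, ys) as a callable."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# 'Data': a flat line plus tiny noise, five observations -- five free parameters.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 1.01, 0.99, 1.02, 0.98]   # truth is ~1.0 everywhere

elephant = lagrange_fit(xs, ys)
print(round(elephant(2.0), 2))  # 0.99 -- perfect in sample
print(round(elephant(6.0), 2))  # -0.79 -- nowhere near the true ~1.0 out of sample
```

Now scale “five parameters” up to hundreds, and “one out-of-sample point” up to a national epidemic.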

This relates to another comment:

No discussion of comparative statics.

So again, you have no idea what is driving the results, and how changes in the inputs or parameters will change predictions. So how do you use such a model to devise policies, which is inherently an exercise in comparative statics? So as not to leave you in suspense: YOU CAN’T.

This is particularly damning:

And also the time resolution. The infection model time steps are 6 hours. I think these models are designed more for CYA. It’s bottom-up micro-modelling which is easier to explain and justify to politicos than a more physically realistic macro level model with fewer parameters.

To summarize: these models are absolute crap. Bad code. Bad methodology. Farcical results.

Other than that, how was the play, Mrs. Lincoln?

But it gets better!

The code that was reviewed in the first-linked article . . . had been cleaned up! It’s not the actual code used to make the original predictions. Instead, people from Microsoft spent a month trying to fix it–and it was still as buggy as Kenya. (I note in passing that Bill Gates is a major encourager of panic and lockdown, so the participation of a Microsoft team here is quite telling.)

The code was originally in C, and then upgraded to C++. Well, it could be worse. It could have been Cobol or Fortran–though one of those reviewing the code suggested: “Much of the code consists of formulas for which no purpose is given. John Carmack (a legendary video-game programmer) surmised that some of the code might have been automatically translated from FORTRAN some years ago.”

All in all, this appears to be the epitome of bad modeling and coding practice. Code that grew like weeds over years. Code lacking adequate documentation and version control. Code based on overcomplicated and essentially untestable models.

But it gets even better! The leader of the Imperial team, the aforementioned Ferguson, was caught with his pants down–literally–canoodling with his (married) girlfriend in violation of the lockdown rules for which HE was largely responsible. This story gave verisimilitude to my tweet of several days before that story broke:

It would be funny, if the cost–in lives and livelihoods irreparably damaged, and in lives lost–weren’t so huge.

And on such completely defective foundations policy castles have been built. Policies that have turned the world upside down.

Of course I blame Ferguson and Imperial. But the UK government also deserves severe criticism. How could they spend vast sums on a model, and base policies on a model, that was fundamentally and irretrievably flawed? How could they permit Imperial to make its Wizard of Oz pronouncements without requiring a release of the code that would allow knowledgeable people to look behind the curtain? They should have had experienced coders and software engineers and modelers go over this with a fine-tooth comb. But they didn’t. They accepted the authority of the Pants-less Wizard.

And how could American policymakers base any decision–even in the slightest–on the basis of a pig in a poke? (And saying that it is as ugly as a pig is a grave insult to pigs.)

If this doesn’t make you angry, you are incapable of anger. Or you are an idiot. There is no third choice.


April 30, 2020

WTI-WTF? Part 3: Did CLK20 Get TAS-ed?

Matt Levine wrote a typically amusing piece highlighting the role of Trade at Settle (TAS) contracts in the 4/20/20 oil futures debacle:

But actually a lot of oil changed hands at those negative prices. Not because a bunch of investors came to the market all at once looking to sell, and no one would buy from them at negative prices, but for a more technical reason. Some oil traders use “trade-at-settlement” contracts: Instead of buying (or selling) oil futures at the market price at the time of your trade, you agree in advance to buy (or sell) them at whatever the official 2:30 p.m. settlement price is that day. This is a good trade, for you, if your job is to obtain the day’s settlement price: For instance, if you run an index fund or exchange-traded fund that is benchmarked to that price, using TAS futures guarantees you the benchmark price. If you invest in oil futures and your boss fires you if you miss the benchmark, you might use TAS futures, that sort of thing. If you are a savvy oil trader attuned to minute-by-minute changes in supply and demand and trying to capture as much value as possible from your skills, you’ll probably just trade the futures at their current prices, selling if the price is too high and buying if it’s too low. But a lot of oil traders are doing something else, something a bit more passive, and for them the ability to guarantee the settlement price is useful.

He goes on to ponder whether the TAS mechanism could be manipulated, and whether manipulation could have contributed to the settlement fire that Red Adair couldn’t have put out:

The basic pattern—agree in advance to buy (sell) stuff at the official settlement price at some fixed future time, and then sell (buy) a bunch of that stuff in the minutes leading up to the official settlement time with the effect of pushing down (up) the price at which you are buying (selling)—is incredibly common, and the gradation from “sensibly pre-hedging the exposure you will get at settlement” to “sloppily pre-hedging the exposure you will get at settlement” to “manipulating the market to push down the price you will get at settlement” is blurry. If you type in a chat room “lol I’m gonna pound out 500 contracts to push down the settlement price and make fortune on my TAS trades, I am really ripping those muppets’ faces off, hope I don’t go to prison bro, hashtag fraud hashtag crime hashtag manipulation,” you will get in trouble. But if you don’t type that, and you quietly sell the 500 contracts and the price goes down, then as far as anyone knows that was just pre-hedging.

So could somebody have popped CLK20 with a TASer last Monday?

Funny you should ask. I wrote a paper on TAS manipulation a while ago. My interest was sparked by the CFTC’s action against Dutch trading firm Optiver, which the agency accused of doing exactly the kind of thing Levine writes about in crude, gasoline, and heating oil futures back in March, 2008. You can read the complaint–and listen to some actual “ripping those muppets’ faces off” trader braggadocio.

How does manipulation work here? First, to make manipulation profitable, there has to be an asymmetric price response to purchases and sales. If the manipulator’s purchases impact prices the same as sales, just buying and selling a lot can’t move prices in a profitable direction. Indeed, the manipulator would have to pay transactions costs (crossing the spread, brokerage, etc.) and this would cause the trading to be unprofitable.

The model in the paper derives conditions under which purchases and sales of TAS have a smaller impact on prices than do trades in the underlying futures. The basic idea is that if information is short-lived, or if there is intense competition among informed traders, new information will be incorporated into prices very quickly. Under those circumstances, informed traders will not want to trade TAS: their information will already be incorporated into the price by the time settlement occurs. Thus, TAS is a mechanism that allows traders to signal that they are uninformed: many “muppets” choose to trade TAS, and the informed don’t. Thus, the price impact of TAS trades is smaller than the price impact of regular outright trades: trades move prices because of the possibility that they are motivated by private information, so trades that are unlikely to be privately informed move prices less than trades that are more likely to be so.

This creates the asymmetry that makes manipulation possible: the manipulator buys, say, the TAS and then sells in quantity immediately before and during the settlement period and profits as Levine describes.
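The arithmetic can be sketched in a few lines of Python. All the numbers (impact coefficients, sizes, prices) are invented for illustration; the only substantive assumption, per the model above, is that TAS trades move prices less than outright trades:

```python
# Stylized TAS manipulation arithmetic, assuming linear price impact.
lam_out = 0.02   # $/bbl settle impact per outright contract sold near the close
lam_tas = 0.005  # $/bbl impact per TAS contract (the asymmetry in the text)
fair = 20.0      # fair settlement price, $/bbl

Q = 500          # TAS contracts bought at the settlement price
q = 400          # outright contracts sold just before/during settlement

# Buying Q TAS nudges the settle up a little; selling q outright slams it down.
settle = fair + lam_tas * Q - lam_out * q

# TAS leg: long Q at the depressed settle, valued at fair once the impact decays.
tas_pnl = Q * (fair - settle)
# Outright leg: sold on the way down, at roughly the average of fair and settle.
outright_pnl = q * ((fair + settle) / 2 - fair)

print(settle)                    # 14.5
print(tas_pnl + outright_pnl)    # 1650.0: profitable only because lam_tas < lam_out
```

Set `lam_tas` equal to `lam_out` and the same trade loses money: the asymmetry is doing all the work.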

This type of manipulation is particularly pernicious because manipulative trades have persistent price impacts: they cannot be distinguished from informed trades (or liquidity trades, for that matter). Note that in Optiver, prices did not reverse after the firm’s trades.

This strategy is likely to be particularly profitable when markets are relatively illiquid, as in an illiquid market outright trades have bigger price impacts. Liquidity (measured by the bid-ask spread, quantity at the top of the book, price impact coefficients, etc.) has plummeted for everything since the CovidCrisis began, and the decline in CL liquidity has been particularly pronounced. Moreover, contracts close to expiration are less liquid anyways. Add to this the extreme physical constraints (which mean that small shocks to fundamentals have big price impacts) and the raging uncertainty about the logistical situation at Cushing, and it is likely that small volumes at the settle could have big impacts on prices.

To this I would add that an unexpected shortfall in buy orders (due to shorts exercising market power) at the settle could have price impacts, and exacerbate the price impacts of sell orders by exacerbating order imbalances.

Thus, the potential for a big asymmetry in price impact was pronounced on that fatal Monday.

In sum, it is not implausible that the market did indeed get TASed. Or at the very least, a TASer jolt contributed to the collapse. (Sort of like in this video!)

A final remark on the economic benefits and costs of TAS trading. TAS is a form of “cream skimming”–i.e., the skimming off of uninformed order flow. This tends to make the order flow in the regular continuous market more toxic, which reduces liquidity in that market. For this reason, other cream skimming mechanisms used primarily in equity markets (payment for order flow, dark pools, block trades) are frequently criticized. (This is why some regulators, particularly in Europe, have attempted to curb such activity and force more trading into “lit” venues.)

I showed in my Market Macrostructure paper that things aren’t so simple. If the regular market isn’t perfectly competitive, the increased competition from a cream skimming mechanism can improve welfare. Moreover, there are distributive effects here: the uninformed traders who can utilize the TAS mechanism (e.g., those who are hedging exposures tied to the settlement price) benefit, while uninformed traders who can’t, lose. Informed traders can lose too. Moreover–and this is a point that is almost always overlooked–some informed trading is essentially rent seeking (e.g., trading on information that will be released shortly anyways, where accelerating its incorporation into prices has little effect on resource allocation decisions). Reducing rent-seeking informed trading is a good thing.

All in all, the role of the TASer is yet another piece of the 4/20/20 WTI WTF puzzle. The forensic analysis of this entire episode will be fascinating.


April 21, 2020

WTI-WTF? Part II (of How Many???)

Filed under: Clearing,Commodities,Derivatives,Economics,Energy,Regulation — cpirrong @ 2:23 pm

Just another day at the Globex, folks. May WTI up a mere $49.88 on its last trading day at the time I write this paragraph, a while before the close. (Sorry, can’t calculate a percentage change . . . because the base number is negative!) That’s just sick. But at least it’s positive! ($12.25. No, $9.96. No . . .) (This reminds me of a story from Black Monday. My firm did a little index arb. We called the floor to get a price quote on the 19th. Our floor guy said “On this part of the pit it’s X. Over there it’s X+50. Over there it’s X-20. I have no fucking idea what the fucking price is.”)
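For anyone wondering why the percentage change is incalculable: the standard formula divides by the base price, and a negative base turns the answer into nonsense. A quick Python illustration, using the actual -$37.63 May settle from 4/20:

```python
def pct_change(old, new):
    """Conventional percentage change -- meaningless when the base is negative."""
    return 100.0 * (new - old) / old

print(pct_change(10.0, 12.0))    # 20.0, as expected with a positive base
print(pct_change(-37.63, 9.96))  # about -126: a huge price *rise* reported as a negative 'change'
```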

But June has been crushed–down $7.35 (about 35 percent). Now the May-June spread is a mere $.83 contango. That makes as little sense as yesterday’s settling galactic contango (galactango!) of $57.06. (Note that June-July is trading at $7.71 and July-August at $2.65.)

I’m guessing that dynamic circuit breakers are impeding price movements, meaning that the prices we see are not necessarily market clearing prices at that instant.

A few follow-ons to yesterday’s post.

First, the modeling of the dynamics of a contract as it approaches expiration when the delivery supply/demand curve is inelastic, and some traders might have positions large enough to exploit those conditions to exercise market power, is extremely complicated. The only examples I am aware of are Cooper and Donaldson in the JFQA almost 30 years ago, and my paper in the Journal of Alternative Investments almost a decade ago.

Futures markets are (shockingly!) forward looking. Expectations and beliefs matter. There are coordination problems. If I believe everyone else on my side of the market is going to liquidate prior to expiration, I realize that the party on the other side of the contract will have no market power at expiration. So I should defer liquidating–which, if everyone reasons the same way, could lead to everyone getting caught in a long or short manipulation at expiration. Or, if I believe everyone is going to stick it out to the end, I should get out earlier (which, if everybody else does the same, results in a stampede for the exits).

In these situations, anything can happen, and the process of coordinating expectations and actions is likely to be chaotic. Cooper-Donaldson and Pirrong lay out some plausible stories (based on particular specifications of beliefs and the trading mechanism), but they are not the only stories. They mainly serve to highlight how game theoretic considerations can lead to very complex outcomes in situations with market power and inelasticity.
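The flavor of those game-theoretic stories can be captured in a toy two-trader liquidation game, with payoffs I invented purely for illustration: staying late is great if the other trader exits early, and catastrophic if both hang around to be squeezed:

```python
# Payoffs to one trader, keyed by (my move, other trader's move); numbers illustrative.
payoff = {("early", "early"): -2,   # crowded exit
          ("early", "late"):  -1,   # exit cheaply while the other waits
          ("late",  "early"):  0,   # no one left with power to squeeze me
          ("late",  "late"): -10}   # trapped at expiry and squeezed

def best_response(other):
    return max(("early", "late"), key=lambda mine: payoff[(mine, other)])

print(best_response("early"))  # 'late'  -- if others flee, stay
print(best_response("late"))   # 'early' -- if others stay, flee

# The pure-strategy equilibria have the traders doing opposite things:
eqs = [(a, b) for a in ("early", "late") for b in ("early", "late")
       if best_response(b) == a and best_response(a) == b]
print(eqs)  # [('early', 'late'), ('late', 'early')]
```

With no way to coordinate on who leaves first, miscoordination–everyone staying, or everyone stampeding–is entirely possible, which is the “anything can happen” point.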

One thing that is sure is that these game theoretic considerations don’t matter much if the elasticities of delivery supply and demand are large. Then no individual can distort prices very much by delivering too much or taking delivery of too much. Then the coordination and expectations problems aren’t so relevant. However, when delivery supply or demand curves are very steep–as is the case in Cushing now due to the storage constraint–they become extremely relevant.

Perhaps one analogy is getting out of a theater. When there are many exits, there won’t be queues to get out, and there is little chance of tragedy even if someone yells “fire.” If there is only one exit, however, hurried attempts of everyone to leave at once can lead to catastrophe. Moreover, perverse crowd dynamics occur in such situations. That’s where we were yesterday.

About 90 percent of open interest liquidated yesterday. That is why today is returning to some semblance of normality–the exit isn’t so crowded (because so many got trampled yesterday). But that raises the question: why the panicked rush yesterday? That’s where the game theoretic “anything could happen” answer is about the best we can do.

Second, about that storage constraint. My post yesterday focused on someone with a large short futures position raising the specter of excessive deliveries by not liquidating that position, thereby triggering a cascade of descending offers until the short graciously accepted at a highly profitable price.

But there is another market power play possible here. A firm controlling storage could crash prices (and spreads) by withholding that capacity from the market. The most recent data from the EIA indicates about 55 mm bbl of oil in storage at Cushing. That’s about 80 percent of nameplate capacity (also per EIA). Due to operational constraints (e.g., need working space to move barrels in and out; can’t mix different grades in the same tank) that’s probably effectively full. Therefore, someone with ownership of a modest amount of space could withhold it to drive up the spread. If that party had on a bull spread position . . .
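A back-of-the-envelope version of the capacity arithmetic, using the figures above; the 10 percent operational haircut is my assumption, purely for illustration:

```python
# Back-of-the-envelope on the Cushing constraint, using the figures
# above (55 mm bbl stored, ~80 percent of nameplate). The operational
# haircut is an assumption, purely for illustration.
stored = 55.0                      # mm bbl at Cushing (per EIA, as cited)
utilization = 0.80                 # share of nameplate (per EIA, as cited)
nameplate = stored / utilization   # implied nameplate capacity
operational_haircut = 0.10         # assumed: working space, grade segregation
effective_capacity = nameplate * (1 - operational_haircut)
free_space = effective_capacity - stored
print(round(nameplate, 2), round(free_space, 2))  # → 68.75 6.88
```

On these (illustrative) numbers, usable spare capacity is only a few million barrels — which is why control of even a modest amount of space could matter.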

Third, we are into Round Up the Usual Suspects mode:

And first in line is the US Oil ETF. There has been a lot of idiotic commentary about this. They were forced to take delivery! (Er, delivery notices aren’t possible before trading ends.) They were forced to dump huge numbers of contracts yesterday! (Er, they publish a regular roll schedule, and were out of the May a week before yesterday’s holocaust. They also report positions daily, and as of yesterday were 100 pct in the June.)

Not to say that USO can be implicated in hinky things going on in the June right now, but as for May–that dog don’t hunt.

Fourth–WTF, June WTI? Well, my best explanation is that the carnage in the May served to concentrate minds regarding June. No doubt risk managers, or risk systems, forced some longs out as the measured and perceived risk for June shot up yesterday. Others just decided that discretion was the better part of valor. The extremely unsettled conditions no doubt impaired liquidity (i.e., just as some wanted to get out, others were constrained by risk limits, formal or informal, from getting in), leading to big price movements in response to these flows. If that’s a correct diagnosis, we should see something of a bounceback, but perhaps not too much given the perception (and reality) of an extremely asymmetric risk profile, with going into expiry short being a lot more dangerous than going into it long. (This is why expectations about future conditions at delivery can impact prices well before delivery.)

Fifth, on a personal note, in an illustration of the adage that the apple doesn’t fall far from the tree (and also of Merton’s Law of Multiples) my elder daughter Renee completely independently of me used “WTI WTF” in her daily market commentary yesterday. I’m so proud! She also raised the possibility of negative prices some time ago. Good call!

And I finish this just in time to bring you the final results. CLK goes off the board settling at $10.01, up a mere $47.64. CLM settles at $11.57, down $8.86. The closing KM20 spread: $1.56.

Someday we’ll look back on this and . . . . Well, we’ll look back on it, anyways.


April 20, 2020

WTI–WTF?

Today was one of the most epochal days in the history of oil trading, which is saying something. The front month May contract–which expires tomorrow (21 April, 2020)–(a) settled at a negative price of -$37.63, (b) declined $55.90 from the previous day’s settlement, and (c) exhibited a trading range of $58.17, which is about 3.5x Friday’s settlement price.

Even these eye-popping numbers don’t tell the full story. The last traded price was -$13.10. Note that the settlement price is based on the volume weighted average price during the last two minutes of trading, so an average price of -$37.63 in the last two minutes and a last traded price of -$13.10 means prices moved roughly $25/bbl within those two minutes.
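The settlement mechanics can be sketched as follows; the trade tape below is hypothetical, while the range bound follows directly from the quoted settle and last print:

```python
# Sketch of the settlement mechanics: the settle is a volume-weighted
# average price (VWAP) over the last two minutes of trading. The trade
# tape below is hypothetical.
def vwap(trades):
    """Volume-weighted average of (price, volume) pairs."""
    notional = sum(p * v for p, v in trades)
    volume = sum(v for _, v in trades)
    return notional / volume

window = [(-45.0, 300), (-40.0, 400), (-13.10, 100)]   # hypothetical 2-min tape
print(round(vwap(window), 2))  # → -38.51

# A 2-minute VWAP of -37.63 with a last print of -13.10 means some trade
# in the window was at or below the VWAP, so prices spanned at least:
settle, last = -37.63, -13.10
print(round(last - settle, 2))  # → 24.53
```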

And we’re not done yet! As I write at 1841 CDT, the price is up to -$5.00–an increase of $32.63.

That there’s what they call volatility, folks.

To put these numbers in perspective, the largest trading range on any day of the last three trading days of CL contracts from 2000-2019 is $26.65, a day around the time of the financial crisis and the aftermath of Hurricane Ike (October 2008 contract): here’s what I wrote about that event. The median intra-day range on the last three days is $1.6, the mean is $1.5, and the standard deviation is $1.47. So we are looking at around a 40 standard deviation event here.
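The standard-deviation arithmetic, using the historical statistics just quoted:

```python
# Rough z-score for the May expiry's trading range, using the historical
# last-three-days statistics quoted above.
day_range = 58.17     # trading range on the penultimate day, May 2020 contract
hist_mean = 1.5       # mean intraday range, last 3 trading days, 2000-2019
hist_sd = 1.47        # standard deviation of that range
z = (day_range - hist_mean) / hist_sd
print(round(z, 1))  # → 38.6
```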

The largest daily price change during the last 3 trading days is $16.37, with a median of $.73 and a standard deviation of $1.45.

The calendar spread is also extreme, settling at $58.06 between the June and the May. Meaning that if you had storage, you could get paid to take delivery, sell it forward, and lock in that $58.06 (net of what the storage costs you). I guess you could call that the megacontango.
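The cash-and-carry trade implied by that spread, sketched with a hypothetical storage cost:

```python
# Cash-and-carry arithmetic implied by the June-May spread quoted above.
# The storage cost is a hypothetical placeholder, not a market quote.
may_price = -37.63            # May settle: you are PAID to take delivery
spread = 58.06                # June minus May
june_price = may_price + spread
storage_cost = 10.00          # assumed $/bbl cost of storing for a month
carry_pnl = (june_price - may_price) - storage_cost
print(round(june_price, 2), round(carry_pnl, 2))  # → 20.43 48.06
```

The point: as long as a month of storage costs less than $58.06/bbl, anyone with space locks in a riskless profit — which is why the spread is a direct read on how scarce storage is.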

All in all, a historically unprecedented day.

The proximate cause of these wild gyrations, and unprecedented negative prices is, of course, the collapse of demand and the looming exhaustion of storage space, including at the delivery point of Cushing, OK. But although this is a necessary condition for today’s events, it is not a full explanation.

The storage issue has been known for weeks, and discussed intensely. It had been priced in to a considerable degree: contango was already at a historically high level. What information about the availability of storage arrived between Friday and today? Unlikely to be anything that could cause such chaotic price movements.

The likely cause is the difficulty of liquidating about 100,000 open contracts (100 million barrels!) in such extreme technical conditions. It is plausible, and indeed likely, that strategic behavior–perhaps rising to the level of manipulation–is the major cause of how prices moved today against the background of conditions that were widely known on Friday.

Let me start out by noting that something similar, though not as extreme, occurred during the demand collapse and associated flooding of storage during the Financial Crisis. As I documented here, the expiries of the January, February, and March 2009 WTI contracts saw what were then historically unprecedented price collapses. So did other US grades of oil. Here’s a picture from the linked document:

The big downward spikes in the front month-back month spreads correspond with the days around expiry.

How does strategic behavior/market power/manipulation play into this? The model of short manipulation in my 1996 book (only $169 paperback–buy two!) and 1993 J. of Business article formalizes the argument, but the intuition is fairly straightforward. Manipulation exploits frictions and bottlenecks. (My article/book refer to “frictional manipulations.”) There is now a huge friction/bottleneck in Cushing–constrained storage. This bottleneck makes the demand curve for crude at Cushing extremely inelastic, and means that the movement of even small excess quantities of oil into that location will cause prices to decline dramatically.

In these conditions, a trader, or a group of traders with modest-sized short positions can exercise market power by delivering even a small amount of oil over and above the quantity that should flow to Cushing. This drives down the price and allows the trader or traders to cover his (their) position(s) at artificially low prices.
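A stylized illustration of the inelasticity point (this is not the model from the 1993 article or 1996 book; the linear demand curve and all numbers are purely illustrative):

```python
# Stylized price-impact sketch: with linear inverse demand P = a - b*Q
# at the delivery point, an extra delivery dQ moves price by -b*dQ, so
# the same barrels hit price far harder when b is large (inelastic
# demand). All numbers are hypothetical.
def price(q, a=20.0, b=0.5):
    """Inverse demand at the delivery point."""
    return a - b * q

impacts = {}
for b in (0.5, 25.0):              # elastic vs. storage-constrained (inelastic)
    base = price(10.0, b=b)
    bumped = price(10.5, b=b)      # deliver an extra 0.5 units
    impacts[b] = round(bumped - base, 2)
print(impacts)  # → {0.5: -0.25, 25.0: -12.5}
```

The same half-unit of excess deliveries moves price fifty times more when demand is fifty times more inelastic — which is what makes a "modest-sized" short position dangerous when storage is constrained.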

In this situation, the storage bottleneck is the gasoline, the exercise of the market power is the match. With 100,000,000 barrels of open long positions needing to liquidate, given the storage constraint, the resulting conflagration can be epic.

This is, at this stage, a hypothesis. It is a possible explanation of the beyond extreme movements observed today. Under the circumstances, it is a very plausible explanation, and one that deserves scrutiny. And given the amount of money that changed hands today (~$6 billion on a mark-to-market basis) I’m sure that it will get it.

The only parallel I can think of is the onion market in 1955, when the movement of a couple of superfluous carloads of onions into Chicago, and delivery thereof against futures, caused the price to crash below the cost of the bags that they had to be delivered in. There was no demand for the onions (being perishable, and people eat only so many hot dogs), so many of the excess onions ended up getting dumped into the Chicago River. (Which, in 1955, probably improved the water quality.) (Another irony being that Chicago means “stinky onions” in the Miami-Illinois Indian language.)

In 1955, demand was inelastic because onions are perishable (i.e., they can’t be stored). In a way, the lack of storage space makes oil perishable. Even if that analogy isn’t perfect, the economics are the same: an economic constraint (the non-storability of the product, or the lack of storage space for it) leads to extremely inelastic demand that makes short market power manipulation possible.

Tomorrow is the last trading day for CLK20. Strap it up! It’s going to be a wild ride.


April 9, 2020

Bullshit Numbers

Filed under: CoronaCrisis,Economics,Politics,Regulation — cpirrong @ 3:00 pm

You are seeing a lot of covid-19 numbers thrown around. Virtually all of those numbers are bullshit.

The death rates are bullshit. In a given country, there is considerable subjectivity regarding how deaths are classified. The Great Scarfini* (Dr. Deborah Birx) pretty much let that cat out of the bag when she acknowledged that not only are the decedents who test positive (regardless of other co-morbidities) declared as covid-19 deaths, but those who have some colorable connection to covid-19 (clinical presentation, exposure to someone who tested positive) are declared to be covid-19 deaths.

It is likely that hospitals and physicians–and politicians–have an incentive to attribute deaths to covid-19. These incentives can be financial (a hospital could get greater compensation from covid victim than someone dying of something else) or power (death numbers are being used to justify draconian restrictions).

Further, different countries use different methods to count deaths.

What we are really interested in is people who would not have died but for covid-19. The official death statistics do NOT measure this. And given that virtually all of the dead are aged and/or have multiple serious health problems, a but-for attribution is dubious even in the presence of a positive test.

The only rigorous way to estimate these but for deaths is excess deaths (i.e., deaths in excess of expected deaths, conditioning on time of year, demographics, etc.). And preferably excess deaths from respiratory illness (or at least excess deaths from non-accidental causes). This is a good template for the analysis. This also presents some good cross-country data, which shows that in Italy and Spain there is evidence of excess deaths. Elsewhere? Not so much. Of particular interest is Sweden, which has implemented mainly voluntary social distancing measures, to the hysterical response of those deeply invested in mandatory lockdowns.

Do this for a variety of jurisdictions (countries, states in the US) and you would have enough cross-sectional and time series variation to do some real analysis that could provide reasonable support for policy decisions.
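A minimal excess-deaths calculation of the kind described above might look like this (all figures are made up for illustration):

```python
# Minimal excess-deaths sketch: compare observed deaths in a week to an
# expected baseline built from the same week in prior years. All
# figures are made up for illustration.
baseline_years = {2017: 980, 2018: 1010, 2019: 990}   # week-14 deaths, hypothetical
observed_2020 = 1400                                  # week-14 deaths, 2020

expected = sum(baseline_years.values()) / len(baseline_years)
excess = observed_2020 - expected
print(round(expected, 1), round(excess, 1))  # → 993.3 406.7
```

A real version would condition the baseline on demographics and seasonality, and preferably restrict to respiratory (or at least non-accidental) deaths, as noted above.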

The case numbers are bullshit, at least if you want to measure infection rates. As I’ve been saying for weeks, there are so many selection biases that the numbers tell you NOTHING about the prevalence of the virus in the population, either at a point in time or crucially over time. Indeed, the CDC guidelines could be titled “How to Produce a Wildly Biased Sample”:

This testing protocol could be justified on clinical and diagnostic grounds, but it is a disaster from the perspective of generating data that is useful in shaping policy.

Further, trends in positive test numbers are driven to a considerable degree by . . . a greater number of tests.
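The selection problem is easy to see in a sketch: hold true prevalence fixed, grow testing volume, and raw "case counts" trend up anyway (all numbers hypothetical):

```python
# Selection-bias sketch: true prevalence held constant, testing volume
# grows, and raw positive counts "trend up" anyway. All numbers are
# hypothetical.
prevalence = 0.05                                  # constant true infection rate
tests_per_week = [1_000, 5_000, 20_000, 80_000]    # rising testing volume
positives = [int(n * prevalence) for n in tests_per_week]
print(positives)  # → [50, 250, 1000, 4000]
```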

The graphs that you see depicting trends in deaths or cases across countries over time are bullshit. They are bullshit because they inherit all the flaws of the data discussed above (exacerbated by the fundamentally different data reporting methods across countries), and they almost always fail to adjust for population size or demographic characteristics.

Chinese numbers are obviously bullshit. No need to elaborate this point.

The models that are being used to drive (or at least justify) lockdowns are bullshit. Their predictions went from apocalyptic to well, a small fraction of apocalyptic. Sometimes between one day and the next. Models should be evaluated on predictive accuracy. The predictions of these models have proved to be excessively pessimistic, i.e., bullshit.

And don’t buy the line that the lockdowns reduced the death tolls. For one thing, many of the models’ predictions included the effects of social distancing–and still came out way too high. For another, many countries’ death and case rates (above caveats apply) peaked before the lockdowns could have had any effect.

I keep hearing the IHME model referred to as the “top model.” Who says? On what basis? Basically because somebody else said it. And oh, Bill Gates is somehow involved. So that claim is bullshit too.

Also be very suspicious of the fact that the modelers are very opaque. We don’t see their assumptions or their methods. Notoriously, the most influential modeling team (at least initially), which did more than any other to spark the panic, has not released its modeling code.

At least the honest modelers admit that social isolation and shutting down the economy doesn’t change the integral under the curve (i.e., the total number of deaths) but merely the time pattern of those deaths. And some epidemiologists claim that extending the period of time before the burnout may result in a higher number of total deaths.

But even putting that possibility of a higher total toll aside, the argument is made that it is necessary to “flatten the curve” in order to reduce the burden on the healthcare system. Well, the models vastly overpredicted hospitalization/ICU visits as well. And I have yet to see any evidence of systematic shortages of ICU beds/ventilators. Yes, there are hotspots. But that just means that we need to understand the hotspots–and the non-hotspots–better.

Along those lines, I can’t say the numbers on ICU utilization are bullshit–because the numbers are largely non-existent. Instead we’ve had anecdotal journalistic (i.e., “if it bleeds it leads”) accounts that provide no objective quantitative standard by which to evaluate how binding the constraints are in the healthcare system.

But again the issue is cost-benefit. Basically what lockdowns do is discount future deaths/cases relative to present deaths/cases (since they accept an approximately equal number of future deaths for each death that does not occur today). And the discount rate is huge. We are losing trillions of dollars in lost output/income to push some deaths into the future. The interest rate is astronomical. Put differently, we are paying an immense price to kick the can down the road.

I understand that the supply of ICU beds, ventilators, physicians and nurses is pretty inelastic over the short run. But even given pretty substantial inelasticity, it would be far more efficient to throw billions at expanding capacity in the short run than to sacrifice ~25 percent of world income to reallocate the deaths over time. Capacity is not a fate. It is a choice.

And given that, well into the crisis, the foretold capacity disaster in hospitals has not been realized, the additional capacity required may well be quite small.

There is also the issue of how much the temporal pattern of deaths will really change. This depends on a variety of factors, including when the virus first spread and its virulence. The more we learn, the more likely it is that the virus has been spreading since late fall/early winter of 2019–20. Which means that the lockdowns are reactive, not proactive, and that they have little impact even on the time pattern of deaths let alone the number: they are the proverbial locking the barn door after the horse done bolted.

In brief: our betters are destroying futures based on bullshit data. It’s as simple as that. And they are vastly increasing their power as a result, so they are destroying freedoms too.

In an earlier post I said that we have to grasp the nettle and decide what price are we willing to pay to save a life (usually of an aged, ill person). But it’s actually worse than that. It is very likely that the real question is: how much are we willing to pay to defer a death (of such a person) a few months? The cost that those who govern (or rule) us (and those who support them) are apparently willing to pay is astronomical.

*Dr. Birx is always adorned with a scarf. On my first trip to NYC in 1978, when NYC was near its nadir, I saw an obviously psychotic individual dancing near Grand Central Station waving around a long scarf. Every once in a while he would shout “I AM THE GREAT SCARFINI” and then start dancing again.


March 31, 2020

We Need Data on the Virus, and the USS Roosevelt Is an Invaluable Source of It

Filed under: China,CoronaCrisis,Military,Politics,Regulation — cpirrong @ 1:38 pm

There is an ongoing outbreak of Covid-19 on the Nimitz class carrier, USS Roosevelt. The outbreak is severe, and today the CO, Capt. Brett Crozier, wrote an impassioned letter requesting onshore quarantine of the entire crew.

The first criticism that Captain Crozier raises is “Inappropriate focus on testing.” Crozier objects that tests provide little information: given the close proximity of those on board, they have presumptively been exposed, and should be isolated. Further, Crozier quantifies a relatively high rate of false negatives.

The captain is certainly correct regarding what is his primary responsibility–his ship and crew. But testing on the Roosevelt could provide invaluable information that could lead to far better policies in the United States, and the world at large. From a larger perspective, the opportunity for testing on the Roosevelt is something that cannot be allowed to slip away.

As I have noted repeatedly here, and on Twitter, policy is currently based on incredibly flawed data. In fact, the most useful piece of data is from the cruise ship Diamond Princess. The Roosevelt could provide a far bigger sample, and one that contains valuable information about the impact on non-elderly, relatively healthy individuals.

Even one of the things that Captain Crozier objects to–the presence of false negatives–is important. Quantifying that rate can provide information that greatly improves the inferences that can be drawn from other samples (apropos my earlier Bayes Rule post).
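Here is the Bayes-rule point in miniature: a quantified false-negative rate lets you compute the probability of infection conditional on a negative test. The prior, sensitivity, and specificity below are illustrative assumptions, not the Roosevelt's actual figures:

```python
# Sketch of the Bayes-rule point: a known false-negative rate lets you
# compute P(infected | negative test). Prior, sensitivity, and
# specificity are illustrative assumptions, not the ship's data.
def p_infected_given_negative(prior, sensitivity, specificity):
    """P(infected | negative) via Bayes' rule."""
    p_neg_given_inf = 1 - sensitivity                       # false-negative rate
    p_neg = prior * p_neg_given_inf + (1 - prior) * specificity
    return prior * p_neg_given_inf / p_neg

# Even a negative test leaves a crew member from a high-prior
# environment with a non-trivial infection probability:
print(round(p_infected_given_negative(prior=0.30, sensitivity=0.80, specificity=0.99), 3))  # → 0.08
```

This is why quantifying the false-negative rate on a well-documented population like a carrier crew would sharpen inferences drawn from every other sample.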

I understand that there are myriad competing considerations here. The health of the crew. The operational readiness of one of the most important combatants in the US Navy. Operational safety–e.g., who is going to operate the reactors and ensure that other systems are maintained properly even if the ship is not deployed? (You don’t leave a CVN parked in the driveway for a few weeks.)

Among those competing considerations, from Captain Crozier’s perspective, testing is indeed a near irrelevance. But it is extremely relevant for informing how we deal with the crisis around the world. The social value of this data is great indeed. I hope that those in the Pentagon, and in the administration, find a way to address Captain Crozier’s concerns while at the same time seizing on this opportunity to generate data that could save thousands of lives and trillions of dollars.

Coda: Another benefit here is that by the nature of the military, there would be excellent data at hand on virtually any interesting covariate you can think of–age, health conditions, socioeconomic background, etc. Combining BUPERS data with testing and clinical data from CVN-71 could provide a plethora of actionable insights.


March 24, 2020

It Really Does Pain Me to Say I Told You So About Clearing, But . . .

In the aftermath of the last crisis, I played the role of Clearing Cassandra, warning that in the next crisis, supersizing of derivatives clearing would create systemic risks not because clearinghouses would fail, but because of the consequences of what they would do to survive: hike initial margins and collect huge variation margin payments that would suck liquidity out of the system at the same time liquidity supply contracted. This, in turn, would lead to asset fire sales that would distort asset prices, which would lead to further knock-on effects.

I wrote a lot about this 2008-2012, but here is a convenient link. Key quote from the abstract:

The author also believes that the larger collateral mandates and frequent marking‐to‐market will make the financial system more vulnerable since margin requirements tend to be “pro‐cyclical.” And more rigid collateralization mechanisms can restrict the supply of funding liquidity, and lead to spikes in funding liquidity demand that can reduce the liquidity of traded instruments and generate destabilizing feedback loops. 

Well, the next crisis is here, and these (conditional) predictions are being borne out. In spades.

Here’s what I wrote a few days ago as a contribution to the Regulatory Fundamentals Group newsletter:

In the aftermath of the last crisis of 2008-2009, G20 nations decided to mandate clearing of standardized OTC derivatives transactions.  The current coronavirus crisis is the first since those reforms were implemented (via Dodd-Frank in the US, for example), and this therefore gives the first opportunity to evaluate the performance of the supersized clearing ecosystem in “wartime” conditions.  


So far, despite the extreme price movements across the entire derivatives universe–equities, fixed income, currencies, and commodities (especially oil)–there have been no indications that clearinghouses have faced either financial or operational disruption.  No clearing members have defaulted, and as of now, there have been no serious concerns that any are on the verge of default. 

That said, there are two major reasons for concern.


First, the unprecedented volatility and uncertainty show no signs of dissipating, and as long as it continues, major financial institutions–including clearing firms–are at risk.  The present crisis did not originate in the banking/shadow banking sector (as the previous one did), but it is now demonstrably affecting it.  There are strong indicators of stress in the financial system, such as the blowouts in FRA-OIS spreads and dollar swap rates (both harbingers of the last crisis).  Central banks have intervened aggressively, but these worrying signs have eased only slightly.  

Second, as I wrote repeatedly during the debate over clearing mandates in the post-2008 crisis period, the most insidious systemic risk that supersized clearing creates is not the potential for the failure of a clearinghouse (triggered by the failure of one or more clearing members).  Instead, the biggest clearing-related systemic risk is that the very measures that clearinghouses take to ensure their integrity–specifically, frequent variation margining/marking-to-market–lead to large increases in the demand for liquidity precisely during circumstances when liquidity is evaporating.  Margin payments during the past several weeks have hit unprecedented–and indeed, previously unimaginable–levels.  The need to fund these payments has inevitably increased the demand for liquidity, and contributed to the extraordinary demand for liquidity and the concomitant indicators of stressed liquidity conditions (e.g., the spreads and extraordinary central bank actions mentioned earlier).  It is impossible to quantify this impact at present, but it is plausibly large.  

In sum, the post-2008 Crisis clearing system is operating as designed during the 2020 Crisis, but it is unclear whether that is a feature, or a bug.  
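The variation-margin mechanism described in the quoted passage can be sketched as follows; the position size and settlement path are hypothetical:

```python
# Stylized variation-margin sketch: daily marking-to-market converts a
# settlement path into immediate cash demands on the losing side. The
# position size and price path are hypothetical.
position = 1_000            # long futures contracts
contract_size = 1_000       # bbl per contract (WTI-style)
settles = [18.27, 20.43, 11.57, -37.63]   # hypothetical daily settles

margin_calls = []
for prev, curr in zip(settles, settles[1:]):
    pnl = (curr - prev) * position * contract_size
    margin_calls.append(round(min(pnl, 0), 2))   # cash the long must post on losses
print(margin_calls)  # → [0, -8860000.0, -49200000.0]
```

A single day's mark turns an on-paper loss into a same-day cash demand of tens of millions of dollars — multiply across the derivatives universe and you get the liquidity-demand spike described above.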

It is becoming more clear: Bug, and the bugs are breeding. There have been multiple stories over the last couple of days of margin calls on hedging positions causing fire sales, with attendant price dislocations in markets like that for mortgages. Like here, here, and here. I guarantee there are more than have been reported, and there will be still more. Indeed, I bet if you look at any pricing anomaly, it has been created by, or exacerbated by, margin calls. (Look at the muni market, for instance.)

But those in charge still don’t get it. CFTC chairman Heath Tarbert delivers happy talk in the WSJ, claiming that everything is hunky dory because all them margins bein’ paid! and as a result, derivatives markets are functioning, CCPs aren’t failing, etc.

This is exactly the kind of non-systemic thinking about systemic risk that I railed about a decade ago. Mr. Tarbert has a siloed view: he is assigned some authority over a subset of the financial system, sees that it is working fine, and concludes that rules regarding that subset are beneficial for the system as a whole.

Wrong. Wrong. Wrong. Wrong. WRONG.

You have to look at the system as a whole, and how the pieces of the system interact.

In the post-last-crisis period I wrote about the “Levee Effect”, namely, that measures designed to protect one part of the financial system would flood others, with ambiguous (at best) systemic consequences. The cascading margins and the effects of those margin calls are exactly what I warned about (to the accompaniment of a collective shrug by those who mattered, which is why we are where we are).

What we are seeing is unintended consequences–unintended, but not unforeseeable.

Speaking of unintended consequences, perhaps one good effect of September’s repo market seizure was that it awoke the Fed to its actual job–providing liquidity in times of stress. The facilities put in place in the aftermath of the September SNAFU are being expanded–by orders of magnitude–to deal with the current spike in liquidity demand (including the part of the spike due to margin issues). Thank God the Fed didn’t have to think this up on the fly.

It also appears that either (a) the restrictions on the Fed imposed by Frankendodd are not operative now, or (b) the Fed is saying IDGAF so sue me and blowing through them. Either way, such liquidity seizures are what the Fed was created to address.


