Streetwise Professor

October 5, 2019

The Repo Spike: The Money Trust Revisited?

Filed under: Economics,Financial crisis,Regulation — cpirrong @ 6:42 pm

In the ongoing evaluation of what has been happening in the repo market, market participants have identified post-crisis regulations as a potential source of the problem. In particular, these regulations (including the Liquidity Coverage Ratio) require behemoth banks like JP Morgan and Citi to hold large amounts of reserves, and make them reluctant to lend those reserves out even when repo rates spike.

Having long said that the various liquidity regulations intended to prevent a recurrence of the last crisis could be the cause of a new one, I am quite sympathetic to this view. However, information now coming out suggests another, potentially complementary and aggravating, factor.

In particular, reserve holdings are very concentrated:

Fed data show large banks are keeping a disproportionate amount in reserves, relative to their assets. The 25 largest US banks held an average of 8 per cent of their total assets in reserves at the end of the second quarter, versus 6 per cent for all other banks. 

Meanwhile, the four largest US banks — JPMorgan Chase, Bank of America, Citigroup and Wells Fargo — together held $377bn in cash reserves at the end of the second quarter this year, far more than the remaining 21 banks in the top 25.

Moreover, the big banks have been reducing their reserves:

Analysts and bank rivals said big changes JPMorgan made in its balance sheet played a role in the spike in the repo market, which is an important adjunct to the Fed Funds market and used by the Fed to influence interest rates.
Without reliable sources of loans through the repo market, the financial system risks losing a valuable source of liquidity. Hedge funds, for example, use it to finance investments in U.S. Treasury securities and banks turn to it as an option for raising suddenly-needed cash for clients.
Publicly-filed data shows JPMorgan reduced the cash it has on deposit at the Federal Reserve, from which it might have lent, by $158 billion in the year through June, a 57% decline.

Although JPMorgan’s moves appear to have been logical responses to interest rate trends and post-crisis banking regulations, which have limited it more than other banks, the data shows its switch accounted for about a third of the drop in all banking reserves at the Fed during the period.
“It was a very big move,” said one person who watches bank positions at the Fed but did not want to be named. An executive at a competing bank called the shift “massive”.
Other banks brought down their cash, too, but by only half the percentage, on average.
For example, Bank of America Corp (BAC.N), the second-biggest U.S. bank by assets, with a $2.4 trillion balance sheet, took down 30% of its deposits, a $29 billion reduction.

So . . . substantial concentrations of reserves, and declining levels of reserves. Yes, these are all potential consequences of Frankendodd. But they also are potentially symptomatic of market power and the exercise thereof.

This triggered a synapse, which led me to recall a 1993 article from the Journal of Monetary Economics by R. Glen Donaldson. Donaldson’s article was motivated by a study of the Panic of 1907, when a “cash syndicate” (led by . . . J.P. Morgan, in person and through his eponymous bank) lent to cash-strapped trust companies facing depositor runs at very high rates.

Donaldson presents a model in which a spike in the need for cash by a set of market participants (trust companies facing depositor outflows, in his model) makes the funds held by a group of other institutions pivotal: these institutions face a downward-sloping demand curve for their funds because of constraints on competitive suppliers of funds. The pivotal institutions supply funds (through a repo-like transaction in which they buy securities from the trusts) at a supercompetitive price (by buying the trusts’ securities at subcompetitive prices). In his model, collusion among the pivotal institutions exacerbates the rate spike.

The main implication of the model is that spikes in the demand for funds lead to spikes in interest rates that are bigger than would prevail in competitive conditions.

There is an element of non-linearity in the model, because the big suppliers’ funds are not pivotal in normal conditions, but become so when the demand becomes sufficiently large. This leads to a switch from competitive to monopoly pricing, which in turn causes a spike in rates.
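The switch from competitive to monopoly pricing can be sketched numerically. This is a stylized linear version under purely illustrative assumptions, not Donaldson's actual specification: a competitive fringe lends at the competitive rate up to a capacity limit, and once a demand shock exhausts the fringe, the residual demand falls to a single pivotal lender, who prices it like a monopolist whose opportunity cost is the competitive rate.

```python
# Illustrative pivotal-supplier model (all parameters hypothetical).
# Total demand for funds: Q(r) = A - B*r. A competitive fringe lends
# up to capacity K at the competitive rate r_c. When the demand shock
# A is large enough to exhaust the fringe, the residual demand is
# served by one pivotal lender with opportunity cost r_c.

def market_rate(A, B=100.0, K=150.0, r_c=2.0):
    """Equilibrium rate as a function of the demand intercept A."""
    if A - B * r_c <= K:
        # Fringe alone can meet demand at the competitive rate.
        return r_c
    # Pivotal lender maximizes (r - r_c) * (A - K - B*r);
    # the first-order condition gives r* = (A - K + B*r_c) / (2B).
    return (A - K + B * r_c) / (2 * B)

# Below the threshold, shocks leave the rate at the competitive level;
# past it, the rate climbs steeply with further demand.
for A in (300.0, 350.0, 450.0, 550.0):
    print(A, market_rate(A))   # 2.0, 2.0, 2.5, 3.0
```

The non-linearity is visible in the output: the rate is flat in the demand shock until the fringe capacity binds, then rises with it, so a modest extra shock produces an outsized rate move.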

I should note that the regulatory and market power stories are not mutually exclusive, and are indeed complementary. Regulatory constraints can increase the demand for funds (making it more likely that the big suppliers will be pivotal) and can reduce the supply of funds from the smaller suppliers (which lowers the threshold for the switch from competitive to monopoly pricing, and makes the demand curves for the big suppliers’ funds steeper, leading to a higher monopoly rate).

I therefore consider it a plausible hypothesis that market power contributed to the repo market spike, and that one channel by which regulations contributed to the spike was through their effect on market power.

How can this hypothesis be tested? Conceptually, if regulatory constraints alone caused the spike, then those in possession of large quantities of reserves (e.g., Morgan) were absolutely constrained in their ability to lend additional reserves: the difference between the repo rate and the Fed Funds rate would represent the shadow price on this regulatory constraint.

If a big bank or banks exercised market power, this constraint would not be binding.
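One way to see what the test looks for, in a sketch with hypothetical numbers rather than a calibrated model: a price-taking bank constrained only by regulation lends out all of its lendable reserves whenever the repo rate exceeds its opportunity cost, while a bank exercising market power stops short of its capacity even though more lending would be feasible.

```python
# Hypothetical numbers throughout. A bank holds lendable reserves X
# above its regulatory minimum and faces a residual demand for repo
# lending r(q) = r0 - s*q. Its opportunity cost is the rate i it
# earns on reserves held at the Fed.

def lending(r0, i, s, X, price_taker):
    if price_taker:
        # Lend everything whenever the repo rate exceeds i; the
        # capacity constraint q <= X binds, and (repo rate - i)
        # measures the shadow price of that constraint.
        return X
    # Market power: lend where marginal revenue equals i.
    # max (r0 - s*q - i)*q  ->  q* = (r0 - i) / (2s), capped at X.
    return min((r0 - i) / (2 * s), X)

r0, i, s, X = 9.0, 2.0, 0.02, 250.0
q_comp = lending(r0, i, s, X, price_taker=True)
q_mono = lending(r0, i, s, X, price_taker=False)
print(q_comp, r0 - s * q_comp)  # lends all 250, repo rate 4.0
print(q_mono, r0 - s * q_mono)  # withholds: lends 175, repo rate 5.5
```

In the regulatory story the bank looks like the first case (at capacity, with the rate spread measuring the shadow price of the constraint); in the market power story it looks like the second (lending below capacity while rates spike).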

Operationalizing this test is likely to be complex, however. Big holders of reserves will inevitably make all sorts of arguments to say that they couldn’t have lent more.

This brings to mind the California electricity crisis of 2000-2001, when generators operated below various capacity measures, but pleaded that constraints (unplanned outages, NOx regulations, etc.) reduced their effective capacity below these nominal measures. Given the complexity of operating a power plant, it was very difficult to determine whether the generators were withholding capacity, or in fact offering as much as they were capable of.

Despite the difficulty of operationalizing the test, I think it is something for regulators to attempt. There is a colorable case that the repo rate rise was exacerbated by market power, and given the importance of this market, this possibility should be investigated rigorously.

As an aside, the Donaldson model appeared only a few months before my Journal of Business article on market power manipulation. The two articles have a lot in common, despite the fact that they were developed totally independently, and seemingly involve completely unrelated markets (money vs. physical commodities). However, the core arguments are similar: economic frictions can periodically create market power in markets that are usually competitive.


October 3, 2019

Matt Stoller Turns Questions of Fact on the Contributions of Aaron Director into Questions of Motive, For Which the Stigler Center Should Be Ashamed

Filed under: Economics,Politics,Regulation — cpirrong @ 9:34 am

Hannah Arendt once wrote that “one of the greatest advantages of the totalitarian elite in the twenties and thirties was to turn any statement of fact into a question of motive.” This quote came to mind when reading Matt Stoller’s hit piece on Aaron Director on the Stigler Center’s Pro-Market blog.

Director was one of the major moving forces behind the Chicago revolution in antitrust scholarship in the 1950s and 1960s. Although he published little himself, through his teaching, and his interactions with other faculty in economics, law, and business at Chicago, Director challenged the consensus on antitrust, especially in areas like vertical restraints. The challenge that he inspired pretty much overturned this consensus, and it is fair to say that the Chicago approach became the replacement paradigm. A good portion of the industrial organization and antitrust scholarship of the past 50 years has been aimed at challenging the Chicago view, but nonetheless, many of its key insights remain regnant.

Stoller does not mount a serious attempt to critique Director’s actual contributions, or to explain them, as Sam Peltzman does in his two posts on Director. Rather than challenging Director and his followers on the facts, or on the analysis, Stoller instead questions Director’s motives. It is an attack by ad hominem.

In Stoller’s telling, for much of his life, Director was a good progressive, and a devotee of Henry Simons. As such, he was an antimonopolist who favored aggressive antitrust enforcement. But then Simons died, and Director “suddenly” converted to a right-wing pro-business fanatic in order to appease a major funder who according to Stoller was an “extreme right-wing[er]” and quasi-fascist:

Director suddenly decided that conservative ideas were compatible with corporatism after all. Monopolies, apparently, were always created by government. At this moment, Director broke with the conservative tradition and birthed neoliberalism, the anti-government, pro-monopoly philosophy that now dominates policymaking globally. Director convinced George Stigler and Milton Friedman of the new creed. Both had opposed corporate monopolies, but flipped to support Director’s new movement. The Chicago School was born.

Thus, Director was nothing but an intellectual Judas, who sold out his firm convictions for a few pieces of silver.

This begs so many questions it isn’t funny. Take Stoller’s premise as fact. How, pray tell, did Director convince such notoriously strong-minded people as Stigler and Friedman? Did he pay them off? No really–how did he persuade them?

And how did he persuade others, such as Bork, who was a major force in reshaping antitrust law? And how did the Chicago school antitrust/industrial organization ideas midwifed by Director have such a profound effect on the economics and legal academy outside of Hyde Park, and then the courts? Especially since they were initially so contrary to the professional consensus, and indeed attracted substantial (and often hysterical) opposition?

There must have been something to the ideas, eh? But not in Stoller’s telling. Instead, according to him, Harold Luhnow got his money’s worth by getting Director to turn from anti-monopolist to pro-monopolist, and somehow (mesmerism?) Director convinced myriad intellectuals (and judges) to go along.

The closest that Stoller comes to addressing any of the scholarship that Director inspired is a drive-by shooting on John McGee’s Journal of Law and Economics (1958) paper that contended that, contrary to the overwhelming conventional wisdom, Rockefeller’s Standard Oil did NOT engage in predatory pricing.

Stoller refers to a paper which disputes McGee’s findings. Fine. But he is presented with the problem that, as shown by Joshua Wright, McGee’s article had far less of an impact on academic and legal thinking on predatory pricing than the 1975 Areeda-Turner article. But no problem! Just turn this question of fact into one of motive: “It didn’t hurt [Areeda-Turner’s] motivations, of course, that they were both on the payroll of IBM, which was at that moment in a bitter series of antitrust lawsuits which included, you guessed it, predatory pricing claims.”


Stoller writes: “With support on the right and the left, courts soon accepted Director’s ideas, laundered through McGee, Turner, and Areeda.” Again: through what powers of mind control did Director get “liberal Democrats” from Harvard to launder his dirty ideas? Inquiring minds want to know!

But Stoller’s distortion of history doesn’t end here. His explanation for the rationality of predatory pricing goes like this:

Contra Director’s logic, predatory pricing is quite rational. A competitor to a corporate goliath can’t borrow an infinite amount of money to lose until prices come back, nor can a competitor just shut down until prices go back up. No bank would lend to a competitor of Standard Oil, just as no one today will lend to a retailer competing to lose money against Amazon. 

Wow. That logic sounds familiar! Yes, I remember now: in 1966 one of my thesis advisors, Lester Telser (a contemporary of McGee’s in the PhD program at Chicago), published an article titled “Cutthroat Competition and the Long Purse” which explored that very same logic.

It gets better.

Lester’s article was published in what Stoller portrays as the main vehicle for Director’s malign influence: the Journal of Law and Economics. Better yet, Telser thanks Director for his input. Better yet: Telser’s article was published in an issue honoring Director, on the occasion of his retirement from Chicago and editorship of the JLE.

Of course, you would never know this, if you read Stoller. Stoller also fails to mention that Sam Peltzman told an anecdote regarding Director and predatory pricing in a JLE article on “Aaron Director’s Influence on Antitrust Policy,” published as a sort of obituary at the time of Director’s death in 2004. In Sam’s telling, Director, in his typical Socratic style, led his students through an analysis of predatory pricing . . . in which he concluded that the defendant in a predatory pricing case “most likely was guilty as charged.”

It is particularly astounding that Stoller should overlook this anecdote, given that it was republished on the very same Pro-Market website. So it’s not like Stoller had to, I dunno, get onto JSTOR and do some real research on whom he was supposedly analyzing.

Stoller also fails to acknowledge that Director’s alleged ability to mesmerize did not even extend to nearby offices at the University of Chicago Law School: Richard Posner, for example, acknowledged that predation could occur.

Stoller also evidently has no clue as to how academia works. Provocative articles like McGee’s inevitably spur others to challenge them. And indeed, there have been numerous articles over the years that identify conditions in which predation can work. As it turns out, however, those conditions are much more fragile than Stoller lets on.

In sum, Stoller’s post on Director is an appalling piece of work. It fails to engage Director’s actual work, and relies on vicious ad hominem, attempting to discredit work that Stoller does not like by impugning the motives of the man who inspired it.

I don’t really give a damn about Matt Stoller. What I do find especially disgusting is that the Stigler Center at the Booth School of Business would lend its imprimatur to a piece that violates the fundamental norms and ethics of scholarship. If Pro-Market wanted to provide a critical view of Director, the Chicago School of antitrust has a lot of serious critics who could analyze his work and the work that he inspired. Instead, Pro-Market provides a platform for an intellectually disreputable attack on alleged motives, and one that provides no substantial evidence for its central claim, and which begs so many questions as to be self-refuting.

Appalling, and an affront to the long tradition of economics and law at Chicago.


September 20, 2019

The Simple (and Very Old) Economics of the Stock Market Data Pricing Controversy

Filed under: Economics,Exchanges,Regulation — cpirrong @ 4:20 pm

The most contentious battle in American securities markets right now is being waged over exchange pricing of data, in particular over proprietary order book feeds. The battle pits the exchanges against market users (e.g., HFT firms, institutional traders) with the latter claiming that the prices charged by the former border on the extortionate.

The critics actually have a very good point. The economics of the situation imply that the prices the competing exchanges charge are ABOVE the price a monopoly would charge.

No. Really.

So how could competing firms charge a supra-monopoly, let alone supra-competitive, price? The answer to this question is something pointed out by the first true mathematical economist, Augustin Cournot, in his Recherches sur les principes mathématiques de la théorie des richesses, published in 1838. In that book, Cournot laid out “the problem of complements.” Cournot showed that imperfectly competitive firms overprice complementary goods. (Cournot’s example involved zinc and copper, complements used in fixed proportion in the production of brass.)

The basic issue is that when goods are complements, if firm A raises its price, firm B, which produces a complement to A’s good, cannot steal sales from A by cutting its price (as would be the case if A’s and B’s goods were substitutes). This reduces the incentive to cut price, and actually provides an incentive to raise prices in order to capture a bigger piece of the surplus that is generated when the consumer buys both goods.
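The complements result is easy to verify with linear demand and zero costs (illustrative numbers, not a model of actual exchange pricing). With N symmetric sellers of perfect complements, the equilibrium bundle price works out to aN/(b(N+1)), which exceeds the integrated-monopoly price a/(2b) for any N greater than one, and rises toward the demand choke price as N grows.

```python
# Cournot's problem of complements with linear demand Q = a - b*P,
# where P is the price of the bundle of N perfect complements and
# marginal costs are zero (all parameters illustrative).
# Firm i's first-order condition  a - b*P - b*p_i = 0  gives, by
# symmetry, p_i = a / (b*(N+1)), so P = a*N / (b*(N+1)).

def bundle_price(n, a=12.0, b=1.0):
    """Equilibrium bundle price with n separate complement sellers."""
    return a * n / (b * (n + 1))

print(bundle_price(1))   # 6.0: integrated monopolist, a/(2b)
print(bundle_price(2))   # 8.0: two complementors, above the monopoly price
print(bundle_price(13))  # ~11.1: many "exchanges", near the choke price 12
```

Note that n = 1 recovers the ordinary monopoly price, so the un-integrated complementors really do price above what a single monopolist would charge, and the overpricing worsens as the number of sellers grows.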

This situation fits the stock market case perfectly. Execution services on US exchanges (e.g., NYSE, BATS) are substitutes, but data services are complements.

Consider an HFT firm. One source of profits for this firm is to exploit price discrepancies across exchanges. This requires having near immediate and simultaneous access to prices across all exchanges. Or consider a buyside firm that is trying to minimize execution costs by a clever order routing strategy. Optimizing the allocation of orders across exchanges requires knowing the order book on all the exchanges.

In other words, there are many market participants who have to collect the entire set (of exchange data). This makes the data provided by competing exchanges complements, which by the Cournot logic, forces prices above the competitive level, and indeed, above the monopoly level.

Furthermore, the problem becomes worse the larger the number of exchanges. This is a situation in which lower concentration leads to less competitive outcomes. (Robin Hanson made a similar point recently.)

This is yet another example of the only law that is never repealed: the law of unintended consequences. The intent of RegNMS was to increase competition in the execution of stock trades, and it has done a marvelous job of that. However, the unintended effect of this “fragmentation” (i.e., the increase in the number of execution venues and decline in concentration across exchanges) has been to create and exacerbate a complements problem in data.

A couple of final points. Perhaps one could make a second-best argument here: low execution fees and high data fees may be a good way of covering the fixed costs of operating exchanges (a la Ramsey pricing). Perhaps, but unproven.

What is the right regulatory response? Not clear. I addressed similar conundrums in my 2002 Market Macrostructure article. Natural monopoly-style/pricing regulation could mitigate the overpricing problem, but entails its own costs (e.g., undermining incentives to innovate). The issue is particularly challenging here because efficiency-enhancing competition on one dimension (execution) leads to inefficient problems-of-complements competition on another (data).

As I argued in Market Macrostructure, it really comes down to an issue of property rights. Should exchanges have exclusive ownership of their data? Should this ownership be attenuated in some way, such as limitation on prices, or a required pooling of data that would be sold by a monopolist, with revenues shared by the exchanges? Here is a case where a monopoly would actually improve outcomes.

Maybe that is the way to split the baby, politically. Exchanges would get rents, but efficiency would be improved. Not a first-best solution, but maybe a second best one, and one that could represent a Coasean bargain between exchanges and their customers. And perhaps the regulator–the SEC–could help facilitate and coordinate that deal.


Back to the Fed Future, or You Had One Job

In the Gilded Age, American financial crises (“panics,” in the lexicon of the day) tended to occur in the fall. Agriculture played a predominant role in the economy, and marketing of the new crop in the fall led to a spike in the demand for cash and credit. In that era, however, the supply of cash and credit was not particularly elastic, and these demand spikes sometimes turned into panics when supply did not (or could not) respond accordingly.

The entire point of the Fed, which was created in the aftermath of one of these fall panics (the Panic of 1907, which occurred in October), was to make currency supply more elastic and thereby reduce the potential for panics. In essence, the Fed had one job: lender of last resort to ensure a match of supply and demand for currency/credit, when the latter was quite volatile.

This week’s repospasm is redolent of those bygone days. Now, the spikes in demand for liquidity are not driven by the crop cycle, but by the tax and corporate reporting cycles. But they recur, and several have occurred in the autumn, or on the cusp thereof (this being the last week of summer).

One of my mantras in teaching about commodities is that spreads price bottlenecks. Bottlenecks can occur in the financial markets too. The periodic spikes in repo rates–not just this week, but in December, and March–relative to other short term rates scream “bottleneck.” Many candidates have been offered, but regardless of the ultimate source of the clog in the plumbing, the evidence from the repo market is that there are indeed clogs, and they recur periodically.

The Fed’s rather belated and stumbling response suggests that it is not fully prepared to respond to these bottlenecks, despite the fact that their regularity suggests that the clogs are chronic. As the saying goes, “you had one job . . . ” and the Fed fell down on this one.

And maybe the problem is that the Fed no longer just has one job, and it has shunted the job that was the reason for its creation to the back of the priority list. Nowadays, the Fed has statutory obligations to control employment and inflation, and views its main job as managing aggregate demand, rather than tending to the financial system’s plumbing.

This is concerning, as dislocations in short-term funding markets can destabilize the system. These markets are systemically important, and failure to ensure their smooth operation can result in crises–panics–that undermine the ability of the Fed to perform its prioritized macroeconomic management task.

One of the salutary developments post-crisis has been the reduced reliance of banks and investment banks on flighty short-term funding. The repo markets are far smaller than they were pre-2008, and the unsecured interbank market has all but disappeared (representing only about 0.3 percent of bank assets, as compared to around 6 percent in 2006). But this is not to say that these markets are unimportant, or that bottlenecks in these markets cannot have systemic consequences. For the want of a nail . . . .

Moreover, the post-crisis restructuring of the financial system and financial regulation has created new potential sources of liquidity shocks, namely a supersizing of potential demands for liquidity to pay variation margin. When you have a market shock (e.g., the oil price shock) occurring simultaneously with the other sources of increased demand for liquidity, the bottlenecks can have very perverse consequences. We should be thankful that the shock wasn’t a Big One, like October, 1987.

Hopefully this week’s tumult will rejuvenate the Fed’s focus on mitigating bottlenecks in funding markets. Maybe the Fed doesn’t have just one job now, but this is an important job and is one that it should be able to do in a fairly routine fashion. After all, that job is what it was created to perform. So perform it.


September 17, 2019

Funding Market Tremors: Today May Not Have Been “The Big One,” But It Was Bad Enough

The primary reason for my deep skepticism about the wisdom of clearing mandates was liquidity risk. As I said repeatedly, in order to reduce counterparty risk, clearing necessarily increased liquidity risk through the variation margining mechanism. Further, it was–and is–my opinion that liquidity risk is a far graver systemic concern than counterparty risk.

A major liquidity event has occurred in the last couple of days: rates in the repurchase market–the major source of short term funding for vast amounts of trading activity–shot up to levels (around 5 percent) nearly double the Fed’s target ceiling for that rate. Some trades took place at far higher rates than that (e.g., 9.25 percent).

Market participants have advanced several explanations, including big cash demands due to corporate tax payments coming due. Izabella Kaminska at FT Alphaville offered this provocative alternative, which resonates with my clearing story: the large price movements in oil and fixed income markets in the aftermath of the attack on the Saudi oil facilities resulted in large margin calls in futures and cleared OTC markets that increased stresses on the funding markets.

To which one might say: I sure as hell hope that’s not it, because although there was a lot of price action yesterday, it wasn’t The Big One. (The fact that Fred Sanford’s palpitations occurred because he couldn’t get his hands on cash makes that bit particularly apropos!)

I did some quick back-of-the-envelope calculations. WTI and Brent variation margin flows (futures and options) were on the order of $35 billion. Treasuries on CME maybe $10 billion. S&P futures, about $1 billion. About $2 billion on Eurodollar futures.

The Eurodollar numbers can help give a rough idea of margin flows on cleared interest rate swaps. Eurodollar futures open interest is about $12 trillion. Cleared OTC notional volume (not just USD, but all IRS) is around $80 trillion. But $1mm in notional of a 5 year swap is equivalent to roughly 20 Eurodollar futures, i.e., $20mm of futures notional. So, as a rough estimate, variation margin flows in the cleared IRS market are on the order of 100x those on Eurodollars. That represents a non-trivial $200 billion.
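The back-of-the-envelope arithmetic can be reproduced in a few lines (using the rough figures above; none of these are precise market data):

```python
# Reproducing the back-of-the-envelope variation margin estimate.
# All inputs are the rough guesses from the text, not market data.
ed_open_interest = 12e12      # Eurodollar futures open interest, $
cleared_irs_notional = 80e12  # cleared OTC IRS, all currencies, $
ed_equiv = 20                 # $1mm of 5y swap ~ 20 ED futures ($20mm)

# Duration-adjusted scale of IRS margin flows relative to Eurodollars
scale = cleared_irs_notional * ed_equiv / ed_open_interest
print(round(scale))           # 133, i.e., "on the order of 100x"

# Variation margin flows, $bn
oil, treasuries, sp, ed = 35, 10, 1, 2
irs = 100 * ed                # apply the round 100x figure: $200bn
total = oil + treasuries + sp + ed + irs
print(total)                  # 248: gross flows on the order of $300bn
```

The exact ratio comes out closer to 133x than 100x; either way, summing the pieces before any offsets lands in the high $200s of billions, consistent with the "on the order of $300 billion" figure below.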

Yes, there are potentials for offsets, so these numbers are not additive. For example, a firm might have offsetting positions in EDF and cleared IRS. Or be short oil and long Treasuries. But variation margin flows on the order of $300 billion are not unrealistic. And since market moves were relatively large yesterday, that represents an increment over the typical day.

So we are talking real money, which could certainly contribute to an increased demand for liquidity. But again, yesterday was not remotely as epic a day as one could readily imagine happening.

A couple of points deserve emphasis. The first is that perhaps it was coincidence or bad luck, but the big variation margin flows coincided with other sources of increased demand for liquidity. But hey, stuff happens, and sometimes stuff happens all at once. The system has to be able to withstand such simultaneous stuff.

The second is related, and very concerning. The spikes in rates observed periodically in the repo market (not just here, but notoriously in China) suggest that this market can go non-linear. Thus, even if the increased funding needs caused by the post-Abqaiq fallout weren’t The Big One, in a non-linear market, even modest increases in funding needs can have huge impacts on funding costs.

This highlights another concern: inter-market feedback. A shock in one market (e.g., crude) puts stress on the funding market, leading to spikes in repo rates. These spikes can feed back into prices in other markets: for example, the inability to fund positions can cause fire sales, which cause big price moves, which trigger big variation margin flows, which put further stress on the funding markets.

Yeah. This is what I was talking about.

Today’s events nicely illustrate another concern I raised years ago. Clearing/margining make markets more tightly coupled: the need to meet margin calls within hours increases the potential stress on the funding markets. As I tell my classes, unlike in the pre-Frankendodd days, there is no “fuck you” option when your counterparty calls for margin. You don’t pay, you are in default.

This tight coupling makes the market more vulnerable to operational failings. On Black Monday, 1987, for example, the FedWire went down a couple of times and this contributed to the chaos and the potential for catastrophic failure.

And guess what? There was a (Fed-related!) operational problem today. The NY Fed announced that it would hold a repo operation to supply $75 billion of liquidity . . . then had to cancel it due to “technical difficulties.”

I hate it when that happens! But that’s exactly the point: It happens. And the corollary is: when it happens, it happens at the worst time.

The WSJ article also contains other sobering information. Specifically, post-crisis regulatory “reforms” have made the funding markets more rigid/less-flexible and supple. This would tend to exacerbate non-linearities in the market.

We’re from the government and we’re here to help you! The law of unintended (but predictable) consequences strikes again.

Hopefully things will normalize quickly. But the events of the last two days should be a serious wake-up call. The funding markets going non-linear is the biggest systemic risk. By far. And to the extent that regulatory changes–such as mandated clearing–have increased the potential for demand surges in those markets, and have reduced the ability of those markets to respond to those surges, in their attempt to reduce systemic risks, they have increased them.

I have often been asked what would cause the next financial crisis. My answer has always been: the regulations intended to prevent a recurrence of the last one. Today may be a case in point.


September 14, 2019

Bakkt in the (Crypto) Saddle

ICE is on the verge of launching Bitcoin futures. The official start date is 23 September.

The ICE contract is distinctive in a couple of ways.

First, it is a delivery-settled contract. Indeed, this feature is what made the ICE product so long in coming. The exchange had to set up a depository, the Bakkt Warehouse. This required careful infrastructure design, and jumping through regulatory hoops to establish the Bakkt Trust Company and get approval from the NY Department of Financial Services.

Second, the structure of the contracts offered is similar to that of the London Metal Exchange. There are daily contracts extending 70 days into the future, as well as more conventional monthly contracts. (LME offers daily contracts going out three months, then 3-, 15-, and 27-month contracts). The daily contracts settle two days after expiration, again similar to LME.

The whole initiative is quite fascinating, as it represents a dual competitive strategy: Bakkt is simultaneously competing in the futures space (against CME in particular), and against spot crypto exchanges.

What are its prospects? I would have to say that Bakkt is a better mousetrap.

It certainly offers many advantages as a spot platform over the plethora of existing Bitcoin/crypto exchanges. These advantages include ICE’s reputation, the creation of a warehouse with substantial capital backing, and regulatory protections. Here is a case in which regulation can be a feature, not a bug.

Furthermore, for decades–over a quarter-century, in fact–I have argued that physical delivery is a far superior mechanism to cash settlement for price discovery and for ensuring convergence. The myriad issues that were uncovered in natural gas when rocks were overturned in the post-Enron era, the chronic controversies over Platts windows, and the IBORs have demonstrated the frailty, and the vulnerability to manipulation, of cash settlement mechanisms.

Crypto is somewhat different–or at least, has the potential to be–because the CME’s cash settlement mechanism is based on prices determined on several BTC exchanges, in much the same way as the S&P 500 settlement mechanism is based on prices determined at centralized auction markets.

But the crypto exchanges are not the NYSE or Nasdaq. They are a rather dodgy lot, and there is some evidence of manipulation and inflated volumes on these exchanges.

It’s also something of a puzzle that so many crypto exchanges survive. The centripetal forces of liquidity tend to cause trading in a particular instrument to gravitate to a single platform. The fact that this hasn’t happened in crypto is anomalous, and suggests that normal economic forces are not operating in this market. This raises some concerns.

Bakkt potentially represents a double-barrel threat to CME. Not only is it competing in futures; if it attracts a considerable amount of spot trading activity (due to a superior trading, clearing, settlement, and custodial platform, reputational capital, and regulatory safeguards), it will undermine the reliability of CME’s cash settlement mechanism by drawing volume away from the markets CME uses to determine final settlement prices. This could make those market prices less reliable, and more subject to manipulation. Indeed, some–and maybe all–of these exchanges could disappear if ICE’s cash market dominates. CME would be up a creek then.

That said, one of the lessons of inter-exchange competition is that the best mousetrap doesn’t always win. In particular, CME has already established liquidity in the futures market, and as even as formidable a competitor as Eurex found out in Treasuries in the early oughties, it is difficult to induce a shift of liquidity to a competitor.

There are differences between crypto and other more traditional financial products (cash and derivatives) that may make that liquidity-based first mover advantage less decisive. For one thing, as I noted earlier, heretofore cash crypto has proved an exception to the winner-takes-all rule. Maybe the same will hold true for crypto futures: since I don’t understand why cash has been an exception to the rule, I’d be reluctant to say that futures won’t be (although CBOE’s exit suggests it might). For another, the complementarity between cash and futures in this case (which ICE is cleverly exploiting in its LME-like contract structure) could prove decisive. If ICE can get traction in the fragmented cash market, that would bode well for its prospects in futures.

Entry into a derivatives or cash market in competition with an incumbent is always a highly leveraged bet. Odds are that you fail, but if you win it can prove enormously lucrative. That’s essentially the bet that ICE is taking in BTC.

The ICE/Bakkt initiative will prove to be a fascinating case study in inter-exchange competition. Crypto is sufficiently distinctive, and the double-barrel ICE initiative sufficiently innovative, that the traditional betting form (go with the incumbent) could well fail. I will watch with interest.


August 9, 2019

Damn That Parson Bayes and His Cursed Theorem: Red Flagging Red Flag Rules

Filed under: Guns,Politics,Regulation — cpirrong @ 3:20 pm

In the aftermath of the El Paso and Dayton shootings, “red flag” rules are all the rage. Identify people who are at high risk of committing such atrocities, and prevent them from buying weapons.

Most of the arguments in favor of this rely on statements like “many mass shooters have characteristic X (e.g., mental illness), so let’s prevent those with characteristic X from buying guns.” As appealing as these arguments sound, they founder due to a failure to understand fundamental probability concepts which imply that for extremely rare events like mass shootings, red flags are extremely unreliable.

Most of the arguments in favor of red flags rely on estimates of P(X|M), i.e., the probability that someone who committed a mass murder (“M“) had characteristic X. For example, “70 percent of mass shooters present evidence of mental illness.” Or Y percent play violent video games or post racist rants online.

But what we really need to know in order to implement red flags that do not stigmatize, and deny the rights of, people who present a low risk of committing a mass shooting is P(M|X): “what is the probability that someone with characteristic X will commit a mass shooting?” Although most people argue as if P(X|M) and P(M|X) are interchangeable, they are not, as Thomas Bayes showed in the 18th century, when he derived what is now called Bayes’ Theorem.

As Bayes showed, P(M|X)=P(X|M)P(M)/P(X) where P(M) is the unconditional probability someone is a mass shooter, and P(X) is the unconditional probability that someone has characteristic X.

The problem with attempting to determine whether someone with X poses a risk is that mass shooters are extremely rare, and hence P(M) is extremely small.

USA Today estimated there were 270-odd mass shootings between 2005 and 2017. A Michael Bloomberg-funded anti-gun group counts 110. Given a population of around 300 million, even using the higher number, a rough estimate of P(M) is 9e-7: a 9 with six zeros in front of it. Therefore, even if P(X|M)=1 (i.e., all mass shooters share some characteristic X), for any characteristic X that occurs fairly frequently in the population, P(M|X) is extremely small.

Consider a characteristic for which there is fairly good data on P(X): schizophrenia. It is estimated that 1 percent of the population is schizophrenic. Plugging in .01 for P(X) gives a value of P(M|X) of 9e-5, or about 1 out of 10,000. Meaning that the likelihood a random schizophrenic will commit a mass shooting is about .009 percent.

This actually overstates matters, because P(X|M)<1. Indeed, since mass shootings are in fact quite heterogeneous, P(X|M) is likely to be far less than one for most characteristics.

Things get even worse if one broadens the scope of the characteristic used to define the red flag. If instead of schizophrenia one uses serious mental illness, by some measures P(X)=.2. Increasing the denominator by a factor of 20 reduces P(M|X) by a factor of 20. So instead of a probability of .009 percent, the probability is .00045 percent.
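The arithmetic above can be checked in a few lines of Python. This is a minimal sketch using the rough figures from the text (270 shooters, 300 million people), and it takes the red flag’s best case, P(X|M)=1:

```python
def p_m_given_x(p_x_given_m, p_m, p_x):
    """Bayes' Theorem: P(M|X) = P(X|M) * P(M) / P(X)."""
    return p_x_given_m * p_m / p_x

# Rough unconditional probability of being a mass shooter:
# ~270 shooters in a population of ~300 million.
p_m = 270 / 300_000_000  # 9e-7

# Best case for the red flag: every mass shooter has characteristic X.
p_x_given_m = 1.0

# Schizophrenia: ~1 percent of the population.
print(p_m_given_x(p_x_given_m, p_m, 0.01))  # ≈ 9e-05, about .009 percent

# Any serious mental illness: ~20 percent of the population.
print(p_m_given_x(p_x_given_m, p_m, 0.20))  # ≈ 4.5e-06, about .00045 percent
```

Because P(X|M)=1 is an overstatement, these tiny probabilities are themselves upper bounds.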

And again, that is an exaggeration because it assumes P(X|M)=1.

Meaning that putting a red flag on schizophrenics or those who have experienced some mental illness will be vastly overinclusive.

Of course, life is a matter of trade-offs. One must weigh the costs imposed on those who are wrongly stigmatized (“false positives”) with the benefit of reducing mass shootings by imposing restrictions based on an overinclusive, but at least somewhat informative signal (i.e., a signal with P(X|M)>0).

For some there is no trade-off at all. For those, primarily on the left, who believe that guns are anathema and have no benefit whatsoever, even a 99.99955 percent false positive rate is not at all costly. However, for the very large number of Americans who do think bearing arms is beneficial, these false positives come at a high cost.

That’s where the debate should really focus: the rate of false positives and the cost of those false positives vs. the benefits of true positives (which would represent mass shootings avoided). What Bayes’ Theorem implies is that for an act that someone is extremely unlikely to commit, that false positive rate is likely to be extremely high. It also implies that debating in terms of P(X|M) provides very little insight. P(M) is small, and for any fairly common characteristic, P(X) is fairly large, so P(X|M) has relatively little impact on the rate of false positives.

Again, what Bayes’ Theorem tells us is that for a rare event like a mass shooting, vastly more innocent people than true risks will be red flagged. The costs of restricting those who pose no risk must be weighed against the benefits of modestly reducing the risk of a very rare event. Further, it must be recognized that implementing red flag rules is costly, and these costs include the invasions of privacy that such rules inevitably entail. Yet further, red flag rules are certain to be abused by those with a grudge. And yet further, many of those with characteristic X will escape detection, or will be able to evade the legal restrictions (and indeed have a high motivation to do so).

In the aftermath of mass shootings, there is a hue and cry to do something. The hard lesson taught by Parson Bayes is that there is not a lot we can do. Or put more precisely, those things that we can do will inevitably stigmatize and restrict vastly more innocent people than constrain malign ones.


August 3, 2019

Renewables Are Expensive Because You Can’t Stick ‘Em Where the Sun Don’t Shine (or the Wind Don’t Blow)

Filed under: Climate Change,Economics,Energy,Politics,Regulation — cpirrong @ 4:45 pm

I’m sure you’ve read articles claiming that the cost of renewable electricity generation is approaching (or is even lower than) the cost of traditional thermal generation. I am deeply skeptical of these claims even when evaluated on their own terms (which focus on generation costs alone), but find them particularly misleading because they ignore other costs attributable to the facts that renewables are intermittent and diffuse, and that the siting of renewable generation is sharply constrained: renewables are energy-limited resources, the spatial distribution of the energy is dictated by nature, and that distribution typically is not closely related to the distribution of load.

In other words, renewables are costly because you can’t stick them where the sun don’t shine (or the wind don’t blow).

Case in point: Australia. As even Bloomberg (a tiresome renewables fanzine) reports:

Australia’s financing of cleaner power is slowing because the country’s aging grid isn’t being upgraded quick enough to accept new, intermittent generation and transport it efficiently to demand centers.

Although Bloomberg attempts to blame an old, creaky transmission system, this is misleading in the extreme. It would be far cheaper to upgrade Australia’s transmission system to accommodate thermal generation than it will be to build transmission to increase the fraction of generation coming from renewables.

This is true for at least a couple of reasons.

First, the energy-limited nature of renewables means that you have to site them where the energy is available–sunny or windy places. This imposes a constraint on the location of generation resources that is not relevant for thermal generation. With traditional fossil-fueled generation, you have more flexibility in trading off transmission costs against generation costs (including the cost of bringing fuel to plants) than is the case with renewables. This flexibility means that, all else (notably the spatial distribution of load) equal, transmission costs are lower with thermal generation than with renewable power.

Second, the intermittent and inherently more volatile nature of renewables generation increases the variance in the spatial distribution of generation. This variability necessarily requires more transmission capacity per unit of load, which in turn implies a lower average rate of utilization of transmission resources.

The basic idea here can be illustrated relatively simply. Consider a system with two generation resources. One is highly volatile (e.g., a renewable resource). The other is controllable. There is one load location. The transmission capacity from the volatile location to load must be high enough to carry the power when output is high (because the energy input is high due to the vicissitudes of sun or wind). The transmission capacity from the location with controllable generation must also be high enough to transmit enough power to fill the gap left when the renewable output is low.

Note that when renewable output is high, controllable output will be low, and the transmission lines from the latter will be lightly utilized. When renewable output is low, the lines serving the renewable resource will be lightly utilized.

It’s possible to expand the example to include multiple variable, energy limited, but imperfectly correlated renewables resources, but the outcome is the same. You need more transmission capacity to deal with the spatial volatility in generation, and given load, higher capacity translates into lower average capacity utilization.
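The two-generator example can be made concrete with a toy calculation (all numbers hypothetical): a constant load, a swinging renewable resource, and a controllable resource filling the gap, with each line sized for the largest flow it ever carries.

```python
# Hypothetical two-generator, one-load illustration of the argument above.
# Load is a constant 100 MW; renewable output swings with the weather;
# controllable (thermal) output fills whatever gap remains.
LOAD = 100.0
renewable_output = [0.0, 20.0, 50.0, 80.0, 100.0, 60.0, 10.0]  # made-up hourly MW
controllable_output = [LOAD - r for r in renewable_output]

# Each line must be sized for the largest flow it ever carries.
renewable_line_capacity = max(renewable_output)        # 100 MW
controllable_line_capacity = max(controllable_output)  # 100 MW
total_capacity = renewable_line_capacity + controllable_line_capacity  # 200 MW

# But total flow at any hour is only 100 MW, so average utilization of the
# transmission system is 50 percent -- half the capacity sits idle.
avg_utilization = LOAD / total_capacity
print(avg_utilization)  # 0.5
```

With a single controllable generator serving the same load, 100 MW of line capacity would suffice and would run at full utilization; the renewable mix doubles the required capacity for the same delivered energy.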

Thus, the problem that Australia is confronting isn’t a function of an old grid: it arises from the fact that increased reliance on renewables requires investment in new transmission capacity even in a system where transmission is optimized relative to (thermal) generation and load.

The need to maintain relatively underutilized transmission capacity to deal with the inherent volatility of renewables generation is mirrored by the need to maintain underutilized thermal generation capacity:

While new clean energy projects struggle to gain access to a congested grid, aging coal and gas-fired generators are being kept running for longer to maintain system stability. AGL Energy Ltd. said Friday it would delay the planned closure of its Liddell and Torrens A plants, both around 50 years old, to help the national energy market cope with peak summer demand, which has seen blackouts in parts of southeastern Australia in recent years.

Who knew?

Yet the renewables industry/lobby continues to flog the dogma that they will inevitably be more efficient:

Despite the challenges facing the industry, it’s not all doom and gloom. A number of coal-fired plants will be retired over the next decade and they will only be replaced by the cheapest cost of energy, which is renewables, Clean Energy Finance Corp. Chief Executive Ian Learmonth said in an interview.
“I’m hoping once some of these issues around the grid and regulations are settled that we’ll see another significant uptick in the renewable energy pipeline,” he said.

What costs is Mr. Learmonth including in his assertion that renewables are the “cheapest” source of energy? His statement that settling “issues around the grid” will lead to increased renewables investment suggests that he is ignoring crucial costs, because settling these issues doesn’t come for free.

It’s not as if the transmission issue is unique to Australia. It is present in every locale that has force-fed renewables. Germany is a prominent example. Wind energy is abundant in the North Sea, but believe it or not, there aren’t a lot of electricity consumers there (despite my ardent wish that Merkel and her ilk get into the sea). Major sources of load are in central and southern Germany, so bringing North Sea wind power to load requires massive transmission investments, which inevitably are not just costly, but politically difficult (Der NIMBY, anyone?). These difficulties inflate the cost.

Renewables boosterism operates in an atmosphere of serious unreality because it consistently glosses over–or ignores altogether–the costs arising from intermittency, diffuseness, the energy-limited nature of wind and solar, and the caprices of nature that cause a mismatch between where the energy exists and where it is needed. When these facts are considered, sticking renewables where the sun don’t shine makes perfect sense.


July 7, 2019

Spot Month Limits: Necessary, But Not Sufficient, to Prevent Market Power Manipulation

Filed under: Commodities,Derivatives,Economics,Energy,Exchanges,Regulation — cpirrong @ 6:50 pm

In my recent post on position limits, I suggested that at most spot month limits are justified as a means of constraining market power manipulation. It is important to note, however, that setting spot month limits at levels that approximate stocks in deliverable position may not be sufficient to prevent the exercise of market power during the delivery period, with the resultant deleterious effects on prices.

The basic motivation for position limits equal to stocks is predicated on a model of manipulation that makes particular assumptions about market participants’ beliefs. I pointed out the importance of this assumption in my 1993 Journal of Business article on market power manipulation. In one model of that paper, I assume that market participants believe that a large long who takes delivery will resell what is delivered, and will not consume it. In the other model, market participants believe that the large long will consume (or otherwise withhold from the market) some fraction of what shorts deliver to him.

Under the first set of beliefs, it is indeed a necessary condition for profitable manipulation that a long’s position exceed inventories in deliverable position. It is this kind of manipulation that spot month limits pegged to inventories can prevent.

However, under the second set of beliefs, a large long with a position smaller than inventories in deliverable position can exercise market power and inflate prices. Spot month limits based on inventories cannot prevent this type of manipulation.

I recently completed a paper that incorporates this insight into a standard signalling model. In the model, there are two kinds of longs: (a) “strong stoppers,” who have a real demand for the deliverable commodity, place a higher value on it than others, and who will consume at least some of what is delivered to them, and (b) manipulators, who have no real demand for the deliverable and who will resell what is delivered. Shorts do not know which type is standing for delivery.

In the model, a long submits an offer to sell his futures position at a specified price prior to expiration. The strong stopper submits an offer above the price that would prevail in the absence of a strong stopper (reflecting his high valuation of the commodity). I show that under different out-of-equilibrium beliefs there is a pooling equilibrium in which the manipulator mimics a strong stopper, and submits a high offer price at which he is willing to liquidate.

In the pooling equilibrium, the shorts deliver a quantity that exceeds the quantity that they would deliver if they knew the long was a strong stopper: this reflects the fact that they realize that the manipulator will resell what is delivered, and the shorts can repurchase it at a depressed price. However, in this equilibrium the manipulator sells some of his futures position at a supercompetitive price, and earns a supercompetitive profit even though he has to “bury the corpse” of a manipulation.

Crucially, the manipulation can succeed even if the long’s position is smaller than inventories, as long as the flow supply curve is upward sloping at such quantities. The flow supply curve can be upward sloping merely due to the theory of storage: an anticipated depletion of stocks increases the value of the remaining inventory. Therefore, if shorts anticipate a positive probability that a long will consume what is delivered, the theory of storage implies that the supply of deliveries is an increasing function of the futures price at expiration.
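The role of the upward-sloping flow supply curve can be illustrated with a toy calculation (this is not the paper’s signalling model, and every number is hypothetical): if calling forth deliveries requires a price premium over the competitive level, a long taking delivery on a position well below inventories still elevates the delivery-period price.

```python
# Toy illustration: an upward-sloping flow supply curve means even a "small"
# demand for deliveries moves the delivery-period price.
# Assume deliveries supplied at price p are q(p) = SLOPE * (p - P_COMPETITIVE),
# i.e., at the competitive price no net stock depletion is supplied.
P_COMPETITIVE = 50.0   # $/unit, price absent stock depletion (hypothetical)
SLOPE = 2.0            # units of deliveries per $1 of price rise (hypothetical)
INVENTORIES = 1000.0   # stocks in deliverable position (hypothetical)

def delivery_price(quantity_demanded):
    """Price needed to call forth a given quantity of deliveries."""
    return P_COMPETITIVE + quantity_demanded / SLOPE

# A long standing for delivery on a position far below inventories...
position = 100.0
assert position < INVENTORIES
# ...still pushes the delivery-period price above the competitive level.
print(delivery_price(position))  # 100.0, double the competitive price here
```

The point is qualitative: as long as the flow supply curve slopes up at quantities below inventories, "position exceeds deliverable stocks" is not a necessary condition for price artificiality.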

Thus, a futures position in excess of inventories in deliverable position may be a sufficient condition to exercise market power, but it is not a necessary one. If shorts are uncertain about a long’s motive for taking delivery, and some longs are strong stoppers who will consume what is delivered and thereby deplete inventories, manipulators can mimic strong stoppers and extract a supercompetitive price even with a position smaller than inventories.

One implication of this analysis is that reliance on spot month position limits is not sufficient to prevent market power manipulations. Additional measures, what I have called “ex post deterrence” since my 1996 Washington and Lee Law Review article, are also necessary. In my earlier work I argued that they are necessary because it was unlikely that position limits could adjust to reflect inevitable changes in inventories. This new paper shows that even if limits could be adjusted in that way, they would be inadequate. Market power manipulation facilitated by fraud (i.e., falsely pretending to have a real demand for the commodity) can occur even if position limits prevent a long from obtaining a position during the delivery period that exceeds stocks in deliverable position.

This analysis also implies that equating “deliverable supply” with “inventory in deliverable position” is wrong. The supply available at the competitive price may be smaller than inventories–and indeed, far smaller than inventories–when shorts do not know the “type” of long standing for delivery.

The traditional model of deliverable supply is predicated on a view of manipulation shaped by the big corners of history, in which there was little doubt about the motivations of a large long. But as the court in the Cargill case noted, “[t]he methods and techniques of manipulation are limited only by the ingenuity of man.” Exploiting shorts’ ignorance about his motive for taking delivery, a long can ingeniously exercise market power even with a position smaller than deliverable supply.

This is a possibility that is only dimly recognized in the existing regulatory structure in the US. Most importantly, it implies that a reliance on preventative measures like position limits alone is inadequate to reduce efficiently the frequency and severity of market power manipulation. Ex post measures are required as well.


June 19, 2019

Can You Spare Me a Zuck Buck? Spare me.

Filed under: Blockchain,Cryptocurrency,Economics,Politics,Regulation — cpirrong @ 3:08 pm

To huge fanfare, Facebook announced the impending release of a new cryptocurrency, “Libra.” Except it isn’t–a crypto, that is. Whereas real cryptocurrencies are decentralized, anonymous, unpermissioned, and lack trusted intermediaries, Libra is centralized, permissioned, non-anonymous, and chock-full o’ intermediaries in addition to Facebook. It doesn’t really utilize a blockchain either.

Other than that . . .

For the best (IMO) take on the “Zuck Buck”, I heartily recommend FT Alphaville’s extended take–and takedown. I’ll just add a few comments.

First, when it comes to finance, there is little (if anything) new under the sun, and that is clearly true of Libra. The Alphaville stories provide several historical precedents, to which I’ll add another. It is basically like the pre-National Bank Act banking system, in which banks issued bank notes that circulated as hand-to-hand media of exchange, and which were theoretically convertible into currency (gold, prior to the Civil War) on demand. Libra is functionally equivalent to such bank notes, with the main distinction that it is represented by bytes rather than pieces of paper.

Facebook attempts to allay concerns about such a system by requiring 100 percent backing by bank deposits or low-default-risk government bonds, but as historical experience (some as recent as 2008) demonstrates, although such systems are less subject to runs than liabilities issued by entities that invest the proceeds in illiquid assets, they are not necessarily run-proof.

Furthermore, the economic model here isn’t that different from the 19th century bank model, because the issuer can profit by investing the proceeds from the issue of the currency in interest-bearing assets, and pocketing the interest. Those buying the currency forgo interest income, and presumably are willing to do so because it reduces the costs of engaging in various kinds of transactions.

This type of system faces different kinds of difficulties in low and high interest rate environments. In high rate environments, the opportunity cost of holding the currency is high, which leads to lower quantity demanded. In low rate environments, the revenue stream may be insufficient to cover the costs incurred by the intermediaries. This creates an incentive for asset substitution, i.e., to allow backing the currency with higher risk assets (with higher yields) thereby increasing insolvency and run risks.
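A back-of-envelope sketch makes the low-rate squeeze concrete. Every number here is made up for illustration; the logic is just float income versus fixed operating costs:

```python
# Back-of-envelope arithmetic for the float-income model (all numbers hypothetical).
def annual_float_income(float_outstanding, yield_bps):
    """Interest the issuer earns on reserve assets, with yield in basis points."""
    return float_outstanding * yield_bps // 10_000

FLOAT = 10_000_000_000           # $10bn of currency outstanding (made up)
OPERATING_COSTS = 150_000_000    # $150m/yr to run the network (made up)

# High-rate environment (5%): float income easily covers costs...
print(annual_float_income(FLOAT, 500) - OPERATING_COSTS)  # 350000000 (profit)
# ...but holders forgo that same 5%, so quantity demanded falls.

# Low-rate environment (1%): holding the currency is nearly free for users,
# but the issuer runs at a loss -- hence the temptation to reach for yield
# by backing the currency with riskier assets.
print(annual_float_income(FLOAT, 100) - OPERATING_COSTS)  # -50000000 (loss)
```

The asymmetry is the point: the rate environment that makes the currency attractive to hold is exactly the one in which the issuer’s revenue model is weakest.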

I note in passing that low interest rates destroyed the traditional FCM model, which relied on interest income from customer margins as a major revenue stream (as Facebook is proposing here). Ask Jon Corzine about that, and look to the experience of MF Global.

Why introduce this in a low interest rate environment? Maybe this is a kind of loss-leader strategy. The opportunity cost of holding Libra is low now (given low rates), so maybe a lot of people will buy in now. Even though the benefits to the issuers/intermediaries may be low now (because the interest income is low), they may be counting on customer stickiness once there is widespread adoption. That is, those who hold Libra when the cost of doing so is low may stick around even when the cost rises substantially: Facebook and its partners in this endeavor may be counting on some sort of switching cost, or some behavioral irrationality, to reduce the interest-rate sensitivity of demand for Libra.

Good luck with that. (For another example of nothing new under the sun, read up on disintermediation of traditional banks when interest bearing money market mutual funds came on the scene.)

I would also suggest that Libra has some disadvantages as a medium of exchange. For one thing, since its assets will be held in multiple currencies, it creates currency risk for virtually everyone who uses it. For another, it involves additional costs to move from fiat into Libra and from Libra into fiat. This reduces the value of Libra as a medium of exchange, because of the resulting cost differential between within-network and off-network uses.

This last point relates to something else in the Libra white paper, namely, the claims that the currency will be a boon to the “unbanked.” This makes zero sense.

The reason that some people don’t have bank accounts is that the cost of servicing them (reflected in the fees that banks charge) is above those people’s willingness/ability to pay for those services. There is no reason to believe that Libra reduces the cost of servicing the currently unbanked. Furthermore, the value of the services provided is likely to be lower, and substantially so, because inter alia (a) Libra lacks the brick-and-mortar facilities that low income people need for check cashing/depositing and cash depositing, (b) the network of people with whom they can transact is restricted, and (c) there is currency risk. Relatedly, it’s hard to see how one can move funds into or out of Libra without having access to banking services. I see the unbanked rhetoric as mere SJW eyewash attempting to make this look like some progressive social project.

The arrogance of Facebook is also rather astounding. Again, this is not crypto–it is banking. Yet Facebook presumes that it can do this without the panoply of licenses that banks must have, and without being subject to the same kinds of regulation as banks.

Because why? Trust me? Suuuurrreee, Mark.

Along these lines, note that the most benign interpretation behind Libra is that it is a narrow bank (100 percent reserve banking). But remember the Fed recently denied approval to TNB (“The Narrow Bank”) USA NA even though it was only going to offer deposits to “the most financially secure institutions” and explicitly eschewed providing retail banking services. Yet Marky et al expect the Fed (not to mention banking regulators in every other jurisdiction on the planet) to stand aside and let Facebook offer maybe (but maybe not) narrow banking services (with added currency risk!) to the great unwashed?

On what planet?

Note the furious government reactions to this, not just in the US but in Europe. Zuckerberg et al were totally delusional if they expected anything different, especially in light of Facebook’s serial privacy, free-speech, and antitrust controversies.

In sum, in my opinion Libra faces serious economic and political/regulatory obstacles. Having politicians and regulators hate you isn’t bad per se in my book–it can actually represent an endorsement! But the economics of this are incredibly dodgy. My skepticism is only increased by the misleading packaging (crypto! a boon to the unbanked!) and the congenitally misleading packager.

