Streetwise Professor

August 28, 2011

A Tale of Two Contracts

Filed under: Commodities,Derivatives,Economics,Energy,Exchanges,Regulation — The Professor @ 2:27 pm

One of the leading crude oil futures contracts–CME Group’s WTI–has been the subject of a drumbeat of criticism for months due to the divergence of WTI prices in Cushing from prices at the Gulf, and from the price of the other main oil benchmark–Brent.  But whereas WTI’s problem is one of logistics that is in the process of being addressed, Brent’s issues are more fundamental ones related to adequate supply, and less amenable to correction.

Indeed, WTI’s “problem” is actually the kind an exchange would like to have.  The divergence between WTI prices in the Midcontinent and waterborne crude prices reflects a surge of production in Canada and North Dakota.  Pipelines are currently lacking to ship this crude to the Gulf of Mexico, and Midcon refineries are running close to full capacity, meaning that the additional supply is backing up in Cushing and depressing prices.

But the yawning gap between the Cushing price and prices at the Gulf is sending a signal that more transportation capacity is needed, and the market is responding with alacrity.  If only the regulators were similarly speedy.

Three companies are at various stages of planning new capacity to ship oil from the Midcontinent to the Gulf.  A fourth is looking to redirect flows from Cushing to Houston.  The addition to capacity would total 1.4 mm bbl/day if all were completed.  Since that’s more capacity than is needed, not all the projects will go to completion.  But regardless, over the medium term it is likely that new pipelines will break the bottleneck and crush the differentials between WTI and LLS, MARS, and Brent.

The main obstacles that the WTI contracts face are enviros and Coasean challenges.  Environmentalists have been trying to put every regulatory roadblock possible in the way of one of the pipeline projects–TransCanada’s Keystone XL. This would bring oil produced from oilsands from Alberta through Oklahoma and on to Texas.  The project cleared one roadblock last week, when the State Department (which has to approve due to the international nature of the pipeline) released an eight volume (!) study finding that the pipeline poses no significant environmental impacts.  The pipeline’s opponents are vowing to redouble their efforts, and the issue is now wrapped up in presidential election year politics.

The Coasean challenge is that reversal of the Seaway pipeline currently flowing from the Gulf to the Midcontinent would cost one of the owners–ConocoPhillips–money by raising input costs for its Midcontinent refineries.  Even though there are more than enough gains on the table to compensate CP for any losses arising from a rise in the price of Midcon crude, so far no one has been able to craft a deal whereby the winners (primarily Canadian and US producers) can make it worth CP’s while to agree to the reversal.

But these problems are all surmountable.  WTI’s problems arise from the consequences of too much supply at the delivery point, which is a good problem for a contract to have.  The price signals are leading to the kind of response that will eliminate the supply overhang, leaving the WTI contract with prices that are highly interconnected with those of seaborne crude, and with enough deliverable supply to mitigate the potential for squeezes and other technical disruptions.

Brent’s problems are more fundamental, because they arise from declining supply.  Even as paper volumes continue to rise, physical volumes available for delivery are falling inexorably.  The Brent complex had faced this problem before, and confronted it by adding Forties, Oseberg, and Ekofisk to the eligible stream.  But BFOE production has declined from 1.6 mm bbl/d in 2006 to barely more than half that today.  And the decline continues apace.  This makes the contract vulnerable to squeezes of a kind that were chronic in the 1990s and early 2000s, and which spurred Platts to add the three other grades to the benchmark.

There are, moreover, few additional North Sea oil streams that can be added to the benchmark this time.  So over the medium term Platts is considering adding other low sulphur crudes produced outside the North Sea to the contract.

The Brent problem is analogous to that faced by the Chicago Board of Trade grain and soybean contracts in the early-1990s.  Volumes of corn, wheat, and soybeans shipped into Chicago and Toledo–the delivery points–were falling, though not due to declining production, but due to changing trade patterns.  Deliverable supplies in Chicago and Toledo were not reliably sufficient to ensure the pricing integrity of the contract.  The Ferruzzi soybean squeeze in July, 1989 brought matters to a head and forced a reluctant CBT to act.

The CBT’s initial response was to tinker with the contract, adding St. Louis as a delivery point at a modest premium to Chicago.  Within a few short years, however, all delivery warehouses in Chicago proper had closed, and the exchange had to adopt a substantially different delivery mechanism (which I helped design–more on this below).

One approach available to Platts would be to create what I called an “economic par” contract in a report (subsequently a book–such a deal!) on the CBT contracts that I wrote in the aftermath of the Ferruzzi episode.  In an economic par contract, differentials for delivery of different grades (or at different locations) are set approximately equal to the cash market differentials.  For instance, if Brent was made the par grade, and another type of crude that typically sells for a $2 discount to Brent were made eligible for delivery against the contract, the deliverer would receive $2 less for delivering that grade than delivering Brent.
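For concreteness, here is a minimal sketch of the invoice arithmetic, in Python, using the hypothetical $2-discount grade from the example above (the grade name and all numbers are illustrative, not actual contract terms):

```python
# Hypothetical "economic par" delivery pricing: the deliverer of a
# non-par grade receives the futures settlement price plus a fixed
# differential set close to the typical cash-market differential.
# Grade names and numbers are illustrative only.

differentials = {
    "Brent": 0.0,    # par grade
    "GradeX": -2.0,  # typically sells at a $2 discount to Brent
}

def invoice_price(futures_settle, grade):
    """Amount the short receives for delivering `grade` against the contract."""
    return futures_settle + differentials[grade]

settle = 110.0
print(invoice_price(settle, "Brent"))   # 110.0
print(invoice_price(settle, "GradeX"))  # 108.0 -- $2 less, as in the text
```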

The advantages of this type of system are (a) it allows a dramatic expansion of deliverable supply, thereby easing technical/squeeze pressures on the contract, and (b) it can improve hedging effectiveness/reduce basis risk.  In essence (as pointed out in the book), options pricing theory implies that with economic par terms the futures price becomes like a weighted average of the prices of the deliverable grades.  This reduces the idiosyncratic component of futures price fluctuations, making the contract a better hedge for a variety of users.  (Cash settlement based on a variety of crude streams would produce a similar outcome.)  In the present instance, an economic par contract would be a poorer hedge for Brent cargoes, but a better hedge for other kinds of oil (e.g., Urals).
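A toy simulation illustrates the delivery-option logic (all prices and differentials are invented): at expiry the futures price converges to the cheapest-to-deliver value, the minimum over grades of cash price minus contract differential, and when the contract differential tracks the cash differential no single grade dominates delivery:

```python
import random

# Toy delivery-option simulation (all numbers invented).  The short
# delivers whichever grade is cheapest net of its contract differential,
# so at expiry the futures price converges to
#     min over grades of (cash price - differential).
# With economic par terms the contract differential tracks the typical
# cash differential, so neither grade systematically dominates delivery
# and the futures price behaves like a blend of the grades' prices.

random.seed(1)
differentials = {"A": 0.0, "B": -2.0}  # contract premia/discounts

def futures_at_expiry(cash):
    return min(cash[g] - d for g, d in differentials.items())

ctd_counts = {"A": 0, "B": 0}
for _ in range(10_000):
    cash = {
        "A": 100.0 + random.gauss(0, 1),
        "B": 98.0 + random.gauss(0, 1),  # cash discount ~= contract discount
    }
    ctd = min(differentials, key=lambda g: cash[g] - differentials[g])
    ctd_counts[ctd] += 1

print(ctd_counts)  # roughly a 50/50 split between the two grades
```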

Since cash market differentials can vary over time, it is advisable to adjust the contract premia and discounts periodically.  (The failure to do so with Treasury Bond futures in the 1990s caused some problems with the contract that were eventually fixed by the change from an 8 percent par coupon to a 6 percent.)  Such changes can actually make the contract a better hedge by keeping the weights in the average more stable over time, thereby reducing the likelihood that the idiosyncratic risk of a particular deliverable grade exerts disproportionate influence on contract pricing.
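The Treasury analogue works through conversion factors: each deliverable bond's invoice price is scaled by its price per $1 of face value at a fixed par yield (8 percent before the change, 6 percent after). A simplified sketch, ignoring the exchange's maturity-rounding and day-count conventions:

```python
# Simplified conversion-factor sketch for the Treasury-futures analogue:
# a deliverable bond's invoice price is scaled by its price per $1 of
# face value at a fixed par yield (8% before the late-1990s change, 6%
# after).  This ignores the exchange's maturity-rounding and day-count
# conventions, so the numbers are indicative only.

def conversion_factor(coupon, years, par_yield):
    """Price per $1 face of a semiannual-coupon bond discounted at par_yield."""
    y = par_yield / 2            # semiannual discount rate
    c = coupon / 2               # semiannual coupon per $1 face
    n = 2 * years                # number of semiannual periods
    annuity = (1 - (1 + y) ** -n) / y
    return c * annuity + (1 + y) ** -n

# A 6% coupon, 20-year bond under the old 8% standard vs. the new 6% one:
print(round(conversion_factor(0.06, 20, 0.08), 4))  # well below 1
print(round(conversion_factor(0.06, 20, 0.06), 4))  # 1.0 -- a par bond at par yield
```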

Accomplishing such a substantial remake of a contract is, however, a difficult thing.  Contract changes have different effects on different players, and each will try to lobby for changes that suit its interests.  The case of Chicago grains is illustrative.  As the decline in Chicago and Toledo continued apace, and the tinkering that followed the Ferruzzi squeeze proved inadequate, the CBT formed a committee to come up with a new contract design.  The committee had representatives from virtually every affected party.  Since the interests of these parties were so divergent, the process became rancorous and political, and the committee could not come to agreement on the kinds of changes that would have been necessary to fix the problem.  Eventually, in late-1996 the CFTC said enough, and ordered the exchange to change the contract stat.

Realizing that the traditional committee method that gave everybody a voice would not meet the demands of an impatient CFTC, the exchange created a small task force (of which I was an outside member) of more independent participants, and which pointedly excluded the big incumbent players (mainly Cargill, Continental, ADM, and The Andersons).  This Grain Delivery Task Force came up with a radical new design (based on delivery via shipping receipts into barges on the Illinois River) in less than 6 months.  The new design was approved by the membership and the CFTC over the following months.

Suffice it to say that the kind of consultative process that Platts envisions in revising the Brent contract will almost certainly bog down in the kind of rent seeking, self-interested behavior that stymied fundamental changes in the CBT grain contracts.  (In one of the early go-rounds on this, in 1990–which spawned my report/book–one of the members of the committee set up to revise the contract was so outraged by my recommendations for bigger changes that we almost came to blows.  The representatives from ADM and Cargill took him outside to cool down.  I had a similar experience, though more civil, during discussions of revising the canola contract in Winnipeg a few years later.  Considering that the Brent market is so much bigger and the dollars at play in 2011 so much greater than two decades ago, the potential for pyrotechnics is all the greater now.)

Which means that those who are crowing about Brent today, and heaping scorn on WTI, will be begging for WTI’s problems in a few years.  For by then, WTI’s issues will be fixed, and it will be sitting astride a robust flow of oil tightly interconnected with the nexus of world oil trading.  But the Brent contract will be an inverted paper pyramid, resting on a thinner and thinner point of crude production.  There will be gains from trade–large ones–from redesigning the contract, but the difficulties of negotiating an agreement among numerous big players will prove nigh on to impossible to surmount.  Moreover, there will be no single regulator in a single jurisdiction that can bang heads together (for yes, that is needed sometimes) and cajole the parties toward agreement.

So Brent boosters, enjoy your laugh while it lasts.  It won’t last long, and remember, he who laughs last laughs best.

April 8, 2019

CDS: A Parable About How Smart Contracts Can Be Pretty Dumb

Filed under: Blockchain,Derivatives,Economics,Exchanges,Regulation,Russia — cpirrong @ 7:04 pm

In my derivatives classes, here and abroad, I always start out by saying that another phrase for “derivative” is contingent claim. Derivatives have payoffs that are contingent on something. For most contracts–a garden variety futures or option, for example–the contingency is a price. The payoff on WTI futures is contingent on the price of WTI at contract expiration. Other contracts have contingencies related to events. A weather derivative, for instance, pays off based on heating or cooling degree days, or snowfall, or some other weather variable. Or a contract may have a payoff contingent on an official government statistic, like natural gas or crude inventories.

Credit default swaps–CDS–are a hybrid. They have payoffs that are contingent on both an event (e.g., bankruptcy) and a price (the price of defaulted debt). Both contingencies have proved very problematic in practice, which is one reason why CDS have long been in such disrepute.
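In stylized form, the dual contingency looks like this (a deliberately stripped-down sketch; real CDS settlement terms are far more involved, and the numbers are invented):

```python
# Stripped-down sketch of the CDS dual contingency: a payout requires
# BOTH a recognized credit event (the event leg) AND a price for the
# defaulted debt (the price leg, set by auction in practice).  Numbers
# are illustrative only.

def cds_payout(notional, credit_event, recovery):
    """Protection buyer receives notional * (1 - recovery), but only on an event."""
    if not credit_event:
        return 0.0  # event contingency unmet: no payout, whatever the bond price
    return notional * (1.0 - recovery)

# No payout without a credit event, however distressed the bonds look:
print(cds_payout(10_000_000, credit_event=False, recovery=0.25))  # 0.0
# With an event, the auction-determined recovery sets the payout:
print(cds_payout(10_000_000, credit_event=True, recovery=0.25))   # 7500000.0
```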

The price contingency has proved problematic in part for the same reason that CDS exist. If there were liquid, transparent markets for corporate debt, who would need CDS? Just short the debt if you want to short the credit (and hedge out the non-credit related interest rate risk). CDS were a way to trade credit without trading the (illiquid) underlying debt. But that means that determining the price of defaulted debt, and hence the payoff to a CDS, is not trivial.

To determine a price, market participants resorted to auctions. But the auctions were potentially prone to manipulation, a problem exacerbated by the illiquidity of bonds and the fact that many of them were locked up in portfolios: deliverable supply is therefore likely to be limited, compounding the manipulation problem.

ISDA, the industry organization that largely governs OTC derivatives, introduced some reforms to the auction process to mitigate these problems. But I emphasize “mitigate” is not the same as “solve.”

The event issue has been a bane of the CDS markets since their birth. For instance, the collapse of Russian bond prices and the devaluation of the Ruble in 1998 didn’t trigger CDS payments, because the technical default terms weren’t met. More recently, the big issue has been engineering technical defaults (e.g., “failure to pay events”) to trigger payoffs on CDS, even though the name is not in financial distress and is able to service its debt.

ISDA has again stepped in, and implemented some changes:

Specifically, International Swaps and Derivatives Association is proposing that failing to make a bond payment wouldn’t trigger a CDS payout if the reason for default wasn’t tied to some kind of financial stress. The plan earned initial backing from titans including Goldman Sachs Group Inc., JPMorgan Chase & Co., Apollo Global Management and Ares Management Corp.

“There must be a causal link between the non-payment and the deterioration in the creditworthiness or financial condition of the reference entity,” ISDA said in its document.

Well that sure clears things up, doesn’t it?

ISDA has been criticized because it has addressed just one problem, and left other potential ways of manipulating events unaddressed. But this just points out an inherent challenge in CDS. In Cargill v. Hardin, the 7th Circuit stated that “the techniques of manipulation are limited only by the ingenuity of man.” And that goes triple for CDS. Ingenious traders with ingenious lawyers will find new techniques to manipulate CDS, because of the inherently imprecise and varied nature of “credit events.”

CDS should be a cautionary tale for something else that has been the subject of much fascination–so called “smart contracts.” The CDS experience shows that many contracts are inherently incomplete. That is, it is impossible in advance to specify all the relevant contingencies, or do so with sufficient specificity and precision to make the contracts self-executing and free from ambiguity and interpretation.

Take the “must be a causal link between the non-payment and the deterioration in the creditworthiness or financial condition of the reference entity” language. Every one of those words is subject to interpretation, and most of the interpretations will be highly contingent on the specific factual circumstances, which are likely unique to every reference entity and every potential default.

This is not a process that can be automated, on a blockchain, or anywhere else. Such contracts require a governance structure and governance mechanisms that can interpret the contractual terms in light of the factual circumstances. Sometimes those can be provided by private parties, such as ISDA. But as ISDA shows with CDS, and as financial exchanges (e.g., the Chicago Board of Trade) have shown over the years in simpler contracts such as futures, those private governance systems can be fragile, and themselves subject to manipulation, pressure, and rent seeking. (Re exchanges, see my 1994 JLE paper on exchange self-regulation of manipulation, and my 1993 JLS paper on the successes and failures of commodity exchanges.)

Sometimes the courts govern how contracts are interpreted and implemented. But that’s an expensive process, and itself subject to Type I and Type II errors.

Meaning that it can be desirable to create contracts that have payoffs that are contingent on rather complex events–as a way of allocating the risk of such events more efficiently–but such contracts inherently involve higher transactions costs.

This is not to say that this is a justification for banning them, or sharply circumscribing their use. The parties to the contracts internalize many of the transactions costs (though arguably not all, given that there are collective action issues that I discussed 10 years ago). To the extent that they internalize the costs, the higher costs limit utility and constrain adoption.

But the basic point remains. Specifying precisely and interpreting accurately the contingencies in some contingent claims contracts is more expensive than in others. There are many types of contracts that offer potential benefits in terms of improved allocation of risk, but which cannot be automated. Trying to make such contracts smart is actually pretty dumb.


January 5, 2010

A Tale of Two Papers, or, Humpty Dumpty Writes About Exchanges

Filed under: Uncategorized — The Professor @ 3:06 pm

The American Economic Association/American Finance Association Meetings are just about over.  I made a quick trip there to comment on a paper.  Upon returning home, I downloaded a couple of the papers presented that seemed of interest.  Good call on one, bad call on the other.

The bad one is “Centralized versus Over The Counter Markets” by Viral Acharya of LBS and NYU, and Alberto Bisin of NYU.  Although the motivation of the paper is admirable, the execution is execrable, and is representative of a lot of what is wrong in the profession.

The motivation is to compare the efficiencies of alternative ways of organizing derivatives trades: centralized exchanges and over-the-counter (OTC) markets.  Great.  Big question.  I’ve written a lot about it, and would be very interested in seeing other thoughtful takes on the subject.

The paper concludes that organized exchanges are (constrained) first best efficient, and more efficient than OTC markets.  A quick review of the paper makes it clear, however, that they’ve rigged the game to produce that result.

Specifically, Acharya and Bisin assume that there are different “types” of traders; they differ based on the characteristics of their endowments. Due to their finite wealth, and the risks of derivatives positions, traders are sometimes unable to meet their derivative contract obligations. So far, so good: all that makes sense.

Then they go off the rails.  Specifically, they assume that centralized exchanges set price schedules for a single derivative contract, and specifically, set a price for every trader that varies depending on (a) the trader’s type, and (b) the size of the trader’s positions.  They motivate (b) by arguing that exchanges can view a trader’s entire position.  That is, every trader pays a different price depending on his position and his type.  In contrast, in OTC markets, they assume that no counterparty can observe any trader’s entire position, and hence although prices can be conditioned on type they cannot be conditioned on position.

It’s no surprise, then, that they find exchanges to be a more efficient way of trading.  Exchange prices are conditioned on more information that is relevant in determining the likelihood of default, and the cost thereof, and hence can be set to control default and risk taking more efficiently than is possible in the OTC market.

Perhaps there is a planet somewhere in the universe in which exchanges can do what Acharya and Bisin claim they can do.  All I can say is that on earth they can’t and don’t.  In fact, the situation here on the planet in the Milky Way with the blue sky is almost an inversion of what they assume.

Specifically, exchanges don’t condition trade prices on anything.  Indeed, the whole idea of clearing on an exchange is to create fungible contracts in which prices are the same for all traders at a given point in time, and don’t depend on the traders’ endowments or positions.  That is, exchanges facilitate anonymous trading among consenting adults by taking actions that make trader types and positions irrelevant to the price determination process.

In reality, collateral depends on position size, but trade prices do not.  Moreover, exchanges quite specifically do not condition collateral terms on trader type (e.g., on traders’ balance sheets).  So, even if you argue that collateral variations are effectively price variations, the Acharya-Bisin assumptions are unwarranted. Exchanges are not, as Acharya and Bisin assume, central planners implementing a centrally-imposed price schedule conditional on fine information on trader type and trades.

Put differently, clearing on exchanges creates a potential moral hazard that exchanges control at cost.  The cost can be sufficiently high to make it inefficient to adopt clearing.  Acharya-Bisin assume that exchanges have and can use costlessly the information required to control this moral hazard.  Again, an inversion of the truth.

Also in reality, OTC market participants do condition trading prices on counterparty type, and explicitly take balance sheet risk into account.  Although OTC traders don’t view their counterparties’ entire exposures/positions, they do have noisy signals on this and take it into account when setting trading terms.  Moreover, as I’ve argued before, there are a variety of channels (e.g., the repricing of short term debt) by which diffuse information on the entire risk exposure of a financial institution is aggregated, and affects the incentives of said institution to take on risk.

So, the tradeoffs between exchanges and OTC markets are more complex: exchanges/CCPs have better information on some dimensions, worse on others.  It’s not altogether clear a priori which dominates.  Moreover, there are institutional innovations other than exchange trading and clearing that can mitigate the OTC markets’ possible information disadvantage: for instance, data warehouses that collect and disseminate information on positions could remedy the supposed weakness of the OTC markets.

And here’s where the Coase question comes into play: if a particular institution is inefficient, what transaction costs preclude agents from adopting a more efficient one?  Or, to put things differently, if you believe Acharya and Bisin, agents are leaving a lot of money on the table by trading in the OTC market: there are gains from trade to be captured by trading on exchanges.  Why isn’t this happening?  Why does a putatively inefficient mechanism survive, and indeed, thrive and grow relative to the putatively efficient one?  This can happen, and policy can perhaps improve efficiency, but someone who claims that a given arrangement is inefficient, as Acharya and Bisin do, should identify the friction or frictions that preclude a change.  They, suffice it to say, don’t do this.

This model wouldn’t be objectionable if it were a purely theoretical exercise.  But Acharya and Bisin make broad policy recommendations based on their analysis.  The problem is, they are like Humpty Dumpty, who declared: “When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”

Humpty Dumpty-like, they use the word “exchange” to mean just what they choose it to mean, even though it bears no relation to exchanges in the real world, or the kinds of exchanges that would be the beneficiaries of exchange trading mandates, or efforts to raise the cost of OTC trading.  Their recommendation is a sort of bait and switch: sell the concept of exchange trading on the basis of an idealized model that assumes exchanges have decisive information advantages, and then deliver real world exchanges that are much more limited.

This is not uncommon, sad to say.  But it is pernicious.  And if this is the best that can be done to support exchange trading mandates, well, there you go.

The other, much better paper is “Danger on the Exchange: Counterparty Risk on the Paris Exchange in the 19th Century” by Angelo Riva of EBS and U-Paris X, and Eugene White of Rutgers.  Rather than construct otherworldly exchanges and decree that their visions should be implemented here on earth, Riva and White do something radical:  they actually take a detailed look at the evolution of counterparty risk sharing mechanisms and defaults on a real world exchange–the Paris Bourse.  They carefully trace the evolution of mutualization of counterparty risk on the PB, and pay close attention to a very real-world problem: namely, that mutualization creates the potential for moral hazard that is costly to control.

The empirical analysis could be improved (and when I have some time I’ll pass on some suggestions to them), but this paper undertakes a detailed analysis of actual institutions, and actual agents grappling with hard problems.  The Paris Bourse was a real exchange.  The problems it faced are inherent to any efforts to manage performance risk.  We can learn a lot more from these sorts of analyses than we can from constructing fanciful exchanges in a galaxy far away.  Formalism is valuable–I even commit formalism from time to time, myself–but if formal models are intended to be the basis for far reaching policy conclusions, it is important that they bear some relation to reality.  Acharya and Bisin would put their modeling talents to much better use by trying to construct theories that reflect the gritty realities that Riva and White quite clearly document, instead of building models that bring to mind all those economist jokes with punch lines like “I assume the existence of a 100 story ladder.”

February 20, 2016

Brent: The George Washington’s Axe of Oil Pricing Markers

Filed under: Commodities,Derivatives,Economics,Energy,Exchanges,Regulation — The Professor @ 12:53 pm

Platts is preparing for a Brentless future by introducing a new Dated Brent CIF Rotterdam assessment.  The idea is that as North Sea production continues its decline, other streams of light crude that are imported into Rotterdam can be added to the assessment. Adding (or substituting) say Nigerian crude to the FOB Brent assessment would be much more difficult because of locational differences: FOB Nigeria and FOB North Sea prices can be quite different, even adjusting for quality, due to freight differentials. A CIF contract eliminates that.

The decline in North Sea production has been occurring for some time, so the need to adjust the pricing mechanism has been apparent. Platts has been thrashing around for a while, mooting the possibility of adding other crudes (e.g., Urals Med) to the assessment.  It had already widened the delivery window to make more cargoes eligible (remember 15 Day Brent? 21? It’s now 25 Day.) This problem has become more pressing though, as the decline in prices is hastening the decline in North Sea production.

It is ironic that at the same time that Platts (and therefore, ICE) are grappling with the problem of  declining supply, the main rival to Brent has the exact opposite problem. The WTI contract is currently drowning in oil. Storage at Cushing is bumping up against capacity, and there are reports that some storage operators there are refusing requests to store additional crude.

Although the current situation at Cushing (and in the North American market generally) is as much a demand story as a supply story, the facts are that (a) the NYMEX WTI contract is linked to a much more robust and flexible production base than Brent, and (b) the WTI contract’s periodic difficulties are due to infrastructure issues that are more readily, cheaply, and rapidly addressed than production issues. Thus it has been for the past five years or so, as I discussed when I wrote that those foretelling the demise of WTI were fundamentally mistaken:

But these problems are all surmountable.  WTI’s problems arise from the consequences of too much supply at the delivery point, which is a good problem for a contract to have.  The price signals are leading to the kind of response that will eliminate the supply overhang, leaving the WTI contract with prices that are highly interconnected with those of seaborne crude, and with enough deliverable supply to mitigate the potential for squeezes and other technical disruptions.

Brent’s problems are more fundamental, because they arise from declining supply.  Even as paper volumes continue to rise, physical volumes available for delivery are falling inexorably.  The Brent complex had faced this problem before, and confronted it by adding Forties, Oseberg, and Ekofisk to the eligible stream.  But BFOE production has declined from 1.6 mm bbl/d in 2006 to barely more than half that today.  And the decline continues apace.  This makes the contract vulnerable to squeezes of a kind that were chronic in the 1990s and early 2000s, and which spurred Platts to add the three other grades to the benchmark.

. . . .

Which means that those who are crowing about Brent today, and heaping scorn on WTI, will be begging for WTI’s problems in a few years.  For by then, WTI’s issues will be fixed, and it will be sitting astride a robust flow of oil tightly interconnected with the nexus of world oil trading.  But the Brent contract will be an inverted paper pyramid, resting on a thinner and thinner point of crude production.  There will be gains from trade–large ones–from redesigning the contract, but the difficulties of negotiating an agreement among numerous big players will prove nigh on to impossible to surmount.  Moreover, there will be no single regulator in a single jurisdiction that can bang heads together (for yes, that is needed sometimes) and cajole the parties toward agreement.

The CIF alternative makes sense, and is probably superior to the “economic par” contract I suggested in the 2011 post. But once you move to a Rotterdam pricing basis, why remain tied to an assessment mechanism based on transactions in immense full cargoes? The large size of the lots inherently limits the number of transactions, which makes the assessment mechanism more erratic and subject to manipulation. The lumpiness of the market has also led Platts to design a baroque process involving bids and offers, contracts for differences, futures prices, spreads, etc., to increase the number of trades that go into the assessment.

The WTI contract, in contrast, is based on delivery in store of modest-sized (1000 barrel) units of crude. This is much more flexible, and permits a large number of firms to participate in the delivery process. This makes delivery and the threat of delivery a reliable and efficient way of ensuring convergence of futures to cash market values. The mechanism is not immune to all types of manipulation: delivery squeezes are still possible (though relatively unlikely in current market conditions). But small numbers of transactions can’t have a pronounced impact on pricing, and the in store delivery mechanism does not rely on an arcane and mysterious assessment mechanism (which also helps to enrich the party making the assessment).
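The point about lot size and assessment noise can be illustrated with a back-of-the-envelope simulation (all parameters invented): holding total assessed volume fixed, an assessment averaged over a few full-cargo trades is far noisier than one averaged over many small trades:

```python
import random
import statistics

# Back-of-the-envelope illustration (all parameters invented) of why
# lumpy full-cargo trading makes an assessment noisier: hold the total
# assessed volume fixed and compare an assessment averaged over many
# small trades with one averaged over a few cargo-sized trades, where
# each trade price carries the same idiosyncratic noise.

random.seed(7)
TRUE_PRICE = 100.0
NOISE = 0.5  # per-trade idiosyncratic noise, $/bbl

def assessed_price(n_trades):
    """One day's assessment: the mean of n_trades noisy trade prices."""
    return statistics.mean(TRUE_PRICE + random.gauss(0, NOISE) for _ in range(n_trades))

def assessment_noise(n_trades, n_days=2000):
    """Day-to-day standard deviation of the assessment."""
    return statistics.pstdev(assessed_price(n_trades) for _ in range(n_days))

# Same total volume: 100 small lots vs. 4 full cargoes.
# The few-cargo assessment is roughly sqrt(100/4) = 5x noisier.
print(round(assessment_noise(100), 3), round(assessment_noise(4), 3))
```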

So rather than shifting to a Rotterdam CIF mechanism, why not shift the futures market to a Rotterdam in store delivery contract? This mechanism is more flexible and resilient in the short run, and is readily adjusted in the long run to respond to changes in the underlying physical production base as NYMEX did by adding foreign crude streams (including Brent) to address the (then) declining domestic production base.

I can see why Platts wouldn’t like this, but it has some decided advantages for ICE, not the least of which is reducing its dependence on Platts. Given the difficulties of changing contract specifications, or generating liquidity for even a better contract introduced in competition with an established liquid one, I doubt this will happen. Which means that ICE, and the market generally, will have to continue to endure periodic changes to the “Brent” assessment mechanism as North Sea production continues to decline.

I put “Brent” in quotes because the handwriting is on the wall: any future European-based contract may be called “Brent” even after Brent (and Forties, and Oseberg, and Ekofisk) no longer represent the bulk of the benchmark stream. The contract will come to resemble the old Harry Anderson comedy bit, where he juggled a chainsaw and an axe. He would stop juggling, hold up the axe and say: “This is George Washington’s axe. The handle was replaced years ago, and I just put on a new head, but it’s George Washington’s axe!” So it will be with Brent, and sooner than anyone would have thought even a few short years ago.

June 15, 2015

Always Follow the Price Signals. I Did on Brent-WTI.

Filed under: Commodities,Derivatives,Economics,Energy,Politics,Regulation — The Professor @ 8:18 pm

As a blogger, I am long the option to point out when I call one right. Of course, I am short the option for you all to point out when I call one wrong, but I can’t help it if that option is usually so far out of the money (or if you don’t exercise it when it is in) 😉

I will exercise my option today, after reading this article by Greg Meyer in the FT:

West Texas Intermediate crude, once derided as a broken oil benchmark, is enjoying a comeback.

Volumes of futures tracking the yardstick have averaged 1m contracts a day this year through May, up more than 45 per cent from the same period of 2014, exchange data show. WTI has also sped ahead of volumes in rival Brent crude, less than two years after Brent unseated WTI as the most heavily traded oil futures market.

. . . .

WTI has also regained a more stable connection with global oil prices after suffering glaring discounts because of transport constraints at its delivery point of Cushing, Oklahoma. The gap led some to question WTI as a useful gauge of oil prices.

“I guess the death of the WTI contract was greatly exaggerated,” said Andy Lipow of consultancy Lipow Oil Associates.

But in the past two years, new pipeline capacity of more than 1m barrels a day has relinked Cushing to the US Gulf of Mexico coast, narrowing the discount between Brent and WTI to less than $4 a barrel.

Mark Vonderheide, managing partner of Geneva Energy Markets, a New York trading firm, said: “With WTI once again well connected to the global market, there is renewed interest from hedgers outside the US to trade it. When the spread between WTI and Brent was more than $20 and moving fast, WTI was much more difficult to trade.”

Things have played out exactly as I forecast in August, 2011:

One of the leading crude oil futures contracts–CME Group’s WTI–has been the subject of a drumbeat of criticism for months due to the divergence of WTI prices in Cushing from prices at the Gulf, and from the price of the other main oil benchmark–Brent.  But whereas WTI’s problem is one of logistics that is in the process of being addressed, Brent’s issues are more fundamental ones related to adequate supply, and less amenable to correction.

Indeed, WTI’s “problem” is actually the kind an exchange would like to have.  The divergence between WTI prices in the Midcontinent and waterborne crude prices reflects a surge of production in Canada and North Dakota.  Pipelines are currently lacking to ship this crude to the Gulf of Mexico, and Midcon refineries are running close to full capacity, meaning that the additional supply is backing up in Cushing and depressing prices.

But the yawning gap between the Cushing price at prices at the Gulf is sending a signal that more transportation capacity is needed, and the market is responding with alacrity.  If only the regulators were similarly speedy.

. . . .

Which means that those who are crowing about Brent today, and heaping scorn on WTI, will be begging for WTI’s problems in a few years.  For by then, WTI’s issues will be fixed, and it will be sitting astride a robust flow of oil tightly interconnected with the nexus of world oil trading.  But the Brent contract will be an inverted paper pyramid, resting on a thinner and thinner point of crude production.  There will be gains from trade–large ones–from redesigning the contract, but the difficulties of negotiating an agreement among numerous big players will prove nigh on to impossible to surmount.  Moreover, there will be no single regulator in a single jurisdiction that can bang heads together (for yes, that is needed sometimes) and cajole the parties toward agreement.

So Brent boosters, enjoy your laugh while it lasts.  It won’t last long, and remember, he who laughs last laughs best.

This really wasn’t that hard a call to make. The price signals were obvious, and it’s always safe to bet on market participants responding to price signals. That’s exactly what happened. The only surprising thing is that so few publicly employed this logic to predict that the disconnection between WTI and ocean-borne crude prices would be self-correcting.

Speaking of not enjoying the laugh, the exchange where Brent is traded-ICE-issued a rather churlish statement:

Atlanta-based ICE blamed the shift on “increased volatility in WTI crude oil prices relative to Brent crude oil prices, which drove more trading by non-commercial firms in WTI, as well as increased financial incentive schemes offered by competitors”.

The first part of this statement is rather incomprehensible. Re-linking WTI improved the contract’s effectiveness as a hedge for crude outside the Mid-continent (PADD 2), which allowed hedgers to take advantage of the WTI liquidity pool, which in turn attracted more speculative interest.

Right now the only potential source of disconnect is the export ban. That is, markets corrected the infrastructure bottleneck, but politics has failed to correct the regulatory bottleneck. When that will happen, I am not so foolish as to predict.

July 25, 2014

Benchmark Blues

Pricing benchmarks have been one of the casualties of the financial crisis. Not because the benchmarks-like Libor, Platts’ Brent window, ISDA Fix, the Reuters FX window or the gold fix-contributed in any material way to the crisis. Instead, the post-crisis scrutiny of the financial sector turned over a lot of rocks, and among the vermin crawling underneath were abuses of benchmarks.

Every major benchmark has fallen under deep suspicion, and has been the subject of regulatory action or class action lawsuits. Generalizations are difficult because every benchmark has its own problems. It is sort of like what Tolstoy said about unhappy families: every flawed benchmark is flawed in its own way. Some, like Libor, are vulnerable to abuse because they are constructed from the estimates/reports of interested parties. Others, like the precious metals fixes, are problematic due to a lack of transparency and limited participation. Declining production and large parcel sizes bedevil Brent.

But some basic conclusions can be drawn.

First-and this should have been apparent in the immediate aftermath of the natural gas price reporting scandals of the early-2000s-benchmarks based on the reports of self-interested parties, rather than actual transactions, are fundamentally flawed. In my energy derivatives class I tell the story of AEP, which the government discovered kept a file called “Bogus IFERC.xls” (IFERC being an abbreviation for Inside Ferc, the main price reporting publication for gas and electricity) that included thousands of fake transactions that the utility reported to Platts.

Second, and somewhat depressingly, although benchmarks based on actual transactions are preferable to those based on reports, in many markets the number of transactions is small. Even if transactors do not attempt to manipulate, the limited number of transactions tends to inject some noise into the benchmark value. What’s more, benchmarks based on a small number of transactions can be influenced by a single trade or a small number of trades, thereby creating the potential for manipulation.

I refer to this as the bricks without straw problem. Just like the Jews in Egypt were confounded by Pharaoh’s command to make bricks without straw, modern market participants are stymied in their attempts to create benchmarks without trades. This is a major problem in some big markets, notably Libor (where there are few interbank unsecured loans) and Brent (where large parcel sizes and declining Brent production mean that there are relatively few trades: Platts has attempted to address this problem by expanding the eligible cargoes to include Ekofisk, Oseberg, and Forties, and by some baroque adjustments based on CFD and spread trades and monthly forward trades). This problem is not amenable to an easy fix.

Third, and perhaps even more depressingly, even transaction-based benchmarks derived from markets with a decent amount of trading activity are vulnerable to manipulation, and the incentive to manipulate is strong. Some changes can be made to mitigate these problems, but they can’t be eliminated through benchmark design alone. Some deterrence mechanism is necessary.

The precious metals fixes provide a good example of this. The silver and gold fixes have historically been based on transactions prices from an auction that Walras would recognize. But participation was limited, and some participants had the market power and the incentive to use it, and have evidently pushed prices to benefit related positions. For instance, in the recent allegation against Barclays, the bank could trade in sufficient volume to move the fix price sufficiently to benefit related positions in digital options. When there is a large enough amount of derivatives positions with payoffs tied to a benchmark, someone has the incentive to manipulate that benchmark, and many have the market power to carry out those manipulations.
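The arithmetic of that incentive is easy to sketch. The following back-of-envelope calculation is purely illustrative (every number in it is invented, not taken from the Barclays case):

```python
# Back-of-envelope of the incentive to bang a fix when a digital option
# rides on it.  All numbers are invented for illustration.
digital_payout = 10_000_000.0   # option pays this if the fix < strike
strike = 100.0
unmanipulated_fix = 100.5       # option would expire worthless

# pushing the fix down is costly: the manipulator sells below fair value
cost_per_unit_of_price_move = 50_000.0
move_needed = unmanipulated_fix - strike + 0.1   # push the fix to 99.9

cost_of_banging = move_needed * cost_per_unit_of_price_move
profit = digital_payout - cost_of_banging        # option now pays off

print(cost_of_banging)   # 30,000: small next to the 10m payout
print(profit)
```

The point of the sketch: because the derivatives position is much larger than the trading needed to move the fix, the manipulation is profitable even though the fix-moving trades themselves lose money.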

The problems with the precious metals fixes have led to their redesign: a new silver fix method has been established and will go into effect next month, and the gold fix will be modified, probably along similar lines. The silver fix will replace the old telephone auction that operated via a few members trading on their own account and representing customer orders with a more transparent electronic auction operated by CME and Reuters. This will address some of the problems with the old fix. In particular, it will reduce the information advantage that the fixing dealers had, which allowed them to trade profitably in other markets (e.g., gold futures and OTC forwards and options) based on the order flow information they could observe during the auction. Now everyone will be able to observe the auction via a screen, and participants will be less vulnerable to being picked off in other markets. It is unlikely, however, that the new mechanism will mitigate the market power problem. Big trades will move markets in the new auction, and firms with positions that have payoffs that depend on the auction price may have an incentive to make those big trades to advantage those positions.

Along these lines, it is important to note that many liquid and deep futures markets have been plagued by “bang the close” problems. For instance, Amaranth traded large volumes in the settlement period of expiring natural gas futures during three months of 2006 in order to move prices in ways that benefited its OTC swaps positions. The CFTC recently settled with the trading firm Optiver that allegedly banged the close in crude, gasoline, and heating oil in March, 2007. These are all liquid and deep markets, but are still vulnerable to “bullying” (as one Optiver trader characterized it) by large traders.

The incentives to cause an artificial price for any major benchmark will always exist, because one of the main purposes of benchmarks is to provide a mechanism for determining cash flows for derivatives. The benchmark-derivatives market situation resembles an inverted pyramid, with large amounts of cash flows from derivatives trades resting on a relatively small number of spot transactions used to set the benchmark value.

One way to try to ameliorate this problem is to expand the number of transactions at the point of the pyramid by expanding the window of time over which transactions are collected for the purpose of calculating the benchmark value: this has been suggested for the Platts Brent market, and for the FX fix. A couple of remarks. First, although this would tend to mitigate market power, it may not be sufficient to eliminate the problem: Amaranth manipulated a price that was based on a VWAP over a relatively long 30-minute interval. In contrast, in the Moore case (a manipulation case involving platinum and palladium brought by the CFTC) and Optiver, the windows were only two minutes long. Second, there are some disadvantages of widening the window. Some market participants prefer a benchmark that reflects a snapshot of the market at a point in time, rather than an average over a period of time. This is why Platts vociferously resists calls to extend the duration of its pricing window. There is a tradeoff in sources of noise. A short window is more affected by the larger sampling error inherent in the smaller number of transactions that occurs in a shorter interval, and by the noise resulting from greater susceptibility to manipulation when a benchmark is based on a smaller number of trades. However, an average taken over a time interval is a noisy estimate of the price at any point of time during that interval, due to the random fluctuations in the “true” price driven by information flow. I’ve done some numerical experiments, and either the sampling error/manipulation noise has to be pretty large, or the volatility of the “true” price must be pretty low, for it to be desirable to move to a longer interval.
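That tradeoff can be reproduced in a small Monte Carlo, in the spirit of (though not identical to) the numerical experiments mentioned above; every parameter here is invented for illustration:

```python
# Simulation of the window-length tradeoff: a window-average benchmark
# vs. the 'true' end-of-window price.  All parameters are invented.
import random
import statistics

def benchmark_mse(window_minutes, trade_noise_sd, true_vol_per_min,
                  trades_per_min=1, n_sims=2000, seed=42):
    """MSE of a window-average benchmark against the true end-of-window
    price.  Longer windows average away trade-level noise (sampling
    error, manipulation) but average over a drifting 'true' price."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n_sims):
        path = [0.0]                      # 'true' price random walk
        for _ in range(window_minutes):
            path.append(path[-1] + rng.gauss(0.0, true_vol_per_min))
        trades = [path[m] + rng.gauss(0.0, trade_noise_sd)
                  for m in range(1, window_minutes + 1)
                  for _ in range(trades_per_min)]
        errors.append((statistics.mean(trades) - path[-1]) ** 2)
    return statistics.mean(errors)

# big trade-level noise, quiet 'true' price: the long window wins
short_noisy = benchmark_mse(2, trade_noise_sd=0.5, true_vol_per_min=0.05)
long_noisy = benchmark_mse(30, trade_noise_sd=0.5, true_vol_per_min=0.05)

# quiet trades, volatile 'true' price: the short window wins
short_vol = benchmark_mse(2, trade_noise_sd=0.05, true_vol_per_min=0.5)
long_vol = benchmark_mse(30, trade_noise_sd=0.05, true_vol_per_min=0.5)

print(short_noisy > long_noisy)   # True
print(short_vol < long_vol)       # True
```

The crossover depends on exactly the two magnitudes named in the text: how large the trade-level (sampling/manipulation) noise is relative to the volatility of the underlying price.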

Other suggestions include encouraging diversity in benchmarks. The other FSB-the Financial Stability Board-recommends this. Darrell Duffie and Jeremy Stein lay out the case here (which is a lot easier read than the 750+ pages of the longer FSB report).

Color me skeptical. Duffie and Stein recognize that the market has a tendency to concentrate on a single benchmark. It is easier to get into and out of positions in a contract which is similar to what everyone else is trading. This leads to what Duffie and Stein call “the agglomeration effect,” which I would refer to as a “tipping” effect: the market tends to tip to a single benchmark. This is what happened in Libor. Diversity is therefore unlikely in equilibrium, and the benchmark that survives is likely to be susceptible to either manipulation, or the bricks without straw problem.

Of course not all potential benchmarks are equally susceptible. So it would be good if market participants coordinated on the best of the possible alternatives. As Duffie and Stein note, there is no guarantee that this will be the case. This brings to mind the as yet unresolved debate over standard setting generally, in which some argue that the market’s choice of VHS over the allegedly superior Betamax technology, or the dominance of QWERTY over the purportedly better Dvorak keyboard (or Word vs. WordPerfect), demonstrates that the selection of a standard by a market process routinely results in a suboptimal outcome, but where others (notably Stan Liebowitz and Stephen Margolis) argue that these stories of market failure are fairy tales that do not comport with the actual histories. So the relevance of the “bad standard (benchmark) market failure” is very much an open question.

Darrell and Jeremy suggest that a wise government can make things better:

This is where national policy makers come in. By speaking publicly about the advantages of reform — or, if necessary, by using their power to regulate — they can guide markets in the desired direction. In financial benchmarks as in tap water, markets might not reach the best solution on their own.

Putting aside whether government regulators are indeed so wise in their judgments, there is the issue of how “better” is measured. Put differently: governments may desire a different direction than market participants.

Take one of the suggestions that Duffie and Stein raise as an alternative to Libor: short term Treasuries. It is almost certainly true that there is more straw in the Treasury markets than in any other rates market. Thus, a Treasury bill-based benchmark is likely to be less susceptible to manipulation than any other market. (Though not immune altogether, as the Pimco episode in June ’05 10 Year T-notes, the squeezes in the long bond in the mid-to-late-80s, the Salomon 2 year squeeze in 92, and the chronic specialness in some Treasury issues prove.)

But that’s not of much help if the non-manipulated benchmark is not representative of the rates that market participants want to hedge. Indeed, when swap markets started in the mid-80s, many contracts used Treasury rates to set the floating leg. But the basis between Treasury rates, and the rates at which banks borrowed and lent, was fairly variable. So a Treasury-based swap contract had more basis risk than Libor-based contracts. This is precisely why the market moved to Libor, and when the tipping process was done, Libor was the dominant benchmark not just for derivatives but floating rate loans, mortgages, etc.
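The basis-risk point can be made concrete with a small sketch. The rates and volatilities below are invented purely for illustration; the hedger's exposure is assumed to move one-for-one with a Libor-like defaultable rate:

```python
# Sketch of the basis-risk tradeoff: hedging a Libor-like funding cost
# with a Treasury-based benchmark leaves the credit spread unhedged.
# All rates and volatilities are invented for illustration.
import random
import statistics

rng = random.Random(7)
n = 1_000
treasury = [rng.gauss(3.0, 0.6) for _ in range(n)]   # risk-free rate, %
spread = [rng.gauss(0.5, 0.3) for _ in range(n)]     # bank credit spread
libor = [t + s for t, s in zip(treasury, spread)]    # defaultable rate

exposure = libor   # the floating cost the hedger actually faces

# residual risk after receiving floating on each candidate benchmark
resid_treasury = [e - t for e, t in zip(exposure, treasury)]
resid_libor = [e - l for e, l in zip(exposure, libor)]

print(statistics.pstdev(resid_treasury))  # ~0.3: the spread survives
print(statistics.pstdev(resid_libor))     # 0.0: a perfect hedge
```

The residual left by the Treasury hedge is exactly the variable Treasury-Libor basis described in the text, which is why the market tipped to Libor despite its greater susceptibility to manipulation.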

Thus, there may be a trade-off between basis risk and susceptibility to manipulation (or to noise arising from sampling error due to a small number of transactions or averaging over a wide time window). Manipulation can lead to basis risk, but it can be smaller than the basis risk arising from a quality mismatch (e.g., a credit risk mismatch between default risk-free Treasury rates and a defaultable rate that private borrowers pay). I would wager that regulators would prefer a standard that is less subject to manipulation, even if it has more basis risk, because they don’t internalize the costs associated with basis risk. Market participants may have a very different opinion. Therefore, the “desired direction” may depend very much on whom you ask.

Putting all this together, I conclude we live in a fallen world. There is no benchmark Eden. Benchmark problems are likely to be chronic for the foreseeable future. And beyond. Some improvements are definitely possible, but benchmarks will always be subject to abuse. Their very source of utility-that they are a visible price that can be used to determine payoffs on vast sums of other contracts-always provides a temptation to manipulate.

Moving to transactions-based mechanisms eliminates outright lying as a manipulation strategy, but it does not eliminate the potential for market power abuses. The benchmarks that would be least vulnerable to market power abuses are not necessarily the ones that best reflect the exposures that market participants face.

Thus, we cannot depend on benchmark design alone to address manipulation problems. The means, motive, and opportunity to manipulate even transactions-based benchmarks will endure. This means that reducing the frequency of manipulation requires some sort of deterrence mechanism, either through government action (as in the Libor, Optiver, Moore, and Amaranth cases) or private litigation (examples of which include all the aforementioned cases, plus some more, like Brent).  It will not be possible to “solve” the benchmark problems by designing better mechanisms, then riding off into the sunset like the Lone Ranger. Our work here will never be done, Kimo Sabe.*

* Stream of consciousness/biographical detail of the day. The phrase “Kimo Sabe” was immortalized by Jay Silverheels-Tonto in the original Lone Ranger TV series. My GGGGF, Abel Sherman, was slain and scalped by an Indian warrior named Silverheels during the Indian War in Ohio in 1794. Silverheels made the mistake of bragging about his feat to a group of lumbermen, who just happened to include Abel’s son. Silverheels was found dead on a trail in the woods the next day, shot through the heart. Abel (a Revolutionary War vet) was reputedly the last white man slain by Indians in Washington County, OH. His tombstone is on display in the Campus Martius museum in Marietta. The carving on the headstone is very un-PC. It reads:

Here lyes the body of Abel Sherman who fell by the hand of the Savage on the 15th of August 1794, and in the 50th year of  his age.

Here’s a picture of it:


The stream by which Abel was killed is still known as Dead Run, or Dead Man’s Run.

September 22, 2013

The BIS Swings and Misses

The BIS has been one of the major advocates for mandated clearing.  They have produced an analysis claiming that mandated clearing will increase GDP growth by 0.1 percent or more per annum.  I criticized this calculation, claiming that it was predicated on a fallacy: namely, that multilateral netting and collateralization result in a reduction in the costs of OTC derivatives borne by banks, and thereby reduce the risk that they will become dangerously leveraged.  In fact, these measures redistribute losses, and will not affect the overall leverage of a financial institution in the event of an adverse shock to its balance sheet.

Stephen Cecchetti, the head of the BIS’s Monetary and Economics Department has responded to this sort of criticism.  Here’s his argument regarding multilateral netting:

Before turning to the costs, I think it is worth responding to criticism of this approach. First, some critics have argued that, by focusing on derivatives exposures, the Group has ignored the impact of multilateral netting on other unsecured creditors. The main claim is that multilateral netting dilutes other unsecured creditors.

This is correct. Multilateral netting does dilute non-derivatives-related claims to some extent. However, this is neither new, nor is it unique to derivatives. In fact, repos, covered bonds and any other secured loans result in dilution and subordination. For repos, that counterparties can close out a position and seize collateral in default has led to comment and worry for some time. Given that this is all well known, I would think that it is already reflected in the pricing of the instruments involved. Presumably no one will be terribly surprised by this when it is applied to derivatives, and so the impact will be muted. It is a stretch to see how this redistribution of a part of the risk associated with OTC derivatives transactions increases systemic risk.

This argument totally fails to meet the criticism.

First, there has been some repricing, but it is incomplete.  Repricing only takes place to the extent that adoption of mandated clearing occurs, or is at least anticipated, and does not occur for derivatives contracts and other liabilities until they mature.  Since many derivatives and other liabilities have maturities extending well beyond the clearing implementation dates, these have yet to be repriced.

Yes, the most run-prone liabilities, such as short term debt, have been repriced, but that’s not all that important.  Even if banks adjust their capital structures to reflect the repricing, they will still have large quantities of such liabilities which are now more junior and which will pay off less in states of the world when a bank is insolvent.  The higher yield received in normal times compensates the creditors for the lower returns that they get in bad states of the world, and most notably in crisis times.  And it is exactly during these crisis times that these liabilities are a problem.  Due to repricing, there may be a lower quantity of such short term debt, but this debt will be more vulnerable to runs as a result of the subordination inherent in multilateral netting.  Excuse me if I don’t consider that a clear win for reduced systemic risk, and fear that it in fact represents an increased systemic risk.  That is no stretch at all.  None.  Cecchetti’s belief that it is a stretch reflects a cramped and incomplete analysis of the implications of subordination.

In terms of the BIS’s claim that clearing will boost economic growth, Cecchetti’s argument does not rebut in any way what I have been saying.  The BIS argument is based on a belief that netting makes the pie bigger.  My argument is that it does not make the pie bigger, but just resizes the pieces, making some smaller and others bigger.  Nothing in what Cecchetti writes demonstrates the opposite, and in fact, his acknowledgement that netting dilutes other creditors is an admission that the effects of netting are redistributive.

Once that is recognized, the entire premise behind the BIS macroeconomic analysis of the OTC derivative reforms collapses, and the conclusions collapse right along with it.

Cecchetti mischaracterizes the critique when he insinuates that it means that netting increases systemic risk.  I’ve said that’s one possible outcome, but mainly have used the redistribution point to refute the claim (made by the BIS and others) that netting reduces systemic risk. The channel by which the BIS claims it will do so is predicated on the belief that netting reduces default losses rather than reallocates them.  If netting only reallocates them-which Cecchetti effectively acknowledges-this channel is closed, and the asserted benefit does not exist.  It’s very simple.

Therefore, Cecchetti may “remain convinced that the Group’s analysis accurately captures the benefits of the proposed reforms,” but his conviction is based on faith rather than economic reasoning: his attempted defense notwithstanding, the Group’s analysis is contrary to the economics.

Here’s what Cecchetti says about collateral:

A second concern that has been raised is that the regulatory insistence on increased collateralisation will simply redistribute counterparty credit risk, not reduce it. To see the point, take the simple example of an interest rate swap. The primary purpose of the swap is to transfer interest rate risk. But the mechanics of the swap mean that there is always a risk that the parties involved will not pay. This is a credit risk. In the case where the swap is completely uncollateralised, it is clear that the instrument combines these two risks: interest rate (or market) risk, and credit risk.

Now think of what happens if there is collateralisation. At first it appears that the credit risk disappears, especially if there is both initial margin to cover unexpected market movements and variation margin to cover realised ones. But the collateral has to come from somewhere. Getting hold of it by borrowing, for example, will once again create credit risk.

The point is that, by collateralising the transaction, the market and credit risk are unbundled. I would argue fairly strenuously that unbundling is the right thing to do. Unbundling forces both the buyer and the seller to manage both the interest rate and the counterparty credit risks embedded in a swap contract. In the past, some parties seem to have simply ignored the credit component. Unbundling sheds light on the pricing of the two components of the contract. A more transparent market structure with more competitive pricing will almost surely result in better decisions and hence better risk management, risk allocation and ultimately lower systemic risk. The AIG example is a cautionary tale that leads us in this direction.

I agree completely that clearing unbundles price and credit risks.  This is particularly true under a defaulter pays model, in which the CCP members bear very little default risk.   In fact, this is the focus of a chapter I’m writing for my next book.

But Cecchetti’s assertion that unbundling is a good thing begs a huge question: why were risks bundled in the first place?  By way of explanation, sort of, Cecchetti asserts, without a shred of evidence, that market participants often ignored credit risk bundled in derivatives trades.  Even if that’s true, why would they necessarily take it into consideration merely because it’s unbundled?  I think the most that can be said is that ex post it appears that market participants underestimated credit risk prior to the crisis.  And if they did this with derivatives, they did it with unbundled credits too: look at the massive repricing of corporate paper and the virtual disappearance of unsecured interbank lending starting in August 2007.

But more substantively, there can be good reasons for bundling market risk and credit risk.  I explore these reasons in detail in my Rocket Science paper, which has been around for years.  In particular, there can be informational synergies.  These are quite ubiquitous in banking, and explain a variety of phenomena, such as compensating balances which require firms that borrow from a bank to hold some portion of the loan in deposits at the same bank: this is a form of bundling.  Moreover, bundling can be a way of controlling a form of moral hazard, namely asset substitution/diversion.

At the very least, bundling is so ubiquitous in banking (and finance generally, e.g., trade credit) that it is extremely plausible that there are good economic reasons for it.  The reasons for this practice should be understood before implementing massive policy changes that forcibly eliminate it for massive quantities of contracts.  It is rather frightening, actually, that Cecchetti/the BIS are so cavalier about this issue, and are so confident that they know better than market participants.

And again, even if credit risk is priced more accurately in an unbundled world (which Cecchetti asserts rather than demonstrates), bank capital structures in an unbundled world can be fragile and a source of systemic risk.  For instance, collateral transformation trades used to acquire collateral to post as CCP margin are arguably very fragile and systemically risky even if they are priced correctly.

What’s needed is a comparative analysis of the fragility/systemic riskiness of the bundled and unbundled structures, and this the BIS/Cecchetti do not provide.

Cecchetti’s speech suggests that the BIS has heard the criticisms of clearing mandates, but doesn’t really understand them, or is so invested in clearing mandates that it refuses to take them seriously.  Regardless of the reasons, one thing is clear.  The BIS has taken a big swing at a rebuttal, and missed badly.

March 19, 2013

Home Court Advantage, and the Further Miracles of Judo

Filed under: Derivatives,Economics,Politics,Regulation,Russia — The Professor @ 6:46 pm

Just because I find the expropriation of Russian deposits in Cyprus wrong doesn’t mean that I’ve gone soft on Russia.  To the contrary, my criticism of Russia and my criticism of the Cypriot confiscation grow from the same roots: a belief in the rule of law, and a deep dislike for the natural state.

A couple of stories along those lines.  First, a Russian court permitted a Russian firm, Agroterminal, to walk away from an interest rate swap it had entered with the Italian bank Unicredit:

The court’s ruling was based on a clause in the swap documentation that says a party can unilaterally terminate if there are no outstanding obligations at that point. Agroterminal had made one of the swap’s quarterly payments to UniCredit Bank and terminated immediately afterwards, arguing that as the new quarterly payment had not been calculated, there was no outstanding obligation.

Put differently, per the Russian court’s interpretation, the party to a swap has the ability to walk away at any time between payment dates.  Yeah, the market will totally work under that interpretation.  It turns the swap into an option: each party has the choice to walk away when the swap is underwater immediately after each calculation date. Since the swap will be underwater to one of the parties, this means that the swap is a non-starter.  Not a forward starting swap: a non-starting swap.
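A toy two-scenario example makes the non-starting-swap point concrete (all numbers invented):

```python
# Toy two-scenario swap under the court's walk-away reading.
# All numbers are invented for illustration.  Fixed leg 5%; the fixed
# payer receives floating.  Payoffs are per 100 of notional, in %.
FIXED = 5.0
SCENARIOS = [7.0, 3.0]          # equally likely floating-rate outcomes

def payoff_to_fixed_payer(floating, walkaway_allowed):
    owed = floating - FIXED      # >0: counterparty owes the fixed payer
    if walkaway_allowed:
        # whichever party is under water terminates before paying,
        # so no payment is ever made in either direction
        return 0.0
    return owed

enforceable = [payoff_to_fixed_payer(f, False) for f in SCENARIOS]
walkaway = [payoff_to_fixed_payer(f, True) for f in SCENARIOS]
print(enforceable)   # [2.0, -2.0]: rate risk actually changes hands
print(walkaway)      # [0.0, 0.0]: the "swap" transfers no risk at all
```

Under the enforceable contract the fixed payer's rate exposure is genuinely transferred; under the court's reading the losing side always exercises its free walk-away option, so the contract never pays anyone and transfers nothing.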

The lawyers quoted in the Risk piece attribute the court’s decision to ignorance and naivete.  Actually, what is naive is that interpretation.  Maybe the court was playing dumb, but it is pretty clear that Agroterminal had the home court advantage-literally.  That is, the court was just favoring a Russian firm at the expense of a damn furrin’ bank.  The decision is transparently silly: if one would take it literally, any floating rate claim (e.g., a floating rate bond) would be unenforceable between payment dates.  The Russian court was looking for some fig leaf to justify stiffing the Italians, and it found it.  Go figure.

The second story: Putin’s judo buddy Arkady Rotenberg has made billions on contracts for the Sochi Olympics:

Those contracts, which number at least 21, include a share of an $8.3 billion transport link between Sochi and ski resorts in the neighboring Caucasus Mountains, a $2.1 billion highway along Sochi’s Black Sea coast, a $387 million media center, and a $133 million stretch of venue-linking tarmac that will double as Russia’s first Formula One track.

Wow.  I keep kicking myself for not taking up judo (but wouldn't kicking be karate-related? Whatever).  Arkady is not just the best producer of steel pipe for gas pipelines, he's also the best builder of transportation systems, highways, media centers, and Formula One tracks.  His personal connection with Putin is no doubt totally coincidental: can he help it if he has such varied talents?

These two stories illustrate different aspects of the Russian natural state.  The elites get fed. Kormlenie lives.  Sometimes courts do the feeding.  Sometimes the government feeds its friends, through contracts or restrictions on competition.  But the natural state rewards the connected.

And this is Russia’s curse.  This is why it is on the hamster wheel from hell.

February 2, 2013

Back to Futurization: The Consequences of Swap-O-Phobia

This week’s big derivatives story was about a public workshop at the CFTC on the issue of futurization.  I was one of the first people to comment on this issue, last year when ICE announced it was converting all of its energy swaps into energy futures.

That was Futurization 1.0. The conversion consisted merely of renaming “swaps” as “futures”.  In all economic dimensions (contract specs, execution, clearing) the contracts remained identical.  But the renaming allowed some energy swap users to escape the dreaded Swap Dealer designation.

Futurization 1.1 was the CME’s conversion, which illustrates some of the silly impacts of Frankendodd.  The CME has cleared energy swaps for going on 10 years now.  The parties would execute a swap, then submit it for clearing through an Exchange of Futures for Swaps (EFS) trade.  So the originally executed swap existed only until it was submitted for clearing, when it was replaced with economically equivalent futures contracts.  But no matter how short the swap’s life, the fact that it was born a swap meant that it counted towards the volume of swaps activity used to determine whether someone was a swap dealer.  So the process has now changed: market participants use (mainly) block trades of futures to create the positions they want, cutting out the interim swap step.

The transformation of the energy derivatives landscape, and the launch of swap futures by CME and Eris, are now raising the question of whether futurization will spread beyond energy.

One motive to eschew swaps in favor of futures is unlikely to exist in interest rate, credit, and other financial derivatives: the biggest market participants are likely to trade enough bespoke swaps, for which there are no futures equivalents, that they will be treated as swap dealers regardless.  This reduces their incentive to substitute futures for swaps.

The main driver of futurization outside of energy will be the CFTC’s regulatory treatment of futures and swaps.  Differential treatment will affect the economics of futures vis-à-vis swaps, and the prospect of such differential treatment was the center of controversy at the CFTC meeting, and in the larger debate in the industry.

The main sources of differential treatment are execution and margining.  Swaps will have to be executed subject to (allegedly) soon-to-be-announced SEF rules, and are subject to immediate reporting.  Most market participants who substitute futures for swaps will execute these deals via privately negotiated block trades subject to exchange rules.  Crucially, reporting of block futures trades is delayed 15 minutes, a concession to fears that immediate reporting would impair liquidity: block positioners (those supplying liquidity to the block futures market) could find it difficult to lay off their risk if the fact that they had just done a big trade were announced immediately, which would induce them to require larger price concessions for doing block trades.

Which raises the question: why the difference between the futures goose and the swaps gander?  Similar considerations obtain for swaps.

Thus, this rule difference is likely to favor futures executed via block trades as opposed to swaps executed via traditional means (but with immediate post-trade transparency) or via SEFs.  (Economically material differences between the rules governing block trades and SEF trades could also affect the relative costs of trading in these disparate ways, and hence affect the choice between them.)

Margining will be different for swaps and futures.  Futures will be margined assuming a one- or two-day liquidation period (i.e., margins will be based on something analogous to a one- or two-day value at risk).  However, swaps, even cleared swaps, will require a minimum liquidation period of five days for calculating initial margin.

This substantially higher margin puts swaps at a substantial economic disadvantage relative to futures contracts that generate the exact same cash flows (in the absence of default).  This will likely be another factor pushing the market towards futures, which is a major reason for the fury of swaps dealers.
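To get a feel for the size of that wedge: under the common square-root-of-time scaling of a VaR-style margin, a five-day liquidation period implies roughly sqrt(5) ≈ 2.24 times the initial margin of a one-day period for the very same position.  A minimal sketch, where the volatility and confidence-level inputs are placeholders rather than actual CCP parameters:

```python
import math

def initial_margin(daily_vol, horizon_days, z=2.33):
    """VaR-style initial margin under square-root-of-time scaling.
    z ≈ 2.33 is the one-tailed 99% normal quantile (an assumption,
    not any particular CCP's methodology)."""
    return z * daily_vol * math.sqrt(horizon_days)

futures_im = initial_margin(daily_vol=1.0, horizon_days=1)  # "future": 1-day horizon
swap_im = initial_margin(daily_vol=1.0, horizon_days=5)     # "swap": 5-day minimum
ratio = swap_im / futures_im  # sqrt(5) ≈ 2.24: same cash flows, ~124% more margin
```

Identical contingent cash flows, identical daily risk, yet the contract labeled a "swap" posts well over twice the collateral.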

This differential treatment of economically equivalent contracts, just because one is called a “swap” (BAD!) and the other a “future” (GOOD!), makes little or no economic sense.  It reflects the swap-o-phobia that pervades DC, and which was an animating principle behind Frankendodd.

The reason to set margins based on a liquidation period reflects the purpose of IM: it is intended to cover potential losses on a defaulted position before that position can be liquidated by the clearinghouse (i.e., before the position can be assumed by another solvent market participant).  The less liquid an instrument, the longer it takes the CCP to work out of the position, the more the CCP is exposed to potential adverse price moves, and hence the more margin cover it needs.

This liquidation period depends on the economic characteristics of the contract; how widely it is traded in the marketplace; the identity, number, and capitalization of the firms that trade it; and market conditions at the time of liquidation. All of these should be pretty much the same for two contracts that specify the same contingent cash flows, one of which is called a swap and the other a future.  Consider, say, a 10-year vanilla IRS and economically equivalent futures contracts that offer the same contingent cash flows.  Abstracting from margining differences, these contracts have identical price risks, and should attract the same kinds of users.  Why should the liquidity of one differ from the liquidity of the other in the event of the default of the holder of a big position?

And don’t tell me that futures are traded in lit markets and swaps are traded in dark ones.  As already noted, futures traded as substitutes for swaps are, and will be, typically traded in blocks outside the limit order book.

More importantly: in the event of the default of a big FCM or dealer, which is the scenario that should really cause concern, the CCP is going to try to get rid of the defaulted portfolio in big chunks through some sort of auction process.  The firm winning the auction will then trade out of the position, likely using the central market and block trades.

This isn’t much different from how a swap CCP will handle the default process.  Indeed, LCH obligates its members to assume portions of a defaulted portfolio, and one reason for having stringent CCP membership requirements is to ensure that the members have the capital and trading expertise to manage big pieces of defaulted portfolios.

We actually have a case study of futures and swap default management processes.  When Lehman defaulted, CME auctioned off its interest rate, equity, currency, commodity, and energy positions.  These trades were done at differentials from market prices, to reflect the risk that those taking over the positions would assume and have to work off.  A couple of the positions were under-margined: the firms taking over the positions required the CME to provide more funds than the Lehman margin it had against those positions.  As it turned out, the other positions were over-margined, and across all positions the over-margining exceeded the under-margining, meaning that the CME clearinghouse did not suffer any loss as the result of the default.  But this illustrates the possibility that futures can be under-margined, and that even the putatively more liquid futures contracts can require substantial price concessions to get someone to assume them.

LCH.Clearnet handled the default of the Lehman IRS positions. These totaled $9 trillion in notional, bigger than the Lehman futures positions at CME, and arguably far larger in terms of risk.  ($1 trillion in notional of a 10-year IRS poses substantially more risk than $1 trillion in notional of Eurodollar futures.)  Ninety percent of the position was hedged within a week.  According to LCH.Clearnet, the costs of trading out of the defaulted position were “well within” the Lehman margins it held.
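The parenthetical about risk per dollar of notional can be made concrete with a crude DV01 (dollar value of a basis point) comparison.  The 8.5-year effective duration used here for a 10-year par swap is a rough assumption for illustration; the Eurodollar contract's sensitivity follows from its three-month underlying:

```python
def dv01(notional, duration_years, bp=1e-4):
    """First-order dollar change in value for a 1 basis point rate move."""
    return notional * duration_years * bp

# $1 trillion notional in each instrument
eurodollar_dv01 = dv01(1e12, 0.25)  # 3-month deposit underlying the future
irs_10y_dv01 = dv01(1e12, 8.5)      # ~8.5y duration for a 10y par swap (assumption)

ratio = irs_10y_dv01 / eurodollar_dv01  # roughly 34x the rate risk per dollar of notional
```

So equal notionals are wildly unequal risks: on these assumptions the swap book moves about $850 million per basis point against roughly $25 million for the Eurodollar position, which is why the LCH default was arguably the bigger test.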

It’s hard to see much of an economic difference between the CME and LCH experiences.  It doesn’t appear that it was materially more difficult to manage the default of “swaps” than of “futures.”  The Lehman default provides no evidence that swap-o-phobia is anything but a mental illness.

Further, this case study illustrates that there is no reason to believe that, absent government regulation, swaps will be under-margined and futures will not.  Indeed, had the Lehman positions been held at five separate CCPs, two of them would have been under-margined.

Indeed, there are serious reasons to be concerned that government regulations of the margining of economically similar (and in some respects identical) contracts will create systemic risks, rather than mitigate them.

The margin regulations, with larger IM imposed on swaps than on futures, are a form of price control, in this case a default risk price control.  And we know that price controls work out swell, especially when applied by regulators in a two-sizes-fits-all fashion across all the different kinds of instruments posing different risks and cleared by CCPs with potentially disparate financial strength.

Moreover, this price control will tend to push activity towards futures CCPs.  Whereas in the absence of the margin differential, clearing activity in, say, interest rate derivatives would be split between futures CCPs and swap CCPs, under the differential regulation business will tend to tip to the futures CCPs, most notably CME.  This will tend to lead to greater concentration of credit risk.

Wasn’t the whole point of Frankendodd to reduce concentration? Just asking.

The whole conceit that regulators can set risk prices is quite dangerous, and is likely to be a source of systemic risk because these prices are set on the basis of highly limited information in a politicized process, and because mistakes in setting risk prices (especially price differences between similar products) tend to lead to crowded trades and risk concentration that is highly destabilizing during crisis periods (cf. Basel II).

The case for setting different margins (default risk prices) on very similar, and in some cases identical, derivatives contracts is particularly weak.  It is the characteristics of the products and those who trade them that will drive liquidity and liquidation periods: products called “futures” that share the same economic characteristics as products called “swaps” will pose virtually the same challenges to liquidate in the event of a large default.  Given this, the economics suggest that imposing differential margins based on a difference in name, rather than economic differences, can’t make things better, and could well make them worse.

But the Sorcerer’s Apprentices know better. So futures are likely to get an artificial advantage, and swaps an artificial burden. Such distortions are quite dangerous.  My conclusion? Futurize in haste, repent at leisure.

April 12, 2012

How Do You Like Them Apples?

Filed under: Economics,Regulation — The Professor @ 3:39 pm

The US DOJ has filed suit against Apple and some book publishers over their agreement to replace the existing wholesale model for ebooks with an “agency model.”  Under the old model, publishers would charge a wholesale price to a retailer, and the retailer would choose its own price.  Under the agency model, the publishers would set the retail price, and the agent/retailer (e.g., Apple) would receive a fixed margin (30 percent).

I am inherently more skeptical about antitrust actions involving vertical restrictions, although there is an element of horizontal coordination here.  Normally, you would expect wholesalers/manufacturers to love retailers who sell at very low margins, and to resist guaranteeing a pretty fat margin, since lower margins downstream increase the derived demand at the wholesale level.  So why would publishers want higher margins downstream?  That suggests there was some inefficiency associated with the wholesale model that the agency model was intended to correct.

The agency model is, in essence, a resale price maintenance strategy.  Once upon a time these were per se illegal, but scholarship pioneered by my thesis advisor Lester Telser revolutionized understanding of this practice, and led to RPM being evaluated on a rule of reason basis.

Telser argued that free rider problems could undermine the incentives of retailers to provide information and special services that increased the demand for a wholesaler’s product: a consumer could go to a retailer that provided the information/service, then make the actual purchase from a cut-price retailer that didn’t.  Due to the discounter’s cut-rate price, the information/service-providing retailer wouldn’t get any return from supplying these (costly) services, so few or no services would be offered.  This would reduce the demand for the wholesaler’s product.  Wholesalers/manufacturers would thus have an incentive to guarantee a retail margin to prevent undercutting by free riders.
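Telser's logic can be put in toy numbers (all of which are hypothetical, chosen only to make the mechanism visible): suppose a retail service costs s per unit to provide and raises consumers' willingness to pay by v > s.  Without a margin guarantee the discounter free-rides and the service vanishes; with RPM the service is funded and surplus rises:

```python
# Stylized, hypothetical numbers
wholesale = 10.0   # wholesale price per unit
s = 2.0            # per-unit cost of the retail service
v = 5.0            # lift in willingness to pay from the service (v > s)
base_wtp = 12.0    # willingness to pay with no service

# Without RPM: a discounter free-rides and prices at wholesale (zero margin),
# so no retailer can recover s; the service disappears.
price_no_rpm = wholesale
surplus_no_rpm = base_wtp - price_no_rpm            # consumer surplus per unit

# With RPM: a price floor of wholesale + s lets full-service retailers
# recover the service cost; consumers pay more but value rises by v.
price_rpm = wholesale + s
surplus_rpm = (base_wtp + v) - price_rpm            # higher despite the higher price
```

Since v > s, the surplus gain from the service exceeds the price increase needed to fund it, which is why a rule-of-reason treatment of RPM makes sense even though the practice raises retail prices.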

I don’t know enough about book publishing to opine whether resellers provide services or information that is subject to free riding.

In any event, publishers didn’t seem to be focusing on that issue.  It was Amazon’s dominant position as a retailer that seemed to be the reason for changing the model.

Time permitting, I’ll give this some more thought.  Several questions suggest to me that this will be a challenge for the DOJ.  In particular, what were the publishers getting from Apple (and potentially other resellers adopting the model) that would make it worthwhile to guarantee the resellers a fat margin? Why didn’t the publishers love the fact that Amazon was willing to sell at a very low margin?  These two questions suggest this is not a garden variety horizontal price fixing case, and that the conduct (and contracts) at issue were designed to address a free rider problem or some other source of distortion.  How does Amazon’s potential monopsony power affect the analysis?  What are the implications of the high fixed cost, zero marginal cost nature of epublishing? Specifically, when combined with the potential for monopsony power downstream, could finding the agency model illegal have adverse dynamic implications (fewer books published, for instance)?

A lot of complicated issues here, and lurid tales of secretive dinners in London tell me zip about the economics.  My (rebuttable) presumption is that vertical restrictions usually have efficiency purposes.  The agreement with Apple shares similarities with other efficiency-enhancing vertical restrictions, and doesn’t make a lot of sense as a means of facilitating collusion among publishers.  To persuade me (not that that matters in the scheme of things), the government would have to explain why colluding publishers would use this form of agreement, and why they would want to guarantee retailers a big margin.

I am sure the defendants’ lawyers will make every effort to turn this into a vertical restrictions case instead of a price fixing case.  They seem to have considerable material to work with.
