Streetwise Professor

November 24, 2018

This Is What Happens When You Slip Picking Up Nickels In Front of a Steamroller

Filed under: Commodities,Derivatives,Energy,Exchanges — cpirrong @ 7:14 pm

There are times when going viral is good.  There are times it ain’t.  This is one of those ain’t times.  Being the hedgie equivalent of Jimmy Swaggart, delivering a tearful apology, is not a good look.

James Cordier ran a hedge fund that blowed up real good.   The fund’s strategy was to sell options, collect the premium, and keep fingers crossed that the markets would not move bigly.  Well, OptionSellers.com sold NG and crude options in front of major price moves, and poof! Customer money went up the spout.

Cordier refers to these price moves as “rogue waves.”  Well, as I said in my widowmaker post from last week, the natural gas market was primed for a violent move: low inventories going into the heating season made the market vulnerable to a cold snap, which duly materialized, and sent the market hurtling upwards.   The low pressure system was clearly visible on the map, and the risk of big waves was clear: a rogue wave out of the blue this wasn’t.

As for crude, the geopolitical, demand, and output (particularly Permian) risks have also been crystallizing all autumn.  Again, this was not a rogue wave.

I’m guessing that Cordier was short natural gas calls, and short crude oil puts, or straddles/strangles on these commodities.  Oopsie.

Selling options as an investment strategy is like picking up nickels in front of a steamroller.  You can make some money if you don’t slip.  If you slip, you get crushed.  Cordier slipped.

Selling options as a strategy can be appealing.  It’s not unusual to pick up quite a few nickels, and think: “Hey.  This is easy money!” Then you get complacent.  Then you get crushed.

Selling options is effectively selling insurance against large price moves.  You are rewarded with a risk premium, but that isn’t free money.  It is the reward for suffering large losses periodically.
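To see the shape of that payoff, here is a minimal simulation–every number invented, and far simpler than any real option book: collect a small premium each month, and once in a while give a multiple of it back.

import numpy as np

rng = np.random.default_rng(7)
months = 120                      # ten years of premium collection
premium = 1.0                     # collected each month (illustrative)
crash_prob = 0.02                 # monthly chance the steamroller catches you
crash_loss = 60.0                 # give back 60 premiums when it does

pnl = np.where(rng.random(months) < crash_prob, premium - crash_loss, premium)
print(pnl.sum(), pnl.cumsum().min())   # total P&L, and worst point of the equity curve

With these made-up parameters the expected monthly P&L is 1 – 0.02 x 60 = –0.2: a long string of winning months, and a negative expectation.  That is the complacency trap in four lines.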

It’s not just neophytes that get taken in.  In the months before Black Monday, floor traders on CBOE and CME thought shorting out-of-the-money, short-dated options on the S&Ps was like an annuity.  Collect the premium, watch them expire out-of-the-money, and do it again.   Then the Crash of ’87 happened, and all of the modest gains that had accumulated disappeared in a day.

Ask Mr. Cordier–and his “family”–about that.


November 17, 2018

Read Financial Journalism For the Facts, Not the Analysis

Filed under: Commodities,Derivatives,Economics,Energy — cpirrong @ 7:19 pm

One of the annoying things about journalism is its predilection to jam every story into an au courant narrative.  Case in point: this Bloomberg story attributing a fall in bulk shipping rates (as measured by the Baltic Freight Index) to the trade war.  Leading the story is the fact that iron ore and coal charter rates have fallen about 40 percent since August. The connection between these segments in particular and the trade war is hard to fathom, and the article really doesn’t try to make the case, beyond quoting a shipping industry flack.

An earlier version of the story included a few paragraphs (deleted in the version now online) about grain shipping, stating that grain charter rates had also fallen, since the decline in shipments from the US to China had depressed the rates for smaller ships.  It was not clear from the writing whether “smaller ships” simply meant that grain is carried in smaller vessels than ore or coal, or that among grain carriers, the smaller ones have been hit hardest.  If the former, it’s by no means clear that the trade war should reduce shipping rates for most grain carriers.  Indeed, by disrupting logistics and forcing longer, less efficient voyages, Chinese restrictions on US oilseed imports effectively reduce shipping supply, which is bullish for rates.  If the latter, yes, it is possible that the demand for smaller ships that normally operate from the USWC to China has fallen, but this can hardly explain a fall in the Baltic Index, which is based on Capesize, Panamax, and Supramax voyages, not (as of March 2018) Handymax, let alone Handy-sized, vessels.  (Perhaps this is why the paragraphs disappeared.)

Bulk shipping rates are used as an indicator of world economic activity: Lutz Kilian pioneered the use of freight rates as a proxy for world economic conditions.  Thus, it’s more likely that the decline in the BFI is a harbinger of slowing global growth–and growth in China in particular.  There are other indications that this is happening.

Yes, the trade war may be impacting the Chinese economy, but it is more likely that it is just the icing on the cake, with the main ingredients of any Chinese decline (which is indicated by weakening asset prices and lower official GDP numbers, though those always must be taken with mines of salt) being structural and financial imbalances.

If you are going to look to freight markets for evidence of the impact of the trade war, it would be better to look at container rates, which have actually been increasing robustly while bulk rates have declined.

While I’m on the subject of pet peeves relating to journalism, another Bloomberg story comes to mind.  This one is about oil hedging:

The plunge in oil prices may finally make oil producers’ hedging contracts into a financial winner for 2018.

After more than a year of surging prices made the contracts a drag on profits, the slide in West Texas Intermediate crude to around $55 a barrel this month means some of the hedges are edging toward profitability, said Anastacia Dialynas, a Bloomberg NEF analyst.

Uhm, that’s not the point.  Just as this article misses the point:

There’s a downside to oil prices being up that could cost the industry more than $7 billion.

When crude markets slumped, explorers used hedging contracts to lock in payments for future barrels to ride out prices that fell as low as $27 a barrel in 2016. Now, as global tensions and OPEC supply cuts drive prices toward $70 in New York, those financial insurance policies have become a drag on profits, limiting some companies from cashing in on the rally.

Even the title of this week’s article is idiotic: “Hedging Bets.”  What would those be, exactly?  “Hedging bet” (as distinguished from “hedging your bets”) is pretty much an oxymoron.  If hedge is any kind of bet, it is a bet on the basis–but that’s not what these articles are talking about.  They focus on flat prices.

The point of these contracts is to reduce exposure to flat prices, and to reduce the sensitivity of revenue to price fluctuations.  The hedger gives up the upside during high price environments to pay for a cushion on the downside in low price environments.  Thus, if anything, these articles show hedges performing as expected.  They are in the money in low price environments, and out in high price ones, thereby offsetting the vicissitudes of revenues from oil production.
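A stylized example of that holistic view (all numbers hypothetical): a producer sells 1,000 barrels forward at $60.  Whatever the spot price turns out to be, total revenue is pinned at $60,000–the “hedging loss” in a rally is exactly offset by the extra revenue on the physical barrels.

def hedged_revenue(spot_at_delivery, futures_sold_at=60.0, barrels=1_000):
    physical = spot_at_delivery * barrels                          # sell the oil at market
    futures_pnl = (futures_sold_at - spot_at_delivery) * barrels   # short futures hedge
    return physical + futures_pnl

for spot in (27.0, 55.0, 70.0):
    print(spot, hedged_revenue(spot))   # 60,000 every time

Judging the futures leg in isolation at a $70 spot price–a $10,000 “loss”–is exactly the error these articles make.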

The problem with journalism regarding hedging (and these articles are just the latest installments in a large line of clueless pieces) is that it doesn’t view things holistically.  It views the derivatives in isolation, which is exactly the wrong thing to do.

Journalists are not the only ones to commit this error.  Some financial analysts hammer companies that show big accounting losses on hedge positions.  “The company would have made $XXX more if it hadn’t hedged.  Dumb management!” Er, this requires the ability to predict prices, and if you can do this, you wouldn’t be hedging–and if it’s so easy, you shouldn’t be a financial analyst, but a fabulously wealthy trader living large on a yacht that would make a Russian oligarch jealous.

Derivatives losses deserve scrutiny when they are not (approximately) offset by gains elsewhere.  This can occur if the positions are actually speculative, or when there is a big move in the basis.  In the latter case, the relevant question is whether the hedge was poorly designed, and involved more basis risk than necessary, or whether the story should be filed under “stuff happens.”

Which brings me to a recommendation regarding consumption of most financial journalism.  Look at it as a source of factual information that you can analyze using solid economics, NOT as a source of insightful analysis.  Because too many financial journalists wouldn’t know solid economics if it was dropped on them from a great height.


November 14, 2018

Return of the Widowmaker–The Theory of Storage in Action

Filed under: Commodities,Derivatives,Economics,Energy — cpirrong @ 7:37 pm

I’m old enough to remember when natural gas futures–and the March-April spread in particular–were known as the widowmakers.  The volatility in the flat price and especially the spread could crush you in an instant if you were caught on the wrong side of one of the big movements.

Then shale happened, and the increase in supply, and in particular the increase in the elasticity of supply, dampened flat price volatility.  The growth in production and relatively temperate weather encouraged a buildup in inventories, which helped tame the HJ (March-April) spread.  But the storage build in 2018 was well below historical averages–a 15-year low.  Add in a dash of cold weather, and the widowmaker is back, baby.

To put some numbers to it, today the March flat price was up 76 cents/mmbtu, and the HJ spread spiked 71.1 cents.  The spread settled yesterday at  $.883 and settled today at $1.594.  So for you bull spreaders–life is good.  Bear spreaders–not so much.

The March-April spread is volatile for structural reasons, notably the seasonality of demand combined with relatively inflexible output in the short run.  As I tell my students, the role of storage is to move stuff from when it’s abundant to when it’s scarce–but you can only move in one direction, from the present to the future.  You can’t move from the future to the present.  Given the seasonal demand for gas, it is scarce in the winter and abundant in the spring, so carrying inventory from winter to spring would move supply from when it’s scarce to when it’s abundant.  You don’t want to do that, so the best you can do is limit what you carry over.

Backwardation is the price signal that gives the incentive to do that: a March price above the April price tells you that you are locking in a loss by carrying inventory from March to April.   Given the seasonality in demand, the HJ spread should therefore be backwardated in most years, and indeed that’s the case.

But this has implications for the volatility in the spread, and its susceptibility to big jumps like experienced today.  Inventory is what connects prices today with prices in the future.  With it being optimal to carry little or no inventory (a “stockout”)  from winter to spring, the last winter month contract price (March) has little to connect it with the first spring contract price (April).  Thus, a transient demand shock–and weather shocks are transient (which is why the world hasn’t burned up or frozen)–during the heating season affects that season’s prices but due to the lack of an inventory connection little of that shock is communicated to spring prices.

And that’s exactly what we saw today.  Virtually all the spread action was driven by the March price move–a 76 cent move–while the April price barely budged, moving up less than a nickel.
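The arithmetic ties out (the April move below is inferred from the reported March move and spread change):

march_move = 0.76                               # $/mmbtu, March flat price move
spread_yesterday, spread_today = 0.883, 1.594
spread_move = spread_today - spread_yesterday   # 0.711
april_move = march_move - spread_move           # ~0.049: "less than a nickel"
print(spread_move, april_move)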

That’s the theory of storage in action.  Spreads price constraints.  For example, Canadian crude prices are in the dumper now relative to Cushing because of the constraint on getting crude out of the frozen North.  The March-April natty spread prices the Einstein Constraint, i.e., the impossibility of time travel.  We can’t bring gas from spring 2019 back to winter 2019.  Given the seasonality of demand, the best we can do is to NOT bring gas from winter 2019 to spring 2019.  Winter prices must adjust to ration the supply available before the spring (existing inventory plus production through March).  That supply is relatively fixed (inventory is definitely fixed, and production is pretty much fixed over that time frame), so an increase in demand due to unexpectedly cold winter weather can’t be accommodated by an increase in supply; it must be rationed by an increase in price.  The Einstein Constraint plus relatively inflexible production plus seasonal demand combine to make the inter-seasonal spread an SOB.

There will be a test.  Math will be involved.


October 18, 2018

Ticked Off About Spoofing? Consider This

Filed under: Commodities,Derivatives,Economics,Exchanges,Politics,Regulation — cpirrong @ 6:51 pm

An email from a legal academic in response to yesterday’s post spurred a few additional thoughts re spoofing.

One of my theories of spoofing is that it is a way to improve one’s position in the queue at the best bid or offer.  Why does one stand in a queue?  Why does one want to be closer to the front?

Simple: because there is a rent there to capture.  Where does the rent come from?  When what you are queuing for is underpriced, likely due to some price control.  Think of gas lines, or queues for sausage in the USSR.

In market making, the rent exists because the benefit from executing at the bid or offer exceeds the cost.  The cost arises from (a) adverse selection costs, and (b) inventory cost/risk and other costs of participation.  What is the source of the price control?  The tick size.

Exchanges set a minimum price increment–the “tick.”  When the tick size exceeds the costs of making a market, there is a rent.  This makes it beneficial to increase the probability of execution of an at-the-market limit order, i.e., if the tick size exceeds the cost of executing a passive order, it pays to game to move up in the queue.  Spoofing is one way of gaming.
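A back-of-the-envelope version of the rent, with every number hypothetical: on a contract like the ES, the minimum tick is worth .25 x $50 = $12.50, and if the quoted spread is pinned at one tick, that is what a passive round trip earns before costs.

tick_dollars = 0.25 * 50      # minimum price increment, in dollars per contract
adverse_selection = 7.50      # assumed loss to informed flow per round trip
inventory_and_other = 2.50    # assumed inventory/participation costs
rent = tick_dollars - adverse_selection - inventory_and_other
print(rent)                   # $2.50 per round trip: queue position is worth gaming for

If the last two numbers summed to more than $12.50 the rent would vanish–and with it, on this theory, the incentive to spoof.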

This has a variety of implications.

One implication is in the cross section: spoofing should be more prevalent when the non-adverse selection component of the spread (which is measured by temporary price movements in response to trades) is large.  Relatedly, this implies that spoofing should be more likely, the more negatively autocorrelated are transaction prices, i.e., the bigger the bid-ask bounce.

Another implication is in the time series.  Adverse selection costs can vary over time.  Spoofing should be more prevalent during periods when adverse selection costs are low.  These should also be periods of unusually large negative autocorrelations in transaction prices.
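The bid-ask bounce is measurable.  Roll’s (1984) estimator backs the effective spread out of the negative first-order autocovariance of transaction price changes–a minimal sketch on simulated trades, with the fundamental value held constant for clarity:

import numpy as np

rng = np.random.default_rng(0)
half_spread = 0.125                            # half of a 25-cent tick
sides = rng.choice([-1.0, 1.0], size=10_000)   # buyer- vs. seller-initiated trades
prices = 100.0 + half_spread * sides           # trades bounce between bid and ask
dp = np.diff(prices)
autocov = np.cov(dp[1:], dp[:-1])[0, 1]        # negative: that's the bounce
print(2 * np.sqrt(-autocov))                   # Roll estimate of the spread, ~0.25

On this theory, instruments and periods with large Roll estimates relative to adverse selection costs are where spoofing should cluster.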

Another implication is that if you want to reduce spoofing  . . .  reduce the tick size.  Given what I just discussed, tick size reductions should be focused on instruments with a bigger bid-ask bounce, i.e., a larger non-adverse-selection-driven spread component.

That is, why police the markets and throw people in jail?  Mitigate the problem by reducing the incentive to commit the offense.

This story also has implications for the political economy of spoofing prosecution (which was the main thrust of the email I received).  HFT/algo traders who desire to capture the rent created by a tick that exceeds adverse selection cost should complain the loudest about spoofing–and are most likely to drop the dime on spoofers.  Casual empiricism supports at least the first of these predictions.

That is, as my correspondent suggested to me, not only are spoofing prosecutions driven by ambitious prosecutors looking for easy and unsympathetic targets, they generate political support from potentially politically influential firms.

One way to test this theory would be to cut tick sizes–and see who squeals the loudest.  Three guesses as to whom this might be, and the first two don’t count.


October 17, 2018

The Harm of a Spoof: $60 Million? More Like $10 Thousand

Filed under: Commodities,Derivatives,Economics,Exchanges,Regulation — cpirrong @ 4:08 pm

My eyes popped out when I read this statement regarding the DOJ’s recent criminal indictment (which resulted in some guilty pleas) for spoofing in the S&P 500 futures market:

Market participants that traded futures contracts in these three markets while the spoof orders distorted market prices incurred market losses of over $60 million.

$60 million in market losses–big number! For spoofing! How did they come up with that?

The answer is embarrassing, and actually rather disgusting.

The DOJ simply calculated the notional value of the contracts that were traded pursuant to the alleged spoofing scheme.  They took the S&P 500 futures price (e.g., 1804.50), multiplied that by the dollar value of a price point ($50), and multiplied that by the “approximate number of fraudulent orders placed” (e.g., 400).

So the defendants traded futures contracts with a notional value of approximately $60+ million.  For the DOJ to say that anyone “incurred market losses of over $60 million” based on this calculation is complete and utter bollocks.  Indeed, if someone touted that their trading system earned market profits of $60 million based on such a calculation in order to get business from the gullible, I daresay the DOJ and SEC would prosecute them for fraud.

This exaggeration is of a piece with the Sarao indictment, which claimed that his spoofing caused the Flash Crash.

And of course the financial press credulously regurgitated the number the DOJ put out.

I know why DOJ does this–it makes the crime look big and important, and likely matters in sentencing.  But quite frankly, it is a lie to claim that this number accurately represents in any way, shape, or form the economic harm caused by spoofing.

This gets to the entire issue of who is damaged by spoofing, and how.  Does spoofing induce someone who would otherwise not have entered an aggressive order to cross the spread and incur the bid-ask cost?  Does it cause someone to cancel a limit order, and therefore lose the opportunity to trade against an aggressive order and earn the spread (the realized spread, not the quoted spread, in order to account for losses to better-informed traders)?

Those are realistic theories of harm, and they imply that the economic harm per contract is on the order of a tick in a liquid market like the ES.  That is, per contract executed as a result of the spoof, the damage is .25 (the tick size) times $50 (the value of an S&P point)–a whopping $12.50.  So, pace the DOJ, the ~800 “fraudulent orders placed” caused economic harm of about 10,000 bucks, not 60 mil.  Maybe $20,000, under the theory that in a particular spoof, someone lost from crossing the spread, and someone else lost out on the opportunity to earn the spread.  (Though interestingly, from a social perspective, that is a transfer, not a true loss.)
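The two calculations side by side, using the indictment’s example numbers:

sp_price, point_value = 1804.50, 50.0
doj_notional = sp_price * point_value * 400   # ~$36 million for one market's example orders
tick_harm = 0.25 * point_value                # $12.50 per contract
realistic_harm = tick_harm * 800              # ~$10,000 across the ~800 orders
print(doj_notional, realistic_harm)

Sum the notional calculation across the three markets and you get the DOJ’s $60+ million; sum the tick-sized harm and you get lunch money for a law firm.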

But $10,000 or $20,000 looks rather pathetic, compared to say $60 million, doesn’t it?  What’s three orders of magnitude between friends, eh?

Yes, maybe the DOJ just included a few episodes in the indictment, because that is sufficient for a criminal prosecution and conviction.  But even a lot more of such episodes does not add up to a lot of money.

This is precisely why I find the expenditure of substantial resources to prosecute spoofing to be so dubious.  There is other financial market wrongdoing that is far more harmful, which often escapes prosecution.  Furthermore, efficient punishment should be sized to the harm.  People pay huge fines, and go to jail–for years–for spoofing.  That punishment is hugely disproportionate to the loss, under the theory of harm that I advance here.  So spoofing is over-deterred.

Perhaps there are other theories of harm that justify the severe punishments for spoofing.  If so, I’d like to hear them–I haven’t yet.

These spoofing prosecutions appear to be a case of the drunk looking for his wallet (or a scalp) under the lamppost, because the light is better there.  In the electronic trading era, spoofing is possible–and relatively cheap to detect ex post.  So just trawl through the trading data for evidence of spoofing, and voila!–a criminal prosecution is likely to appear.  A lot easier than prosecuting market power manipulations that can cause nine and ten figure market losses.  (For an example of the DOJ’s haplessness in a prosecution of that kind of case, see US v. Radley.)

Spoofing is the kind of activity that is well within the competence of exchanges to detect and punish using their ordinary disciplinary procedures.  There’s no need to make a federal case out of it–literally.

The time should fit the crime.  The Department of Justice wildly exaggerates the crime of spoofing in order to rationalize the time.  This is inefficient, and well, just plain unjust.


September 26, 2018

We’re From the International Maritime Organization, and We’re Here to Help You: The Perverse Economics of New Maritime Fuel Standards

Filed under: Climate Change,Commodities,Economics,Energy,Politics,Regulation — cpirrong @ 6:26 pm

This Bloomberg piece from last month claims that the International Maritime Organization’s looming 2020 caps on sulfur emissions from ships “could lift crude prices by $4 a barrel when the measures come into effect in 2020.”

Not so fast.  It depends on what you mean by “crude.”  According to the International Oil Handbook, there are 195 different streams of crude oil.  Crucially, the sulfur content of these crudes varies from basically zero to 5.9 percent.  There is no such thing as the price of “crude,” in other words.

The IMO regulation will have different impacts on different crudes.  It will no doubt cause the spread between sweet and sour crudes to widen.  This happened in 2008, when European regulation mandating low sulfur diesel kicked in: this regulation contributed to the spike in benchmark Brent and WTI prices, and wide spreads in crude prices.  During this time (if memory serves), 10 VLCCs full of Iranian crude were swinging at anchor while WTI and Brent prices were screaming higher and sweet crude inventories were plunging, precisely because the regulation increased the demand for sweet crude and depressed demand for heavier, more sour varieties.

The IMO regulation will definitely reduce the demand for crude oil overall.   The demand for crude is derived from the demand for fuels, notably transportation fuels.  The regulation increases the cost of some transportation fuels, which decreases the (derived) demand for crude.  This change will not be distributed evenly, with demand for light, sweet crudes actually increasing, but demand for sour crudes falling, with the fall being bigger, the more sour the crude.

The regulation will hit ship operators hard, and they will pass on the higher cost to shippers.  In the short run, carriers will eat some of the cost–perhaps the bulk of it.  But the long run supply elasticity of shipping is large (arguably close to perfectly elastic), meaning after fleet size adjusts shippers will bear the brunt.

The burden will fall heaviest on commodities, for which shipping cost is large relative to value.  Therefore, farmers and miners will receive lower prices, and consumers will pay higher prices for commodity-intensive goods.  Further, this regulatory tax will be highly regressive, falling on relatively low income individuals, who spend a higher share of their income on such goods.

This seems to be a case of almost all pain, little gain.  The ostensible purpose of the regulation is to reduce pollution from sulfur emissions.  Yes, ships will produce fewer such emissions, but due to the joint product nature of refined petroleum, overall sulfur emissions will fall far less.

Many ships currently use “bottom of the barrel” fuel oil that tends to be higher in sulfur.  Many will achieve compliance by shifting to middle distillates.  But the bottom of the barrel won’t go away.  Over the medium to longer term, refineries will make investments that allow them to squeeze more middle distillates out of a barrel of crude, or to remove some sulfur, but inevitably refineries will produce some low-quality, high sulfur products: the sulfur has to go somewhere.  This is inherent in the joint nature of fuel production.

And yes, there will be some adjustments on the crude supply side, with the differential between sweet and sour crude favoring production of the former over the latter.   But sour crudes will be produced, and new discoveries of sour crude will be developed.

Meaning that although ships’ consumption of high sulfur fuels will go down, someone will consume most of the fuel oil that ships no longer use: (a) in equilibrium consumption equals production, and (b) due to the joint nature of production, the output of high sulfur fuels will fall by less than ships’ consumption of them does.  And since someone is consuming it, they will emit the sulfur.

The most likely (near term) use of fuel oil is for power generation.  The Saudis are planning to ramp up the use of 3.5 percent sulfur fuel oil to generate power for AC and desalinization.  Other relatively poor countries (e.g., Bangladesh, Pakistan) are also likely to have an appetite for cheap high sulfur fuel oil to generate electricity.

The ultimate result will be a regulation that basically shifts who produces the sulfur emissions, with a far smaller impact on the total amount of emissions.

This represents a tragic–and classic–example of a regulation imposed on a segment of a larger market.  The pernicious effects of such a narrow regulation are particularly acute in oil, due to the joint nature of production.

Given the efficiency and distributive effects of the IMO regulation, it is almost certainly not a second best policy.  Indeed, it is more likely to be a second worst policy.  Or maybe a first worst policy: doing nothing at all is arguably better.


September 25, 2018

Default Is Not In Our Stars, But In Our (Power) Markets: Defaulting on Power Spread Trades Is Apparently a Thing

Filed under: Clearing,Commodities,Derivatives,Economics,Energy,Regulation — cpirrong @ 6:34 pm

Some other power traders–this time in the US–blowed up real good.   Actually preceding the Aas Nasdaq default by some months, but just getting attention in the mainstream press today, a Houston-based power trading company–GreenHat–defaulted on long-term financial transmission rights contracts in PJM.  FTRs are financial contracts that have cash-flows derived from the spread between prices at different locations in PJM.  Locational spreads in power markets arise due to transmission congestion, so FTRs can be used to hedge the risk of congestion–or to speculate on it.  FTRs are auctioned regularly.  In 2015 GreenHat bought at auction FTRs for 2018.  These positions were profitable in 2015 and 2016, but improvements in PJM transmission caused them to go underwater substantially in 2018.  In June, GreenHat defaulted, and now PJM is dealing with the mess.
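The contract mechanics, in a stylized sketch (actual PJM settlement works hour by hour off day-ahead congestion prices; the numbers here are hypothetical):

def ftr_payoff(congestion_at_sink, congestion_at_source, mw, hours):
    # the holder of a source-to-sink FTR collects the locational congestion spread
    return (congestion_at_sink - congestion_at_source) * mw * hours

# a 100 MW position over a 720-hour month, at a $2.50/MWh congestion spread:
print(ftr_payoff(4.00, 1.50, mw=100, hours=720))   # 180,000

Buy the FTR at auction for more than the congestion spread turns out to be worth–say, because transmission upgrades shrink the spread–and the position bleeds cash, which is evidently what happened to GreenHat.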

The cost of doing so is still unknown.  Under PJM rules, the organization is required to liquidate defaulted positions.  However, the bids PJM received for the defaulted portfolio were 4x-6x the prevailing secondary market price, due to the size of the positions, and the illiquidity of long-term FTRs–with “long term” being pretty much anything beyond a month.  Hence, PJM has asked FERC to waive the requirement for immediate liquidation, and the PJM membership has voted to suspend liquidating the defaulted positions until November 30.

PJM members are on the hook for the defaulted positions.  The positions were underwater to the tune of $110 million as of June–and presumably this was based on market prices, meaning that the cost of liquidating these positions would be multiples of that.  In other words, this blow up could put Aas to shame.

PJM operates the market on a credit system, and market participants can be required to post additional collateral.  However, long-term FTR credit is determined only on an annual basis: “In conjunction with the annual update of historical activity that is used in FTR credit requirement calculations, PJM will recalculate the credit requirement for long-term FTRs annually, and will adjust the Participant’s credit requirement accordingly. This may result in collateral calls if requirements increase.”  Credit on shorter-dated positions is calculated more frequently: what triggered the GreenHat default was a failure to make its payment on its June FTR obligation.

This event is resulting in calls for a re-examination of  PJM’s FTR credit scheme.  As well it should!  However, as the Aas episode demonstrates, it is a fraught exercise to determine the exposure in electricity spread transactions.  This is especially true for long-dated positions like the ones GreenHat bought.

The PJM episode reinforces the Aas episode’s lessons about the challenges of handling defaults–especially of big positions in illiquid instruments.  Any auction is very likely to turn into a fire sale that exacerbates the losses that caused the default in the first place.  Moral of the story: mutualizing default risk (either through a CCP, or a membership organization like PJM) can impose big losses on the participants in the risk pool.

The dilemma is that the instruments in question can provide valuable benefits, and that speculators can be necessary to achieve these benefits.  FTRs are important because they allow hedging of congestion risk, which can be substantial for both generation and load: locational spreads can be very volatile due to a variety of factors, including the lack of storability of power, non-convexities in generation (which can make it very costly to reduce generation behind a constraint), and generation capacity constraints and inelastic demand (which make it very costly to increase generation or reduce consumption on the other side of the constraint).  So FTRs play a valuable hedging role, and in most markets financial players are needed to absorb the risk.  But that creates the potential for default, and the very factors that make FTRs valuable hedging tools can make defaults very costly.

FTR liquidity is also challenged by the fact that unlike hedging, say, oil price risk or corn price risk, where a standard contract like Brent or CBT corn can provide a pretty good hedge for everyone, every pair of locations is a unique product that is not hedged effectively by an FTR based on another pair of locations.  The market is therefore inherently fragmented, which is inimical to liquidity.  This lack of liquidity is especially devastating during defaults.

So PJM (and other RTOs) faces a dilemma.  As the Nasdaq event shows, even daily marking to market and variation margining can’t prevent defaults.  Furthermore, moving to a no-credit system (like a CCP) isn’t foolproof, and is likely to be so expensive that it could seriously impair the FTR market.

We’ve seen two default examples in electricity this past summer.  They won’t be the last, due to the inherent nature of electricity.


September 20, 2018

The Smoke is Starting to Clear from the Aas/Nasdaq Blowup

Filed under: Clearing,Commodities,Derivatives,Economics,Energy,Exchanges,Regulation — cpirrong @ 11:08 am

Amir Khwaja of Clarus has a very informative post about the Nasdaq electricity blow-up.

The most important point: Nasdaq uses SPAN to calculate IM.  SPAN was a major innovation back in the day, but it is VERY long in the tooth now (2018 is its 30th birthday!).  Moreover, the most problematic part of SPAN is the ad hoc way it handles dependence risk:

  • Intra-commodity spreading parameters – rates and rules for evaluating risk among portfolios of closely related products, for example products with particular patterns of calendar spreads
  • Inter-commodity spreading parameters – rates and rules for evaluating risk offsets between related product

…..

CME SPAN Methodology Combined Commodity Evaluations

The CME SPAN methodology divides the instruments in each portfolio into groupings called combined commodities. Each combined commodity represents all instruments on the same ultimate underlying – for example, all futures and all options ultimately related to the S&P 500 index.

For each combined commodity in the portfolio, the CME SPAN methodology evaluates the risk factors described above, and then takes the sum of the scan risk, the intra-commodity spread charge, and the delivery risk, before subtracting the inter-commodity spread credit. The CME SPAN methodology next compares the resulting value with the short option minimum; whichever value is larger is called the CME SPAN methodology risk requirement. The resulting values across the portfolio are then converted to a common currency and summed to yield the total risk for the portfolio.
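In stylized form, the aggregation the quoted passage describes looks like this (a sketch of the logic, not CME’s actual implementation; all parameter values are placeholders):

def span_requirement(scan_risk, intra_spread_charge, delivery_risk,
                     inter_spread_credit, short_option_min):
    # per combined commodity: sum the charges, subtract the inter-commodity
    # credit, then floor the result at the short option minimum
    risk = scan_risk + intra_spread_charge + delivery_risk - inter_spread_credit
    return max(risk, short_option_min)

# a generous inter-commodity credit–say, Nordic and German power treated as
# closely related–directly shrinks the margin on a spread position:
print(span_requirement(1_000_000, 50_000, 0, 800_000, 100_000))   # 250,000

The rub is the inter_spread_credit parameter: it is a rule-of-thumb offset, not an estimate from the joint distribution of the two legs.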

I would not be surprised if the handling of Nordic-German spread risk was woefully inadequate to capture the true risk exposure.  Electricity spreads are strange beasts, and “rules for evaluating risk offsets” are unlikely to capture this strangeness correctly, especially given that electricity markets have idiosyncrasies that one-size-fits-all rules are unlikely to capture.  I also conjecture that Aas knew this, and loaded the boat with this spread trade because he knew that the risk was grossly underpriced.

There are reports that the Nasdaq margin breach at the time of default (based on mark-to-market prices) was not nearly as large as the €140 million hit to the default fund.  In these accounts, the bulk of the hit was due to the fact that the price at which Aas’ portfolio was auctioned off included a substantial haircut to prevailing market prices.

Back in the day, I argued that one of the real advantages to central clearing was a more orderly handling of defaulted portfolios than the devil-take-the-hindmost process in OTC bilateral markets (cf. the outcome of the LTCM disaster almost exactly 20 years ago–with the Fed-midwifed deal being completed on 23 September, 1998). (Ironically, spread trades were the cause of LTCM’s demise too.)

But the devil is in the details of the auction, and in market conditions at the time of the default–which are almost certainly unsettled, hence the default.  The CME was criticized for its auction of the defaulted Lehman positions: the bankruptcy trustee argued that the price CME obtained was too low, thereby harming the creditors.   The sell-off of the Amaranth NG positions in September, 2006 (what is it about September?!?) to JP Morgan and Citadel (if memory serves) was also at a huge discount.

Nasdaq has been criticized for allowing only 4 firms to bid: narrow participation was also the criticism leveled at CME and NYMEX clearing in the Lehman and Amaranth episodes, respectively.  Nasdaq argues that telling the world could have sparked panic.

But this episode, like Lehman and Amaranth before it, demonstrates the challenges of auctioning big positions.  Only a small number of market participants are likely to have the capital, or the risk appetite, to take on a big defaulted position in its entirety.  Thus, limited participation is almost inevitable, and even if Nasdaq had invited more bidders, there is room to doubt whether the fifth or sixth or seventh bidder would have been able to compete seriously with the four who actually participated.  Those who have the capital and risk appetite to bid seriously for big positions will almost certainly demand a big discount to compensate for the risk of holding the position until they can work it off.  Moreover, limited participation limits competition, which should exacerbate the underpricing problem.

Thus, even with a structured auction process, disposing of a big defaulted portfolio is almost inevitably something of a fire sale.  This is a risk borne by the participants in the default fund.  Although the exposure via the default fund is sometimes argued to be an incentive for the default fund participants to bid aggressively, this is unlikely because there are externalities: the aggressive bidder bears all the risks and costs, and provides benefits to the rest of the other members.  Free riding is a big problem.

In theory, equitizing the risk might improve outcomes.  By selling shares in the defaulted portfolio, no single or two bidders would have to absorb the entire position and risk could be spread more efficiently: this could reduce the risk discount in the price.  But who would manage the portfolio?  What are the mechanics of contributing to IM and VM?  Would it be like a bad bank, existing as a zombie until the positions rolled off?

Another follow-up from my previous post relates to the issue of self-clearing.  On Twitter and elsewhere, some have suggested that clearing through a third party would have been an additional check.  Surely an FCM would be less likely to fall in love with a position than the trader who puts it on, but the effectiveness of the FCM as a check depends on its evaluation of risk, and it may be no smarter than the CCP that sets margins.   Furthermore, there are examples of FCMs having the same trade in their house account as one of their big customers–perhaps because they think the client is really smart and they want to free ride off his genius.  As a historical example, Griffin Trading had a big trade in the same instrument and direction as its biggest client.  The trade went pear-shaped, the client defaulted, and Griffin did too.

I also need to look to see whether Nasdaq Commodities uses the US futures clearing model, which does not segregate positions.  If it does, and if Aas had cleared through an FCM, it is possible that the FCM’s clients could have lost money as a result of his default.  This model has fellow-customer risk: by clearing for himself, Aas did not create such a risk.

I also note that the desire to expand clearing post-Crisis has made it difficult and more costly for firms to find FCMs.  This problem has been exacerbated by the Supplementary Leverage Ratio.  Perhaps the cost of clearing through an FCM appeared excessive to Aas, relative to the alternative of self-clearing.  Thus, if regulators blanch at the thought of self-clearing (not saying that they should), they should get serious about addressing the FCM cost issue, and regulations that inflate these costs but generate little offsetting benefit.

Again, this episode should spark (no pun intended!) a more thorough reconsideration of clearing generally.  The inherent limitations of margin models, especially for more complex products or markets.  The adverse selection problems that crude risk models can create.  The challenges of auctioning defaulted portfolios, and the likelihood that the auctions will become fire sales.  The FCM capacity issue.

The supersizing of clearing in the post-Crisis world has also supersized all of these concerns.  The Aas blowup demonstrates all of them.  Will CCPs and regulators take heed? Or will some future September bring us the mother of all blowups?


September 18, 2018

He Blowed Up Real Good. And Inflicted Some Collateral Damage to Boot

I’m on my way back from my annual teaching sojourn in Geneva, plus a day in the Netherlands for a speaking engagement.  While I was taking that European non-quite-vacation, a Norwegian power trader, Einar Aas, suffered a massive loss in cleared spread trades between Nordic and German electricity.  The loss was so large that it blew through Aas’ initial margin and default fund contribution to the clearinghouse (Nasdaq), consumed Nasdaq’s €7 million capital contribution to the default fund, and €107 million of the rest of the default fund–a mere 66 percent of the fund.  The members have been ordered to contribute €100 million to top up the fund.

This was bound to happen. In a way, it was good that it happened in a relatively small market.  But it provides a sobering demonstration of what I’ve said for years: clearing doesn’t eliminate losses, but affects the distribution of losses.  Further, financial institutions that back CCPs–the members–are the ultimate backstops.  Thus, clearing does not eliminate contagion or interconnections in the financial network: it just changes the topology of the network, and the channels by which losses can hit the balance sheets of big players.

Happening in the Nordic/European power markets, this is an interesting curiosity.  If it happens in the interest rate or equity markets, it could be a disaster.

We actually know very little about what happened, beyond the broad details.  We know Aas was long Nordic power and short German power, and that the spread widened due to wet weather in Norway (which depresses the price of hydro and reduces demand) and an increase in European prices due to increases in CO2 prices.  But Nasdaq trades daily, weekly, monthly, quarterly, and annual power products: we don’t know which blew up Aas.  Daily spreads are more volatile, and exhibit more extremes (kurtosis), but since margins are scaled to risk (at least theoretically–more on this below) what matters is the market move relative to the estimated risk.  Reports indicate that the spread moved 17x the typical move, but we don’t know what measure of “typical” is used here.  Standard deviation?  Not a very good measure when there is a lot of kurtosis (or skewness).

I also haven’t seen how big Aas’ initial margins were.  The total loss he suffered was bigger than the hit taken by the default fund, because under the loser-pays model, the initial margins would have been in the first loss position.

The big question in my mind relates to Nasdaq’s margin model.  Power price distributions deviate substantially from the Gaussian, and estimating those distributions is challenging in part because they are also conditional on day of the year and hour of the day, and on fundamental supply-demand conditions: one model doesn’t fit every day, every hour, every season, or every weather environment.  Moreover, a spread trade has correlation risk–dependence risk would be a better word, given that correlation is a linear measure of dependence and dependencies in power prices are not linear.  How did Nasdaq model this dependence and how did that impact margins?

One possibility is that Nasdaq’s risk/margin model was good, but this was just one of those things.  Margins are set on the basis of the tails, and tail events occur with some probability.

Given the nature of the tails in power prices (and spreads) reliance on a VaR-type model would be especially dangerous here.  Setting margin based on something like expected shortfall would likely be superior here.  Which model does Nasdaq use?
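For what it’s worth, here is the difference on simulated fat-tailed P&L (Student-t with three degrees of freedom–purely illustrative, not a model of Nordic-German spreads):

import numpy as np

rng = np.random.default_rng(1)
pnl = rng.standard_t(df=3, size=1_000_000)   # fat-tailed daily P&L, mean zero
var99 = -np.quantile(pnl, 0.01)              # 99% value-at-risk
es99 = -pnl[pnl <= -var99].mean()            # expected shortfall: mean loss beyond VaR
print(var99, es99)                           # ES comes out materially above VaR

The fatter the tails, the bigger the gap between the two numbers–and the bigger the under-margining if IM is set off VaR.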

I can also see the possibility that Nasdaq’s margin model was faulty, and that Aas had figured this out.  He then put on trades that he knew were undermargined because Nasdaq’s model was defective, which allowed him to take on more risk than Nasdaq intended.

In my early work on clearing I indicated that this adverse selection problem was a concern in clearing, and would lead CCPs–and those who believe that CCPs make the financial system safer–to underestimate risk and be falsely complacent.  Indeed, I argued that one reason clearing could be a bad idea is that it was more vulnerable to adverse selection problems because the need to model the distribution of gains/losses on cleared positions requires detailed knowledge, especially for more exotic products.  Traders who specialize in these products are likely to have MUCH better understanding about risks than a non-specialist CCP.

Aas cleared for himself, and this has caused some to get the vapors and conclude that Nasdaq was negligent in allowing him to do so.  Self-clearing is just an FCM with a house account, but with no client business: in some respects that’s less risky than a traditional FCM with client business as well as its own trading book.

Nasdaq required Aas to have €70 million in capital to self-clear.  Presumably Nasdaq will get some of that capital in an insolvency proceeding, and use it to repay default fund members–meaning that the €114 million loss is likely an overestimate of the ultimate cost borne by Nasdaq and the clearing members.

Further, that’s probably similar to the amount of capital that an FCM would have had to have to carry a client position as big as Aas’.   That’s not inherently more risky (to the clearinghouse and its default fund) than if Aas had cleared through another firm (or firms).  Again, the issue is whether Nasdaq is assessing risks accurately so as to allow it to set clearing member capital appropriately.

But the point is that Aas had to have skin in the game to self-clear, just as an FCM would have had to clear for him.

Holding Aas’ positions constant, whether he cleared himself or through an FCM really only affected the distribution of losses, but not the magnitude.  If Aas had cleared through someone else, that someone else’s capital would have taken the hit, and the default fund would have been at risk only if that FCM had defaulted.  But the total loss suffered by FCMs would have been exactly the same, just distributed more unevenly.

Indeed, the more even distribution that occurred due to mutualization, which spread the default loss among multiple FCMs, might actually be preferable to having one FCM bear the brunt.

The real issue here is incentives.  My statement was that holding Aas’ positions constant, who he cleared through or whether he cleared at all affected only the distribution of losses.  Perhaps under different structures Aas might not have been able to take on this much risk.  But that’s an open question.

If he had cleared through another FCM, that FCM would have had an incentive to limit its positions because its capital was at risk.  But Aas’ capital was at risk–he had skin in the game too, and this was necessary for him to self-clear.  It’s by no means obvious that an FCM would have arrived at a different conclusion than Aas, and decided that his position represented a reasonable risk to its capital.

Here again a key issue is information asymmetry: would the FCM know more about the risk of Aas’ position, or less?  Given Aas’ allegedly obsessive behavior, and his long-time success as a trader, I’m pretty sure that Aas knew more about the risk than any FCM would have, and that requiring him to clear through another firm would not have necessarily constrained his position.  He would have also had an incentive to put his business at the dumbest FCM.

Another incentive issue is Nasdaq’s skin in the game–an issue that has exercised FCMs generally, not just on Nasdaq.  The exchange’s/CCP’s relatively thin contribution to the default fund arguably reduces its incentive to get its margin model right.  Evaluating whether Nasdaq’s relatively minor exposure to default risk led it to undermargin requires a more thorough analysis of its margin model–a very complex exercise that is impossible given what little we know about the model.

But this all brings me back to themes I flogged to the collective shrug of many–indeed almost all–of the regulatory and legislative community back in the aftermath of the Crisis, when clearing was the silver bullet for future crises.   Clearing is all about the allocation and pricing of counterparty credit risk.  Evaluation of counterparty credit risk in a derivatives context requires a detailed understanding of the price risks of the cleared products, and dependencies between these price risks and the balance sheet risks of participants in cleared markets.  Classic information problems–adverse selection and moral hazard (too little skin in the game)–make risk sharing costly, and can lead to the mispricing of risk.

The forensics about Aas blowing up real good, and the lessons learned from that experience, should focus on those issues.  Alas, I see little recognition of that in the media coverage of the episode, and betting on form, I would wager that the same is true of regulators as well.

The Aas blow up should be a salutary lesson in how clearing really works, what it can do, and what it can’t.   Cynic that I am, I’m guessing that it won’t be.  And if I’m right, the next time could be far, far worse.


September 5, 2018

Nothing New Under the Sun, Ag Processing and Trading Edition

Filed under: Commodities,Economics,Politics,Regulation — cpirrong @ 2:30 pm

New Jersey senator Cory Booker has introduced legislation to impose “a temporary moratorium on mergers and acquisitions between large farm, food, and grocery companies, and establish a commission to strengthen antitrust enforcement in the agribusiness industry.”  Booker frets about concentration in the industry, noting that the four-firm concentration ratios in pork processing, beef processing, soybean crushing, and wet corn milling are upwards of 70 percent, and four major firms “control” 90 percent of the world grain trade.

My first reaction is: where has Booker been all these years?  This is hardly a new phenomenon.  Exactly a century ago–starting in 1918–in response to concerns about, well, concentration in the meat-packing industry, the Federal Trade Commission published a massive six-volume study of the industry.  The main theme was that the industry was controlled by five major firms.  A representative subject heading in this work is “[m]ethods of the five packers in controlling the meat-packing industry.”  “The five packers” is a recurring refrain.

The consolidation of the packing industry in the United States in the late-19th and early-20th centuries was a direct result of the transportation revolution, notably the development of railroads and refrigeration technology that permitted the exploitation of economies of scale in packing.   The industry was not just concentrated in the sense of having a relatively small number of firms–it was geographically concentrated as well, with Chicago assuming a dominant role in the 1870s and later, largely supplanting earlier packing centers like Cincinnati (which at one time was referred to as “Porkopolis”).

In other words, concentration in meat-packing has been the rule for well over a century, and reflects economies of scale.

Personal aside: as a PhD student at Chicago, I was a beneficiary of the legacy of the packing kings of Chicago: I was the Oscar Mayer Fellow, and the fellowship paid my tuition and stipend.  My main regret: I never had a chance to drive the Wienermobile (which should have been a perk!).  My main source of relief: I never had to sing an adaptation of the Oscar Mayer Wiener Song: “Oh I wish I were an Oscar Mayer Fellow, that’s what I really want to be.”

Back to the subject at hand!

Booker also frets about vertical integration, and this is indeed a difference between the 2018 meat industry and the 1918 version: as the Union Stockyards in Chicago attested–by the smell, if nothing else–the big packers did not operate their own feedlots, but bought livestock raised in the country and shipped to Chicago for processing.

I am a skeptic about market power-based explanations of vertical integration, and there is no robust economic theory that demonstrates that vertical integration is anti-competitive.  The models that show how vertical integration can be used to reduce competition tend to be highly stylized toys dependent on rather special assumptions, and hence are very fragile and don’t really shed much light on the phenomenon.

Transactions cost-based theories are much more plausible and empirically successful, and I would imagine that vertical integration in meat packing is driven by TCE considerations.  I haven’t delved into the subject, but I would guess that vertical integration enhances quality control and monitoring, and reduces the asymmetric information problems that are present in spot transactions, where a grower has better information about the quality of the cattle, and the care, feeding, and growing conditions than a buyer.

I’d also note that some of the other industries Booker mentions–notably bean and corn processing–have not seen upstream integration at all.

This variation in integration across different types of commodities suggests that transactional differences result in different organizational responses.  Grain and livestock are very different, and these likely give rise to different transactions costs for market vs. non-market transactions in the two sectors.  It is difficult to see how the potential for monopsony power differs across these sectors.

Insofar as the major grain traders are concerned, again–this is hardly news.  It was hardly news 40 years ago when Dan Morgan wrote Merchants of Grain.

Furthermore, Booker’s concerns seem rather quaint in light of the contraction of merchant margins, about which I’ve written a few posts.  Ironically, as my most recent ABCD post noted, downstream vertical integration by farmers into storage and logistics is a major driver of this trend.

To the extent that consolidation is in play in grains (and also in softs, such as sugar), it is a reflection of the industry’s travails, rather than driven by a drive to monopolize the industry.  Consolidation through merger is a time-tested method for squeezing out excess capacity in a static or declining industry.

Booker’s bill almost certainly has no chance of passage.  But it does reflect a common mindset in DC.  This is a mindset that is driven by simplistic understandings of the drivers of industrial structure, and is especially untainted by any familiarity with transactions cost economics and what it has to say about vertical integration.

