Streetwise Professor

December 5, 2018

Judge Sullivan Channels SWP, and Vindicates Don Wilson and DRW

Filed under: Derivatives,Economics,Exchanges,Regulation — cpirrong @ 10:52 am

After two years of waiting following the trial, and five years since the filing of a complaint accusing them of manipulation, Don Wilson and his firm DRW have been smashingly vindicated by the decision of Judge Richard J. Sullivan (now on the 2nd Circuit Court of Appeals).

Since it’s been so long, and you have probably forgotten, the CFTC accused DRW and Wilson of manipulating IDEX swap futures by entering large numbers (well over 1000) of orders to buy the contract during the 15 minute window used to determine the daily settlement price.  These bids were an input into the settlement price determination, and the CFTC claimed that they were manipulative, and intended to “bang the close.”  The bids were above the contemporaneous prices in the OTC swap market.

The Defendants claimed that the bids were completely legitimate, and that they hoped the bids would be executed, because the contract was mispriced owing to a fundamental difference between a cleared, marked-to-market, daily-margined futures contract and an uncleared swap.  The former has a “convexity bias” and the latter doesn’t.  DRW did some IDEX deals with MF Global and Jefferies at rates close to the OTC swap rate, which it thought were an arbitrage opportunity, and it wanted to do more.  And, of course, DRW received margin inflows to the extent that the contract settlement price reflected the convexity effect: thus, to the extent that the bids moved the settlement price in that direction, they expedited the realization of the arbitrage profit.
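
To see the convexity point concretely, here is a minimal simulation sketch under invented assumptions (a stylized linear rate contract, symmetric zero-drift rate shocks, made-up notional and volatility; this is not the IDEX contract or DRW's actual position): the short in the daily-margined contract reinvests margin inflows when rates rise and finances outflows cheaply when rates fall, and so is systematically better off than the short in an otherwise identical uncleared swap.

```python
import numpy as np

# Stylized comparison of a short daily-margined futures position and a short
# uncleared swap on an interest rate. All parameters are hypothetical.

rng = np.random.default_rng(0)
n_paths, n_days = 50_000, 250
notional = 100.0                  # P&L per 1.00 move in the rate
r0 = 0.03                         # starting rate
daily_vol = 0.05 / np.sqrt(250)   # ~5% annualized rate volatility

shocks = rng.normal(0.0, daily_vol, size=(n_paths, n_days))
rates = r0 + np.cumsum(shocks, axis=1)

# Short futures: each day's variation margin inflow (rate up) is reinvested,
# and each outflow (rate down) is financed, at the prevailing rate until
# expiry. The uncleared swap short settles once, at expiry, with no interim
# cash flows.
daily_pnl = notional * shocks
days_left = n_days - np.arange(1, n_days + 1)
futures_terminal = np.sum(daily_pnl * (1.0 + rates) ** (days_left / 250.0), axis=1)
swap_terminal = notional * (rates[:, -1] - r0)

advantage = (futures_terminal - swap_terminal).mean()
print(f"average advantage of the short futures over the short swap: {advantage:.4f}")
# The number is reliably positive: the daily-margined short is worth more, so
# the futures rate should trade at a spread to the swap rate, not equal to it.
```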

Here was my take in September, 2013:

Basically, there’s an advantage to being short the futures compared to being short the swap.  If interest rates go up, the short futures position profits, and the short can invest the resulting variation margin inflow at the higher interest rate.  If interest rates go down, the short futures position loses, but the short can borrow to cover the margin call at a low interest rate.  The  swap short can’t play this game because the OTC swap is not marked-to-market.  This advantage of being short the future should lead to a difference between the futures yield and the swap yield.

DRW recognized this difference between the swap and the futures.  Hence, it did not enter quotes into the futures market that were equal to swap yields.  It entered quotes at a differential to the swap rate, to reflect the convexity adjustment.  IDC used these bids to determine the settlement price, and hence daily variation margin payments.  Thus, the settlement prices reflected the convexity adjustment.  Not 100 percent, because DRW was trying to make money arbing the market.  But the settlement prices were closer to fair value as a result of DRW’s quotes than they would have been otherwise.

CFTC apparently believes that the swap futures and the swaps are equivalent, and hence DRW should have been entering quotes equal to swap yields.  By entering quotes that differed from swap rates, DRW was distorting the settlement price, in the CFTC’s mind anyways.

Put prosaically, in a way that Gary Gensler (the lover of apple analogies) can understand, CFTC is alleging that apples and oranges are the same, and that if you bid or offer apples at a price different than the market price for oranges, you are manipulating.

Seriously.

The reality, of course, is that apples and oranges are different, and that it would be stupid, and perhaps manipulative, to quote apples at the market price for oranges.

Here’s Judge Sullivan’s analysis:

[t]here can be no dispute that a cleared interest rate swap contract is economically distinguishable from, and therefore not equivalent to, an uncleared interest rate swap, even when the two contracts otherwise have the same price point, duration, and notional amount.  Put another way, because there is some additional value to the long party . . . in a cleared swap that does not exist in an uncleared swap, the economic value of the two contracts are distinct.

Pretty much the same, but without the snark.

But Judge Sullivan’s ruling was not snark-free!  To the contrary:

It is not illegal to be smarter than your counterparties in a swap transaction, nor is it improper to understand a financial product better than the people who invented that product.

I also wrote:

In other words, DRW contributed to convergence of the settlement price to fair value relative to swaps.  Manipulative acts cause a divergence between the settlement price and fair value.

. . . .

In a sane world-or at least, in a world with a sane CFTC (an alternative universe, I know)-what DRW did would be called “arbitrage” and “contributing to price discovery and price efficiency.”

Judge Sullivan agreed: “Put simply, Defendants’ explanation of their bidding practices as contributing to price discovery in an illiquid market makes sense.”

Judge Sullivan also excoriated the CFTC and lambasted its case.  He blasted it for trying to read the artificial price element out of manipulation law (“artificial price” being one of four elements established in several cases, including inter alia Cargill v. Hardin, and more recently in the 2nd Circuit, in Amaranth–a case in which I was an expert).  Relatedly, he slammed it for conflating intent and artificiality.  All of these criticisms were justified.

It is something of a mystery as to why the CFTC chose this case to make its stand on manipulation.  As I noted even before it was formally filed (my post was in response to DRW’s motion to enjoin the CFTC from filing a complaint), the case was fundamentally flawed–and that’s putting it kindly.  It was doomed to fail, but the CFTC pursued it with Ahab-like zeal, and pretty much suffered the same ignominious fate.

What will be the follow-on effects of this?  Well, for one thing, I wonder whether this will get the CFTC to re-think its taking manipulation cases to Federal court, rather than adjudicating them internally in front of agency ALJs.  For another, I wonder if this will make the CFTC more gun-shy about bringing major manipulation actions–even solid ones.  Losing a bad case should not be a deterrent to bringing good ones, but the spanking that Judge Sullivan delivered is likely to leave CFTC Enforcement–and the Commission–quite chary of running the risk of another one any time soon.  And since enforcement officials are strongly incentivized to, well, enforce, they will direct their energies elsewhere.  I would therefore not be surprised to see yet a further uptick in spoofing actions, an area where the Commission has been more successful.

In sum, the wheels of justice indeed ground slowly in this case, but in the end justice was done.  Don Wilson and DRW did nothing wrong, and the person who matters–Judge Sullivan–saw that and his decision demonstrates it clearly.


November 24, 2018

This Is What Happens When You Slip Picking Up Nickels In Front of a Steamroller

Filed under: Commodities,Derivatives,Energy,Exchanges — cpirrong @ 7:14 pm

There are times when going viral is good.  There are times it ain’t.  This is one of those ain’t times.  Being the hedgie equivalent of Jimmy Swaggart, delivering a tearful apology, is not a good look.

James Cordier ran a hedge fund that blowed up real good.   The fund’s strategy was to sell options, collect the premium, and keep fingers crossed that the markets would not move bigly.  Well, OptionSellers.com sold NG and crude options in front of major price moves, and poof! Customer money went up the spout.

Cordier refers to these price moves as “rogue waves.”  Well, as I said in my widowmaker post from last week, the natural gas market was primed for a violent move: low inventories going into the heating season made the market vulnerable to a cold snap, which duly materialized, and sent the market hurtling upwards.   The low pressure system was clearly visible on the map, and the risk of big waves was clear: a rogue wave out of the blue this wasn’t.

As for crude, the geopolitical, demand, and output (particularly Permian) risks have also been crystallizing all autumn.  Again, this was not a rogue wave.

I’m guessing that Cordier was short natural gas calls, and short crude oil puts, or straddles/strangles on these commodities.  Oopsie.

Selling options as an investment strategy is like picking up nickels in front of a steamroller.  You can make some money if you don’t slip.  If you slip, you get crushed.  Cordier slipped.

Selling options as a strategy can be appealing.  It’s not unusual to pick up quite a few nickels, and think: “Hey.  This is easy money!” Then you get complacent.  Then you get crushed.

Selling options is effectively selling insurance against large price moves.  You are rewarded with a risk premium, but that isn’t free money.  It is the reward for suffering large losses periodically.
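
To make the nickels-and-steamroller point concrete, here is a toy simulation of repeatedly selling an out-of-the-money strangle.  The strikes, premium, and jump process are all invented, and nothing here is calibrated to the NG or crude positions OptionSellers.com actually held.

```python
import numpy as np

# "Picking up nickels" simulation: sell an out-of-the-money strangle over and
# over, collect a small premium, and absorb the payoff when the underlying
# makes a large move. Everything below is hypothetical.

rng = np.random.default_rng(1)
n_periods = 240            # e.g. 240 "months" of re-selling the same structure
s0 = 100.0
premium = 1.0              # premium collected per strangle (both legs)
k_put, k_call = 90.0, 110.0

# Returns are mostly quiet, but occasionally jump (a fat-tailed mixture).
quiet = rng.normal(0.0, 0.03, n_periods)
jump = rng.normal(0.0, 0.25, n_periods) * (rng.random(n_periods) < 0.03)
s_terminal = s0 * np.exp(quiet + jump)

payoff = np.maximum(s_terminal - k_call, 0) + np.maximum(k_put - s_terminal, 0)
pnl = premium - payoff
cumulative = np.cumsum(pnl)

print("periods with a gain:", int((pnl > 0).sum()), "of", n_periods)
print("worst single-period loss:", round(float(pnl.min()), 2))
print("total P&L:", round(float(cumulative[-1]), 2))
# Typical output: the vast majority of periods book the full premium, but a
# handful of jump periods give back years' worth of "nickels" at once.
```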

It’s not just neophytes that get taken in.  In the months before Black Monday, floor traders on CBOE and CME thought shorting out-of-the-money, short-dated options on the S&Ps was like an annuity.  Collect the premium, watch them expire out-of-the-money, and do it again.   Then the Crash of ’87 happened, and all of the modest gains that had accumulated disappeared in a day.

Ask Mr. Cordier–and his “family”–about that.

 


November 17, 2018

Read Financial Journalism For the Facts, Not the Analysis

Filed under: Commodities,Derivatives,Economics,Energy — cpirrong @ 7:19 pm

One of the annoying things about journalism is its predilection to jam every story into an au courant narrative.  Case in point: this Bloomberg story attributing a fall in bulk shipping rates (as measured by the Baltic Freight Index) to the trade war.  Leading the story is the fact that iron ore and coal charter rates have fallen about 40 percent since August.  The connection between these segments in particular and the trade war is hard to fathom, and the article really doesn’t try to make the case, beyond quoting a shipping industry flack.

An earlier version of the story included a few paragraphs (deleted in the version now online) about grain shipping, stating that grain charter rates had also fallen, since the decline in shipments from the US to China had depressed the rates for smaller ships.  It was not clear from the muddled writing whether “smaller ships” just meant that smaller vessels are used to carry grain than to carry ore or coal, or that among grain carriers the smaller ones have been hit hardest.  If the former, it’s by no means clear that the trade war should reduce shipping rates for most grain carriers.  Indeed, by disrupting logistics through reducing shipments out of the US, Chinese restrictions on US oilseed imports have forced longer, less efficient voyages, which effectively reduces shipping supply and is bullish for rates.  If the latter, yes, it is possible that the demand for smaller ships that normally operate from the USWC to China has fallen, but this can hardly explain a fall in the Baltic Index, which is based on Capesize, Panamax, and Supramax voyages, not (as of March 2018) Handymax, let alone Handysize, vessels.  (Perhaps this is why the paragraphs disappeared.)

Bulk shipping rates are used as an indicator of world economic activity: Lutz Kilian pioneered the use of freight rates as a proxy for world economic conditions.  Thus, it’s more likely that the decline in the BFI is a harbinger of slowing global growth–and growth in China in particular.  There are other indications that this is happening.

Yes, the trade war may be impacting the Chinese economy, but it is more likely that it is just the icing on the cake, with the main ingredients of any Chinese decline (which is indicated by weakening asset prices and lower official GDP numbers, though those always must be taken with mines of salt) being structural and financial imbalances.

If you are going to look to freight markets for evidence of the impact of the trade war, it would be better to look at container rates, which have actually been increasing robustly while bulk rates have declined.

While I’m on the subject of pet peeves relating to journalism, another Bloomberg story comes to mind.  This one is about oil hedging:

The plunge in oil prices may finally make oil producers’ hedging contracts into a financial winner for 2018.

After more than a year of surging prices made the contracts a drag on profits, the slide in West Texas Intermediate crude to around $55 a barrel this month means some of the hedges are edging toward profitability, said Anastacia Dialynas, a Bloomberg NEF analyst.

Uhm, that’s not the point.  Just as this article misses the point:

There’s a downside to oil prices being up that could cost the industry more than $7 billion.

When crude markets slumped, explorers used hedging contracts to lock in payments for future barrels to ride out prices that fell as low as $27 a barrel in 2016. Now, as global tensions and OPEC supply cuts drive prices toward $70 in New York, those financial insurance policies have become a drag on profits, limiting some companies from cashing in on the rally.

Even the title of this week’s article is idiotic: “Hedging Bets.”  What would those be, exactly?  A “hedging bet” (as distinguished from “hedging your bets”) is pretty much an oxymoron.  If a hedge is any kind of bet, it is a bet on the basis–but that’s not what these articles are talking about.  They focus on flat prices.

The point of these contracts is to reduce exposure to flat prices, and to reduce the sensitivity of revenue to price fluctuations.  The hedger gives up the upside in high price environments to pay for a cushion on the downside in low price environments.  Thus, if anything, these articles show that the hedges are performing as expected.  They are in the money in low price environments, and out of the money in high price ones, thereby offsetting the vicissitudes of revenues from oil production.
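
A toy example makes the point.  Assume a producer who has sold a hypothetical fixed-price swap on 1,000 barrels at $60 (all numbers invented for illustration): the “hedge loss” in a high price environment is just the flip side of the extra revenue on the physical barrels.

```python
# Hedge P&L only makes sense alongside the physical position it offsets.
# A hypothetical producer locks in $60/bbl on 1,000 barrels via a short swap.

barrels = 1_000
swap_price = 60.0  # hypothetical fixed price locked in

for spot in (27.0, 55.0, 70.0):  # low, middling, high price scenarios
    physical_revenue = barrels * spot
    hedge_pnl = barrels * (swap_price - spot)  # short swap: gains when spot falls
    total = physical_revenue + hedge_pnl
    print(f"spot ${spot:>5.2f}: physical ${physical_revenue:>9,.0f}  "
          f"hedge ${hedge_pnl:>9,.0f}  total ${total:>9,.0f}")
# Total revenue is $60,000 in every scenario: the hedge is doing its job,
# whether the derivatives leg shows an accounting gain or an accounting loss.
```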

The problem with journalism regarding hedging (and these articles are just the latest installments in a large line of clueless pieces) is that it doesn’t view things holistically.  It views the derivatives in isolation, which is exactly the wrong thing to do.

Journalists are not the only ones to commit this error.  Some financial analysts hammer companies that show big accounting losses on hedge positions.  “The company would have made $XXX more if it hadn’t hedged.  Dumb management!” Er, this requires the ability to predict prices, and if you can do this, you wouldn’t be hedging–and if it’s so easy, you shouldn’t be a financial analyst, but a fabulously wealthy trader living large on a yacht that would make a Russian oligarch jealous.

Derivatives losses deserve scrutiny when they are not (approximately) offset by gains elsewhere.  This can occur if the positions are actually speculative, or when there is a big move in the basis.  In the latter case, the relevant question is whether the hedge was poorly designed, and involved more basis risk than necessary, or whether the story should be filed under “stuff happens.”

Which brings me to a recommendation regarding consumption of most financial journalism.  Look at it as a source of factual information that you can analyze using solid economics, NOT as a source of insightful analysis.  Because too many financial journalists wouldn’t know solid economics if it was dropped on them from a great height.


November 14, 2018

Return of the Widowmaker–The Theory of Storage in Action

Filed under: Commodities,Derivatives,Economics,Energy — cpirrong @ 7:37 pm

I’m old enough to remember when natural gas futures–and the March-April spread in particular–were known as the widowmakers.  The volatility in the flat price and especially the spread could crush you in an instant if you were caught on the wrong side of one of the big movements.

Then shale happened, and the increase in supply, and in particular the increase in the elasticity of supply, dampened flat price volatility.  The buildup in production and relatively temperate weather encouraged the buildup in inventories, which helped tame the HJ spread.  But the storage build in 2018 was well below historical averages–a 15 year low.  Add in a dash of cold weather, and the widowmaker is back, baby.

To put some numbers to it, today the March flat price was up 76 cents/mmbtu, and the HJ spread spiked 71.1 cents.  The spread settled yesterday at $0.883 and today at $1.594.  So for you bull spreaders–life is good.  Bear spreaders–not so much.

The March-April spread is volatile for structural reasons, notably the seasonality of demand combined with relatively inflexible output in the short run.  As I tell my students, the role of storage is to move stuff from when it’s abundant to when it’s scarce–but you can only move in one direction, from the present to the future.  You can’t move from the future to the present.  Given the seasonality of demand, gas is scarce in the winter and abundant in the spring, so carrying inventory from winter to spring would move supply from when it’s scarce to when it’s abundant.  You don’t want to do that, so the best you can do is limit what you carry over the seasonal boundary.

Backwardation is the price signal that gives the incentive to do that: a March price above the April price tells you that you are locking in a loss by carrying inventory from March to April.   Given the seasonality in demand, the HJ spread should therefore be backwardated in most years, and indeed that’s the case.

But this has implications for the volatility in the spread, and its susceptibility to big jumps like experienced today.  Inventory is what connects prices today with prices in the future.  With it being optimal to carry little or no inventory (a “stockout”)  from winter to spring, the last winter month contract price (March) has little to connect it with the first spring contract price (April).  Thus, a transient demand shock–and weather shocks are transient (which is why the world hasn’t burned up or frozen)–during the heating season affects that season’s prices but due to the lack of an inventory connection little of that shock is communicated to spring prices.

And that’s exactly what we saw today.  Virtually all the spread action was driven by the March price move–a 76 cent move–while the April price barely budged, moving up less than a nickel.

That’s the theory of storage in action.  Spreads price constraints.  For example, Canadian crude prices are in the dumper now relative to Cushing because of the constraint on getting crude out of the frozen North.  The March-April natty spread prices the Einstein Constraint, i.e., the impossibility of time travel.  We can’t bring gas from spring 2019 to winter 2019.  Given the seasonality of demand, the best we can do is to NOT bring gas from winter 2019 to spring 2019.  Winter prices must adjust to ration the supply available before the spring (existing inventory and production through March).  That supply is relatively fixed (inventory is definitely fixed, and production is pretty much fixed over that time frame), so an increase in demand due to unexpected cold winter weather can’t be accommodated by an increase in supply; it must be rationed by an increase in price.  The Einstein Constraint plus relatively inflexible production plus seasonal demand combine to make the inter-seasonal spread an SOB.
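
Here is a minimal two-period sketch of that logic, with invented demand curves and supply numbers: winter supply is fixed, nothing can be carried backward from spring, so a winter demand shock lands entirely on the winter price, and hence on the spread.

```python
# Two-period sketch of the "Einstein constraint": gas can be carried from
# winter to spring but not from spring to winter. Near a stockout, a winter
# demand shock moves the winter (March) price but barely touches the spring
# (April) price, so the whole shock lands on the spread. All numbers invented.

def winter_price(demand_shift, supply=100.0):
    # Inverse demand p = a + shock - b*q, with winter supply fixed at existing
    # inventory plus production that is already largely determined.
    a, b = 8.0, 0.05
    return a + demand_shift - b * supply

def spring_price(supply=120.0):
    a, b = 6.5, 0.03
    return a - b * supply

for shock in (0.0, 0.75):   # normal winter vs. a cold snap
    march = winter_price(shock)
    april = spring_price()  # unchanged: no gas can travel back from April
    print(f"shock {shock:>4.2f}: March {march:.3f}  April {april:.3f}  "
          f"H-J spread {march - april:+.3f}")
# The cold snap raises March by the full 0.75 while April is flat, echoing the
# direction of the actual move: March up ~76 cents, April up less than a nickel.
```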

There will be a test.  Math will be involved.


October 18, 2018

Ticked Off About Spoofing? Consider This

Filed under: Commodities,Derivatives,Economics,Exchanges,Politics,Regulation — cpirrong @ 6:51 pm

An email from a legal academic in response to yesterday’s post spurred a few additional thoughts re spoofing.

One of my theories of spoofing is that it is a way to improve one’s position in the queue at the best bid or offer.  Why does one stand in a queue?  Why does one want to be closer to the front?

Simple: because there is a rent there to capture.  Where does the rent come from?  When what you are queuing for is underpriced, likely due to some price control.  Think of gas lines, or queues for sausage in the USSR.

In market making, the rent exists because the benefit from executing at the bid or offer exceeds the cost.  The cost arises from (a) adverse selection costs, and (b) inventory cost/risk and other costs of participation.  What is the source of the price control?  The tick size.

Exchanges set a minimum price increment–the “tick.”  When the tick size exceeds the costs of making a market, there is a rent.  This makes it beneficial to increase the probability of execution of an at-the-market limit order, i.e., if the tick size exceeds the cost of executing a passive order, it pays to game to move up in the queue.  Spoofing is one way of gaming.

This has a variety of implications.

One implication is in the cross section: spoofing should be more prevalent when the non-adverse selection component of the spread (which is measured by temporary price movements in response to trades) is large.  Relatedly, this implies that spoofing should be more likely, the more negatively autocorrelated are transaction prices, i.e., the bigger the bid-ask bounce.

Another implication is in the time series.  Adverse selection costs can vary over time.  Spoofing should be more prevalent during periods when adverse selection costs are low.  These should also be periods of unusually large negative autocorrelations in transaction prices.
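
Both of these implications turn on measuring the bid-ask bounce.  One standard way to do that is a Roll (1984)-style estimator, which backs the effective spread out of the negative autocovariance of transaction price changes.  A sketch on simulated trades follows (the spread and price process here are invented; with real data you would feed in actual transaction prices):

```python
import numpy as np

# Roll-style estimate of the effective spread from the bid-ask bounce, i.e.,
# the negative autocovariance of transaction price changes. Simulated data.

rng = np.random.default_rng(2)
n_trades = 50_000
half_spread = 0.125          # hypothetical: half of a 0.25 tick
fundamental = np.cumsum(rng.normal(0.0, 0.02, n_trades))  # efficient price
trade_sign = rng.choice([-1.0, 1.0], n_trades)            # random buys/sells
prices = fundamental + trade_sign * half_spread           # trades bounce between quotes

dp = np.diff(prices)
autocov = np.cov(dp[1:], dp[:-1])[0, 1]
roll_spread = 2.0 * np.sqrt(-autocov) if autocov < 0 else 0.0

print("estimated effective spread:", round(float(roll_spread), 4))  # ~0.25
# The bigger this non-adverse-selection component relative to the tick, the
# larger the rent at the top of the queue, and, per the argument above, the
# stronger the incentive to spoof for queue position.
```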

Another implication is that if you want to reduce spoofing  . . .  reduce the tick size.  Given what I just discussed, tick size reductions should be focused on instruments with a bigger bid/ask bounce/larger non-adverse selection driven spread component.

That is, why police the markets and throw people in jail?  Mitigate the problem by reducing the incentive to commit the offense.

This story also has implications for the political economy of spoofing prosecution (which was the main thrust of the email I received).  HFT/algo traders who desire to capture the rent created by a tick size that exceeds adverse selection cost should complain the loudest about spoofing–and are most likely to drop the dime on spoofers.  Casual empiricism supports at least the first of these predictions.

That is, as my correspondent suggested to me, not only are spoofing prosecutions driven by ambitious prosecutors looking for easy and unsympathetic targets, they generate political support from potentially politically influential firms.

One way to test this theory would be to cut tick sizes–and see who squeals the loudest.  Three guesses as to whom this might be, and the first two don’t count.


October 17, 2018

The Harm of a Spoof: $60 Million? More Like $10 Thousand

Filed under: Commodities,Derivatives,Economics,Exchanges,Regulation — cpirrong @ 4:08 pm

My eyes popped out when I read this statement regarding the DOJ’s recent criminal indictment (which resulted in some guilty pleas) for spoofing in the S&P 500 futures market:

Market participants that traded futures contracts in these three markets while the spoof orders distorted market prices incurred market losses of over $60 million.

$60 million in market losses–big number! For spoofing! How did they come up with that?

The answer is embarrassing, and actually rather disgusting.

The DOJ simply calculated the notional value of the contracts that were traded pursuant to the alleged spoofing scheme.  They took the S&P 500 futures price (e.g., 1804.50), multiplied that by the dollar value of a price point ($50), and multiplied that by the “approximate number of fraudulent orders placed” (e.g., 400).

So the defendants traded futures contracts with a notional value of approximately $60+ million.  For the DOJ to say that anyone “incurred market losses of over $60 million” based on this calculation is complete and utter bollocks.  Indeed, if someone touted that their trading system earned market profits of $60 million based on such a calculation in order to get business from the gullible, I daresay the DOJ and SEC would prosecute them for fraud.

This exaggeration is of a piece with the Sarao indictment, which claimed that his spoofing caused the Flash Crash.

And of course the financial press credulously regurgitated the number the DOJ put out.

I know why DOJ does this–it makes the crime look big and important, and likely matters in sentencing.  But quite frankly, it is a lie to claim that this number accurately represents in any way, shape, or form the economic harm caused by spoofing.

This gets to the entire issue of who is damaged by spoofing, and how.  Does spoofing induce someone who would otherwise not have entered an aggressive order to cross the spread and incur the bid/ask spread?  Does it cause someone to cancel a limit order, and therefore lose the opportunity to trade against an aggressive order and thereby earn the spread (the realized spread, not the quoted spread, in order to account for losses to better-informed traders)?

Those are realistic theories of harm, and they imply that the economic harm per contract is on the order of a tick in a liquid market like the ES.  That is, per contract executed as a result of the spoof, the damage is $0.25 (the tick size) times $50 (the value of an S&P point)–a whopping $12.50.  So, pace the DOJ, the ~800 “fraudulent orders placed” caused economic harm of about 10,000 bucks, not 60 mil.  Maybe $20,000, under the theory that in a particular spoof, someone lost from crossing the spread, and someone else lost out on the opportunity to earn the spread.  (Though interestingly, from a social perspective, that is a transfer, not a true loss.)
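
The arithmetic, using the figures reported above (the DOJ's example price, the $50 point value, the quarter-point tick, and roughly 800 orders), is straightforward:

```python
# The DOJ's "market losses" figure is just notional value: price times point
# value times order count. The tick-based theory of harm multiplies the tick
# instead. Figures are the approximate ones quoted in the post.

point_value = 50.0        # dollars per S&P 500 index point
price = 1804.50           # example futures price from the indictment
tick = 0.25               # minimum price increment
n_orders = 800            # approximate number of "fraudulent orders"

doj_per_contract = price * point_value          # notional per contract
harm_per_contract = tick * point_value          # $12.50 per contract

print(f"per contract, DOJ method:  ${doj_per_contract:,.2f}")
print(f"per contract, tick harm:   ${harm_per_contract:,.2f}")
print(f"~800 orders, DOJ method:   ${n_orders * doj_per_contract:,.0f}")
print(f"~800 orders, tick harm:    ${n_orders * harm_per_contract:,.0f}  "
      f"(${2 * n_orders * harm_per_contract:,.0f} if both sides are counted)")
# The ratio of the two methods is price / tick = 7,218x -- the "three orders
# of magnitude between friends."
```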

But $10,000 or $20,000 looks rather pathetic, compared to say $60 million, doesn’t it?  What’s three orders of magnitude between friends, eh?

Yes, maybe the DOJ just included a few episodes in the indictment, because that is sufficient for a criminal prosecution and conviction.  But even a lot more of such episodes does not add up to a lot of money.

This is precisely why I find the expenditure of substantial resources to prosecute spoofing to be so dubious.  There is other financial market wrongdoing that is far more harmful, which often escapes prosecution.  Furthermore, efficient punishment should be sized to the harm.  People pay huge fines, and go to jail–for years–for spoofing.  That punishment is hugely disproportionate to the loss, under the theory of harm that I advance here.  So spoofing is over-deterred.

Perhaps there are other theories of harm that justify the severe punishments for spoofing.  If so, I’d like to hear them–I haven’t yet.

These spoofing prosecutions appear to be a case of the drunk looking for his wallet (or a scalp) under the lamppost, because the light is better there.  In the electronic trading era, spoofing is possible–and relatively cheap to detect ex post.  So just trawl through the trading data for evidence of spoofing, and voila!–a criminal prosecution is likely to appear.  A lot easier than prosecuting market power manipulations that can cause nine and ten figure market losses.  (For an example of the DOJ’s haplessness in a prosecution of that kind of case, see US v. Radley.)

Spoofing is the kind of activity that is well within the competence of exchanges to detect and punish using their ordinary disciplinary procedures.  There’s no need to make a federal case out of it–literally.

The time should fit the crime.  The Department of Justice wildly exaggerates the crime of spoofing in order to rationalize the time.  This is inefficient, and well, just plain unjust.


September 25, 2018

Default Is Not In Our Stars, But In Our (Power) Markets: Defaulting on Power Spread Trades Is Apparently a Thing

Filed under: Clearing,Commodities,Derivatives,Economics,Energy,Regulation — cpirrong @ 6:34 pm

Some other power traders–this time in the US–blowed up real good.  The blowup actually preceded the Aas Nasdaq default by some months, but it is just getting attention in the mainstream press today: a Houston-based power trading company–GreenHat–defaulted on long-term financial transmission rights (FTR) contracts in PJM.  FTRs are financial contracts whose cash flows derive from the spread between prices at different locations in PJM.  Locational spreads in power markets arise due to transmission congestion, so FTRs can be used to hedge the risk of congestion–or to speculate on it.  FTRs are auctioned regularly.  In 2015 GreenHat bought at auction FTRs for 2018.  These positions were profitable in 2015 and 2016, but improvements in PJM transmission caused them to go underwater substantially in 2018.  In June, GreenHat defaulted, and now PJM is dealing with the mess.
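
For those unfamiliar with the product, here is a stylized sketch of an FTR's cash flows, with invented prices.  (PJM actually settles FTRs on the congestion components of hourly locational marginal prices; the toy below uses the full locational spread for simplicity.)

```python
# Stylized FTR cash flow: a source-to-sink FTR pays (roughly) the hourly
# locational price spread times the MW quantity; when the spread goes the
# "wrong" way, the holder pays. Prices below are invented, not PJM data.

mw = 50.0  # size of the hypothetical FTR position

# Hourly locational marginal prices at the two nodes ($/MWh). The later hours
# show the spread collapsing, as a transmission upgrade would cause.
lmp_source = [25.0, 27.0, 30.0, 26.0, 24.0, 23.0]
lmp_sink   = [40.0, 45.0, 38.0, 27.0, 22.0, 21.0]

hourly_payoff = [mw * (sink - src) for src, sink in zip(lmp_source, lmp_sink)]
print("hourly FTR cash flows:", hourly_payoff)
print("total:", sum(hourly_payoff))
# Early (congested) hours pay the holder; once the constraint is relieved the
# sign flips -- roughly what sank GreenHat's long-dated positions when PJM's
# transmission improved.
```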

The cost of doing so is still unknown.  Under PJM rules, the organization is required to liquidate defaulted positions.  However, the bids PJM received for the defaulted portfolio were 4x-6x the prevailing secondary market price, due to the size of the positions, and the illiquidity of long-term FTRs–with “long term” being pretty much anything beyond a month.  Hence, PJM has asked FERC for a waiver of the requirement for immediate liquidation, and the PJM membership has voted to suspend liquidating the defaulted positions until November 30.

PJM members are on the hook for the defaulted positions.  The positions were underwater to the tune of $110 million as of June–and presumably this was based on market prices, meaning that the cost of liquidating these positions would be multiples of that.  In other words, this blow up could put Aas to shame.

PJM operates the market on a credit system, and market participants can be required to post additional collateral.  However, long-term FTR credit is determined only on an annual basis: “In conjunction with the annual update of historical activity that is used in FTR credit requirement calculations, PJM will recalculate the credit requirement for long-term FTRs annually, and will adjust the Participant’s credit requirement accordingly. This may result in collateral calls if requirements increase.”  Credit on shorter-dated positions is calculated more frequently: what triggered the GreenHat default was a failure to make the payment on its June FTR obligation.

This event is resulting in calls for a re-examination of  PJM’s FTR credit scheme.  As well it should!  However, as the Aas episode demonstrates, it is a fraught exercise to determine the exposure in electricity spread transactions.  This is especially true for long-dated positions like the ones GreenHat bought.

The PJM episode reinforces the Aas episode’s lessons about the challenges of handling defaults–especially of big positions in illiquid instruments.  Any auction is very likely to turn into a fire sale that exacerbates the losses that caused the default in the first place.  Moral of the story: mutualizing default risk (either through a CCP, or a membership organization like PJM) can impose big losses on the participants in the risk pool.

The dilemma is that the instruments in question can provide valuable benefits, and that speculators can be necessary to achieve these benefits.  FTRs are important because they allow hedging of congestion risk, which can be substantial for both generation and load: locational spreads can be very volatile due to a variety of factors, including the lack of storability of power, non-convexities in generation (which can make it very costly to reduce generation behind a constraint), and generation capacity constraints and inelastic demand (which make it very costly to increase generation or reduce consumption on the other side of the constraint).  So FTRs play a valuable hedging role, and in most markets financial players are needed to absorb the risk.  But that creates the potential for default, and the very factors that make FTRs valuable hedging tools can make defaults very costly.

FTR liquidity is also challenged by the fact that unlike hedging say oil price risk or corn price risk, where a standard contract like Brent or CBT corn can provide a pretty good hedge for everyone, every pair of locations is a unique product that is not hedged effectively by an FTR based on another pair of locations.  The market is therefore inherently fragmented, which is inimical to liquidity.  This lack of liquidity is especially devastating during defaults.

So PJM (and other RTOs) faces a dilemma.  As the Nasdaq event shows, even daily marking to market and variation margining can’t prevent defaults.  Furthermore, moving to a no-credit system (like a CCP) isn’t foolproof, and is likely to be so expensive that it could seriously impair the FTR market.

We’ve seen two default examples in electricity this past summer.  They won’t be the last, due to the inherent nature of electricity.

 


September 24, 2018

The SEC Commissioner’s Just So Story That Just Ain’t So

Filed under: Derivatives,Economics,Exchanges,Regulation — cpirrong @ 7:06 pm

SEC Commissioner Robert J. Jackson is getting a lot of attention for a policy speech he gave at George Mason University last week.  Alas, Commissioner Jackson betrays only a dim understanding of current stock markets and stock market history.  Indeed, perhaps the best summary of his speech would be the Artemus Ward quip: “It ain’t so much the things we don’t know that get us into trouble, it’s the things we do know that just ain’t so.”

Mr. Jackson has a just-so story that, well, just ain’t so.  In his story, once upon a time US stock markets were faithful guardians of the public interest.  Then, the SEC let them become for-profit firms, and it all went wrong:

Given power and a profit motive, even the most storied institutions will do what they must to maximize their wealth. And nowhere has this been more true than in our stock markets.

For over a century, exchanges were collectively owned not-for-profits, overseeing and organizing trading in America’s best-known companies. But about a decade ago, exchanges became private corporations, designed—perhaps even obligated—to maximize profits. Yet we at the SEC have far too often continued to treat the exchanges with the same kid gloves we applied to their not-for-profit ancestors. The result is that, even while one of our fundamental mandates is to encourage competition, the SEC has stood on the sidelines while enormous market power has become concentrated in just a few players. That’s a key reason why among our 13 public stock exchanges, 12 are owned by just three corporations. And that’s how the stock exchanges that are a symbol of American capitalism have developed puzzling practices that look nothing like the competitive marketplaces investors deserve.

. . .

First, one might wonder how our stock markets got here. The answer is that stock exchanges have been better at extracting rents than regulators have been at stopping them. As you all know, in 1934, the Nation struck a bargain with our stock exchanges: the Commission was created to oversee the markets, and in turn the exchanges were given wide latitude in organizing their affairs. For generations, this system served investors well. But then the world changed, and the SEC allowed exchanges to become for-profit corporations with both regulatory and profit-seeking mandates.

At the time, the Commission didn’t sufficiently contemplate the effects that decision might have; we simply said that we saw no reason to think that exchanges couldn’t play the role of regulator and pursue profit at the same time. Maybe we were wrong. Whatever one thinks about the benefits or drawbacks of those events, we should all agree that for-profit companies can be counted on to do one thing: pursue profit. And in for-profit hands, SEC oversight designed for not-for-profit exchanges can be dangerous.

Where to begin?

Well, I guess I should begin by saying for probably the billionth time (here’s one of them) that stock markets were not non-profits out of some charitable motive, or to ensure that they acted in the public interest by self-regulating markets free of conflict of interest and mercenary motive.  In fact, stock exchanges (and derivatives exchanges) adopted the not-for-profit form to protect the rents of their members.  Furthermore, the exchanges self-regulated in ways that maximized the profits of their members: it is beyond a joke to say that exchanges are better at extracting rents today than during the halcyon non-profit years.  Non-profit exchanges just extracted rents in different ways, and the rents did not flow through the exchange coffers.  These different ways included naked collusion–which the SEC tolerated for years, kid gloves indeed!–as well as entry restrictions (the number of members remaining fixed since the 19th century) and various rules advantaging intermediaries (especially specialists, but also brokers).

As for conflicts of interest–they were rife in Commissioner Jackson’s good old days.  The exchanges, as agents for their intermediary member-owners, had structural conflicts with the investing public.

Mr. Jackson argues that “modern exchanges tax ordinary investors.”  The implicit claim is that old time exchanges didn’t.  Ha! They just did it in different ways, and arguably levied far greater taxes then than now.

Why were the taxes arguably greater then?  The answer relates to another fundamental error in Jackson’s just so story: “enormous market power has become concentrated in just a few players. That’s a key reason why among our 13 public stock exchanges, 12 are owned by just three corporations.”  Er, prior to RegNMS, a little over a decade ago, and for the entire life of the SEC prior to that time, and prior to the formation of the SEC, the NYSE had a far more dominant position than any exchange does today.  Due to network effects, it basically had a lock on order flow for its listings.  Its market share was routinely above 85 percent, and that other 15 percent was basically cream skimming competition that the SEC only grudgingly accepted.

Again, the NYSE did not capture rents from this market power by charging higher prices and passing the revenues through to owners in the form of dividends.  But through broker cartels, and after the SEC finally bestirred itself to end the broker cartels, through entry limits and rules that advantaged members, it permitted its members to earn rents by charging higher prices for their services.

Indeed, the great benefit of RegNMS is that it undermined the liquidity network effect that largely immunized the NYSE against competition, and unleashed competition for order flow unprecedented in the history of US stock markets–or stock markets anywhere, for that matter.  Three (granting arguendo that 3 rather than 13 is the right number) is a helluva lot more competitive than one.

But Commissioner Jackson cannot see that the glass is at least 90 percent full: he frets over the 10 percent (or less) that is empty.  He laments “fragmentation.”

Yes.  As I have written, the “fragmentation” (aka “competition”) that has occurred post-RegNMS has its costs–some of which are the result of problematic features in RegNMS.  Others are inherent in any multi-market system.  Fragmentation creates arbitrage opportunities that some participants capture through spending real resources: this is probably socially wasteful.  Commissioner Jackson notes that these opportunities exist in part due to the lack of incentive of exchanges to invest in the public data feed: well, I’ve noted this public goods problem in the past (note the date–almost 5 years ago).  Yes, some have information advantages due now mainly to speed: well, back in the day, people on the floor had information advantages–and speed advantages–due to their proximity to where price discovery was taking place.  Take it as a law: there will always be a class of traders with information, access and speed advantages over the hoi polloi.

Some of these problems could be remedied by better regulation.  But despite the deficiencies of RegNMS, there is no doubt that it made US equity markets far more competitive, and that this has redounded to the benefit of ordinary investors–and pretty much the entire buy side, including institutions.  RegNMS dramatically reduced the “tax” that stock markets levied on investors, not increased it as Mr. Jackson apparently believes.

Commissioner Jackson questions whether the limited exposure to lawsuits that exchanges currently enjoy is justified.  That is a legitimate question, but Mr. Jackson’s motivation for asking it is completely off-base.  His fixation on for-profit again shines through: “Finally, we should take a hard look at whether it makes sense to allow for-profit exchanges to write the rules of the game for their customers and competitors while also enjoying immunity from civil liability.”  Mr. Jackson: it is equally questionable whether it makes sense “to allow non-profit exchanges to write the rules of the game for their customers and competitors while also enjoying immunity from civil liability.”

Commissioner Jackson also questions pricing practices: “Finally, SEC and FINRA rules for best execution have clearly left open opportunities for conflicts of interest that hurt investors. The reason is that exchanges offer controversial payments—they call them rebates—to brokers based on the volume of customer orders that broker sends to that exchange.”  This is a form of price competition.  Yes, there are agency issues involved here, but if anything these rebates reduce the rents that exchanges earn that exercise Commissioner Jackson so greatly.  Perhaps brokers don’t pass 100 percent of the rebates to their customers–but this is a distributive issue not an efficiency one, and competition between brokers mitigates this problem.

Perhaps in the category of “rebates” Commissioner Jackson is including maker-taker payments. But the interpretation of these payments–and the more prosaic order flow incentives Mr. Jackson describes–is greatly complicated by the fact that exchanges are multi-sided platforms.  It is well-known that the pricing policies of multi-sided platforms often involve cross-subsidies among customer groups (e.g., liquidity suppliers and liquidity demanders), and that these pricing strategies can be economically efficient.

US securities market structure could certainly be improved.  But reasonable improvements must be grounded in a reasonable understanding of the economics of exchanges.  Alas, one individual responsible for improving market structure is clearly operating from a seriously defective understanding.  Commissioner Jackson’s bugbear–for-profit exchanges–has, to a first approximation, nothing to do with whatever ails US markets.  He pines for an era that not only never existed, but which was in reality worse on almost every dimension that he criticizes modern markets for–competition, rent seeking, and conflicts of interest.

The SEC actually performed a public service–something not to be taken for granted for a public agency!–by breaking the liquidity network effect and opening stock markets to competition through the adoption of RegNMS.  Tweak RegNMS to improve market performance, Commissioner Jackson, rather than advocating proposals based on just so stories that just ain’t–and weren’t–so.


September 20, 2018

The Smoke is Starting to Clear from the Aas/Nasdaq Blowup

Filed under: Clearing,Commodities,Derivatives,Economics,Energy,Exchanges,Regulation — cpirrong @ 11:08 am

Amir Khwaja of Clarus has a very informative post about the Nasdaq electricity blow-up.

The most important point: Nasdaq uses SPAN to calculate initial margin (IM).  SPAN was a major innovation back in the day, but it is VERY long in the tooth now (2018 is its 30th birthday!).  Moreover, the most problematic part of SPAN is the ad hoc way it handles dependence risk:

  • Intra-commodity spreading parameters – rates and rules for evaluating risk among portfolios of closely related products, for example products with particular patterns of calendar spreads
  • Inter-commodity spreading parameters – rates and rules for evaluating risk offsets between related product

…..

CME SPAN Methodology Combined Commodity Evaluations

The CME SPAN methodology divides the instruments in each portfolio into groupings called combined commodities. Each combined commodity represents all instruments on the same ultimate underlying – for example, all futures and all options ultimately related to the S&P 500 index.

For each combined commodity in the portfolio, the CME SPAN methodology evaluates the risk factors described above, and then takes the sum of the scan risk, the intra-commodity spread charge, and the delivery risk, before subtracting the inter-commodity spread credit. The CME SPAN methodology next compares the resulting value with the short option minimum; whichever value is larger is called the CME SPAN methodology risk requirement. The resulting values across the portfolio are then converted to a common currency and summed to yield the total risk for the portfolio.

I would not be surprised if the handling of Nordic-German spread risk was woefully inadequate to capture the true risk exposure.  Electricity spreads are strange beasts, and “rules for evaluating risk offsets” are unlikely to capture this strangeness correctly, especially given that electricity markets have idiosyncrasies that one-size-fits-all rules are unlikely to capture.  I also conjecture that Aas knew this, and loaded the boat with this spread trade because he knew that the risk was grossly underpriced.
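
To illustrate the mechanism (and this is a caricature, not Nasdaq's or CME's actual parameter set), here is a toy SPAN-style calculation in which a generous inter-commodity spread credit assumes the legs hedge each other, and so collects far too little margin when the spread itself gaps:

```python
# Toy SPAN-style margin for a two-leg spread position. Scan ranges, the credit
# rate, and the "typical" spread move are all invented for illustration.

def span_like_margin(scan_risk_legs, spread_credit_rate):
    # Sum the per-leg scan risks, then hand back a percentage of the smaller
    # leg's risk as an inter-commodity spread credit for the offsetting legs.
    gross = sum(scan_risk_legs)
    credit = spread_credit_rate * min(scan_risk_legs)
    return gross - credit

nordic_scan = 1.0e6   # scan risk on the long Nordic leg (hypothetical, EUR)
german_scan = 1.2e6   # scan risk on the short German leg (hypothetical, EUR)
margin = span_like_margin([nordic_scan, german_scan], spread_credit_rate=0.80)
print(f"SPAN-style margin with an 80% spread credit: EUR {margin:,.0f}")

# Now let the spread itself gap -- something like the 17x-the-typical move
# reported for the Nordic-German spread. The loss is driven by the spread,
# which the credit largely assumed away.
typical_spread_move = 0.2e6   # invented P&L on a "typical" spread move
loss = 17 * typical_spread_move
print(f"loss on a 17x spread move:                   EUR {loss:,.0f}")
print(f"shortfall vs. margin collected:              EUR {loss - margin:,.0f}")
```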

There are reports that the Nasdaq margin breach at the time of default (based on mark-to-market prices) was not nearly as large as the €140 million hit to the default fund.  In these accounts, the bulk of the hit was due to the fact that the price at which Aas’ portfolio was auctioned off included a substantial haircut to prevailing market prices.

Back in the day, I argued that one of the real advantages to central clearing was a more orderly handling of defaulted portfolios than the devil-take-the-hindmost process in OTC bilateral markets (cf., the outcome of the LTCM disaster almost exactly 20 years ago–with the Fed midwifed deal being completed on 23 September, 1998). (Ironically spread trades were the cause of LTCM’s demise too.)

But the devil is in the details of the auction, and in market conditions at the time of the default–which are almost certainly unsettled, hence the default.  The CME was criticized for its auction of the defaulted Lehman positions: the bankruptcy trustee argued that the price CME obtained was too low, thereby harming the creditors.   The sell-off of the Amaranth NG positions in September, 2006 (what is it about September?!?) to JP Morgan and Citadel (if memory serves) was also at a huge discount.

Nasdaq has been criticized for allowing only 4 firms to bid: narrow participation was also the criticism leveled at CME and NYMEX clearing in the Lehman and Amaranth episodes, respectively.  Nasdaq argues that telling the world could have sparked panic.

But this episode, like Lehman and Amaranth before it, demonstrates the challenges of auctioning big positions.  Only a small number of market participants are likely to have the capital, or the risk appetite, to take on a big defaulted position in its entirety.  Thus, limited participation is almost inevitable, and even if Nasdaq had invited more bidders, there is room to doubt whether the fifth or sixth or seventh bidder would have been able to compete seriously with the four who actually participated.  Those who have the capital and risk appetite to bid seriously for big positions will almost certainly demand a big discount to compensate for the risk of holding the position until they can work it off.  Moreover, limited participation limits competition, which should exacerbate the underpricing problem.

Thus, even with a structured auction process, disposing of a big defaulted portfolio is almost inevitably something of a fire sale.  This is a risk borne by the participants in the default fund.  Although the exposure via the default fund is sometimes argued to be an incentive for the default fund participants to bid aggressively, this is unlikely because there are externalities: the aggressive bidder bears all the risks and costs, and provides benefits to the other members.  Free riding is a big problem.

In theory, equitizing the risk might improve outcomes.  By selling shares in the defaulted portfolio, no single bidder (or pair of bidders) would have to absorb the entire position, and the risk could be spread more efficiently: this could reduce the risk discount in the price.  But who would manage the portfolio?  What are the mechanics of contributing to IM and VM?  Would it be like a bad bank, existing as a zombie until the positions rolled off?

Another follow-up from my previous post relates to the issue of self-clearing.  On Twitter and elsewhere, some have suggested that clearing through a third party would have been an additional check.  Surely an FCM would be less likely to fall in love with a position than the trader who puts it on, but the effectiveness of the FCM as a check depends on its evaluation of risk, and it may be no smarter than the CCP that sets margins.  Furthermore, there are examples of FCMs having the same trade in their house account as one of their big customers–perhaps because they think the client is really smart and they want to free ride off his genius.  As a historical example, Griffin Trading had a big trade in the same instrument and direction as its biggest client.  The trade went pear-shaped, the client defaulted, and Griffin did too.

I also need to look to see whether Nasdaq Commodities uses the US futures clearing model, which does not segregate positions.  If it does, and if Aas had cleared through an FCM, it is possible that the FCM’s clients could have lost money as a result of his default.  This model has fellow-customer risk: by clearing for himself, Aas did not create such a risk.

I also note that the desire to expand clearing post-Crisis has made it difficult and more costly for firms to find FCMs.  This problem has been exacerbated by the Supplementary Leverage Ratio.  Perhaps the cost of clearing through an FCM appeared excessive to Aas, relative to the alternative of self-clearing.  Thus, if regulators blanch at the thought of self-clearing (not saying that they should), they should get serious about addressing the FCM cost issue, and regulations that inflate these costs but generate little offsetting benefit.

Again, this episode should spark (no pun intended!) a more thorough reconsideration of clearing generally.  The inherent limitations of margin models, especially for more complex products or markets.  The adverse selection problems that crude risk models can create.  The challenges of auctioning defaulted portfolios, and the likelihood that the auctions will become fire sales.  The FCM capacity issue.

The supersizing of clearing in the post-Crisis world has also supersized all of these concerns.  The Aas blowup demonstrates all of them.  Will CCPs and regulators take heed? Or will some future September bring us the mother of all blowups?


September 18, 2018

He Blowed Up Real Good. And Inflicted Some Collateral Damage to Boot

I’m on my way back from my annual teaching sojourn in Geneva, plus a day in the Netherlands for a speaking engagement.  While I was taking that European non-quite-vacation, a Norwegian power trader, Einar Aas, suffered a massive loss in cleared spread trades between Nordic and German electricity.  The loss was so large that it blew through Aas’ initial margin and default fund contribution to the clearinghouse (Nasdaq), consumed Nasdaq’s €7 million capital contribution to the default fund, and €107 million of the rest of the default fund–a mere 66 percent of the fund.  The members have been ordered to contribute €100 million to top up the fund.
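
In rough numbers, the default waterfall described above looks like this.  (The defaulter's own initial margin and default fund contribution, which are consumed first, were not disclosed; the implied fund size is only an approximation backed out of the reported 66 percent figure.)

```python
# Default waterfall, EUR millions, per the figures quoted above.
waterfall = [
    ("Aas' initial margin + his default fund contribution", None),  # undisclosed, first loss
    ("Nasdaq 'skin in the game' contribution",              7.0),
    ("other members' default fund contributions",           107.0),
]

mutualized_hit = sum(amount for _, amount in waterfall if amount is not None)
implied_member_fund = 107.0 / 0.66        # "66 percent of the fund"
replenishment = 100.0

print(f"loss hitting the mutualized layers: EUR {mutualized_hit:.0f}m")
print(f"implied members' fund size:         EUR {implied_member_fund:.0f}m (approx.)")
print(f"required replenishment:             EUR {replenishment:.0f}m")
```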

This was bound to happen. In a way, it was good that it happened in a relatively small market.  But it provides a sobering demonstration of what I’ve said for years: clearing doesn’t eliminate losses, but affects the distribution of losses.  Further, financial institutions that back CCPs–the members–are the ultimate backstops.  Thus, clearing does not eliminate contagion or interconnections in the financial network: it just changes the topology of the network, and the channels by which losses can hit the balance sheets of big players.

Happening in the Nordic/European power markets, this is an interesting curiosity.  If it happens in the interest rate or equity markets, it could be a disaster.

We actually know very little about what happened, beyond the broad details.  We know Aas was long Nordic power and short German power, and that the spread widened due to wet weather in Norway (which depresses the price of hydro and reduces demand) and an increase in European prices due to increases in CO2 prices.  But Nasdaq trades daily, weekly, monthly, quarterly, and annual power products: we don’t know which blew up Aas.  Daily spreads are more volatile, and exhibit more extremes (kurtosis), but since margins are scaled to risk (at least theoretically–more on this below) what matters is the market move relative to the estimated risk.  Reports indicate that the spread moved 17x the typical move, but we don’t know what measure of “typical” is used here.  Standard deviation?  Not a very good measure when there is a lot of kurtosis (or skewness).

I also haven’t seen how big Aas’ initial margins were.  The total loss he suffered was bigger than the hit taken by the default fund, because under the loser-pays model, the initial margins would have been in the first loss position.

The big question in my mind relates to Nasdaq’s margin model.  Power price distributions deviate substantially from the Gaussian, and estimating those distributions is challenging in part because they are also conditional on day of the year and hour of the day, and on fundamental supply-demand conditions: one model doesn’t fit every day, every hour, every season, or every weather environment.  Moreover, a spread trade has correlation risk–dependence risk would be a better word, given that correlation is a linear measure of dependence and dependencies in power prices are not linear.  How did Nasdaq model this dependence and how did that impact margins?

One possibility is that Nasdaq’s risk/margin model was good, but this was just one of those things.  Margins are set on the basis of the tails, and tail events occur with some probability.

Given the nature of the tails in power prices (and spreads), reliance on a VaR-type model would be especially dangerous.  Setting margin based on something like expected shortfall would likely be superior here.  Which model does Nasdaq use?
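
A quick illustration of why, with purely invented numbers rather than Nasdaq's model or actual Nordic-German spread data: on a fat-tailed P&L distribution, the 99 percent expected shortfall sits far above the 99 percent VaR, so a VaR-based margin leaves much of the tail uncovered.

```python
import numpy as np

# Compare 99% VaR and 99% expected shortfall on a thin-tailed (normal) versus
# fat-tailed (Student-t) distribution of daily spread P&L. Illustrative only.

rng = np.random.default_rng(3)
n = 1_000_000
scale = 1.0  # arbitrary P&L units

normal_pnl = rng.normal(0.0, scale, n)
t_pnl = rng.standard_t(df=3, size=n) * scale   # heavy tails

def var_and_es(pnl, level=0.99):
    losses = -pnl
    var = np.quantile(losses, level)
    es = losses[losses >= var].mean()
    return var, es

for name, pnl in [("normal", normal_pnl), ("fat-tailed t(3)", t_pnl)]:
    var, es = var_and_es(pnl)
    print(f"{name:>15}: 99% VaR {var:5.2f}   99% ES {es:5.2f}   ES/VaR {es/var:4.2f}")
# For the normal, ES is only modestly above VaR; for the t(3), the gap is much
# larger -- margining off VaR alone leaves a lot of tail risk uncovered.
```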

I can also see the possibility that Nasdaq’s margin model was faulty, and that Aas had figured this out.  He then put on trades that he knew were undermargined because Nasdaq’s model was defective, which allowed him to take on more risk than Nasdaq intended.

In my early work on clearing I indicated that this adverse selection problem was a concern in clearing, and would lead CCPs–and those who believe that CCPs make the financial system safer–to underestimate risk and be falsely complacent.  Indeed, I argued that one reason clearing could be a bad idea is that it is more vulnerable to adverse selection problems, because modeling the distribution of gains/losses on cleared positions requires detailed knowledge, especially for more exotic products.  Traders who specialize in these products are likely to have MUCH better understanding about risks than a non-specialist CCP.

Aas cleared for himself, and this has caused some to get the vapors and conclude that Nasdaq was negligent in allowing him to do so.  Self-clearing is just an FCM with a house account, but with no client business: in some respects that’s less risky than a traditional FCM with client business as well as its own trading book.

Nasdaq required Aas to have €70 million in capital to self-clear.  Presumably Nasdaq will get some of that capital in an insolvency proceeding, and use it to repay default fund members–meaning that the €114 million loss is likely an overestimate of the ultimate cost borne by Nasdaq and the clearing members.

Further, that’s probably similar to the amount of capital that an FCM would have needed to carry a client position as big as Aas’.  That’s not inherently more risky (to the clearinghouse and its default fund) than if Aas had cleared through another firm (or firms).  Again, the issue is whether Nasdaq is assessing risks accurately so as to allow it to set clearing member capital appropriately.

But the point is that Aas had to have skin in the game to self-clear, just as an FCM would have had to clear for him.

Holding Aas’ positions constant, whether he cleared himself or through an FCM really only affected the distribution of losses, but not the magnitude.  If Aas had cleared through someone else, that someone else’s capital would have taken the hit, and the default fund would have been at risk only if that FCM had defaulted.  But the total loss suffered by FCMs would have been exactly the same, just distributed more unevenly.

Indeed, the more even distribution that occurred due to mutualization, which spread the default loss among multiple FCMs, might actually be preferable to having one FCM bear the brunt.

The real issue here is incentives.  My statement was that holding Aas’ positions constant, who he cleared through or whether he cleared at all affected only the distribution of losses.  Perhaps under different structures Aas might not have been able to take on this much risk.  But that’s an open question.

If he had cleared through another FCM, that FCM would have had an incentive to limit its positions because its capital was at risk.  But Aas’ capital was at risk–he had skin in the game too, and this was necessary for him to self-clear.  It’s by no means obvious that an FCM would have arrived at a different conclusion than Aas, and decided that his position represented a reasonable risk to its capital.

Here again a key issue is information asymmetry: would the FCM know more about the risk of Aas’ position, or less?  Given Aas’ allegedly obsessive behavior, and his long-time success as a trader, I’m pretty sure that Aas knew more about the risk than any FCM would have, and that requiring him to clear through another firm would not have necessarily constrained his position.  He would have also had an incentive to put his business at the dumbest FCM.

Another incentive issue is Nasdaq’s skin in the game–an issue that has exercised FCMs generally, not just on Nasdaq.  The exchange’s/CCP’s relatively thin contribution to the default fund arguably reduces its incentive to get its margin model right.  Evaluating whether Nasdaq’s relatively minor exposure to default risk led it to undermargin would require a thorough analysis of its margin model–a very complex exercise, and an impossible one given how little we know about the model.

But this all brings me back to themes I flogged to the collective shrug of many–indeed almost all–of the regulatory and legislative community back in the aftermath of the Crisis, when clearing was touted as the silver bullet for future crises.  Clearing is all about the allocation and pricing of counterparty credit risk.  Evaluation of counterparty credit risk in a derivatives context requires a detailed understanding of the price risks of the cleared products, and dependencies between these price risks and the balance sheet risks of participants in cleared markets.  Classic information problems–adverse selection and moral hazard (too little skin in the game)–make risk sharing costly, and can lead to the mispricing of risk.

The forensics about Aas blowing up real good, and the lessons learned from that experience, should focus on those issues.  Alas, I see little recognition of that in the media coverage of the episode, and betting on form, I would wager that the same is true of regulators as well.

The Aas blow up should be a salutary lesson in how clearing really works, what it can do, and what it can’t.   Cynic that I am, I’m guessing that it won’t be.  And if I’m right, the next time could be far, far worse.

