Streetwise Professor

October 18, 2018

Ticked Off About Spoofing? Consider This

Filed under: Commodities,Derivatives,Economics,Exchanges,Politics,Regulation — cpirrong @ 6:51 pm

An email from a legal academic in response to yesterday’s post spurred a few additional thoughts re spoofing.

One of my theories of spoofing is that it is a way to improve one’s position in the queue at the best bid or offer. Why does one stand in a queue? Why does one want to be closer to the front?

Simple: because there is a rent there to capture.  Where does the rent come from?  When what you are queuing for is underpriced, likely due to some price control.  Think of gas lines, or queues for sausage in the USSR.

In market making, the rent exists because the benefit from executing at the bid or offer exceeds the cost. The cost arises from (a) adverse selection, and (b) inventory cost/risk and other costs of participation. What is the source of the price control? The tick size.

Exchanges set a minimum price increment–the “tick.” When the tick size exceeds the costs of making a market, there is a rent. This makes it beneficial to increase the probability of execution of an at-the-market limit order: if the tick size exceeds the cost of executing a passive order, it pays to game one’s way up the queue. Spoofing is one way of gaming.
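To make the rent condition concrete, here is a minimal sketch (mine, not from the post; all dollar figures are hypothetical) of the per-fill arithmetic, assuming the quoted spread is pinned at one tick so that a passive fill earns roughly half a tick per trade:

```python
# A minimal sketch of when a queue-position rent exists. Assumes the quoted
# spread is constrained to one tick; all dollar figures are hypothetical.

def passive_fill_rent(tick_value, adverse_selection, other_costs):
    """Expected rent per filled contract for an at-the-market limit order."""
    half_spread = tick_value / 2.0
    return half_spread - adverse_selection - other_costs

# ES-style numbers: 0.25 point tick x $50/point = $12.50 tick value
rent = passive_fill_rent(tick_value=12.50, adverse_selection=4.00, other_costs=2.00)
print(f"rent per filled contract: ${rent:.2f}")
# If the rent is positive, anything that raises the probability of getting
# filled -- a better spot in the queue included -- is worth paying for.
```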

This has a variety of implications.

One implication is in the cross section: spoofing should be more prevalent when the non-adverse selection component of the spread (which is measured by temporary price movements in response to trades) is large. Relatedly, this implies that spoofing should be more likely the more negatively autocorrelated transaction prices are, i.e., the bigger the bid-ask bounce.
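One standard way to put a number on that bounce is the Roll (1984) estimator, which backs an effective spread out of the negative first-order autocovariance of price changes. A minimal sketch on simulated trades (my illustration, not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Roll's model: trade prices bounce between bid and ask around a random-walk value.
half_spread = 6.25                                 # hypothetical, dollars per contract
mid = np.cumsum(rng.normal(0, 2.0, 10_000))        # efficient price (random walk)
sides = rng.choice([-1, 1], size=10_000)           # -1 = trade at bid, +1 = trade at ask
prices = mid + sides * half_spread

dp = np.diff(prices)
autocov = np.cov(dp[1:], dp[:-1])[0, 1]            # negative: the bid-ask bounce
roll_spread = 2 * np.sqrt(-autocov) if autocov < 0 else float("nan")

print(f"first-order autocovariance: {autocov:.2f}")
print(f"Roll effective spread estimate: {roll_spread:.2f} (true spread: {2 * half_spread:.2f})")
```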

Another implication is in the time series.  Adverse selection costs can vary over time.  Spoofing should be more prevalent during periods when adverse selection costs are low.  These should also be periods of unusually large negative autocorrelations in transaction prices.

Another implication is that if you want to reduce spoofing  . . .  reduce the tick size.  Given what I just discussed, tick size reductions should be focused on instruments with a bigger bid/ask bounce/larger non-adverse selection driven spread component.

That is, why police the markets and throw people in jail?  Mitigate the problem by reducing the incentive to commit the offense.

This story also has implications for the political economy of spoofing prosecution (which was the main thrust of the email I received). HFT/algo traders who desire to capture the rent created when the tick exceeds adverse selection cost should complain the loudest about spoofing–and are most likely to drop the dime on spoofers. Casual empiricism supports at least the first of these predictions.

That is, as my correspondent suggested to me, not only are spoofing prosecutions driven by ambitious prosecutors looking for easy and unsympathetic targets, they generate political support from potentially politically influential firms.

One way to test this theory would be to cut tick sizes–and see who squeals the loudest.  Three guesses as to whom this might be, and the first two don’t count.


October 17, 2018

The Harm of a Spoof: $60 Million? More Like $10 Thousand

Filed under: Commodities,Derivatives,Economics,Exchanges,Regulation — cpirrong @ 4:08 pm

My eyes popped out when I read this statement regarding the DOJ’s recent criminal indictment (which resulted in some guilty pleas) for spoofing in the S&P 500 futures market:

Market participants that traded futures contracts in these three markets while the spoof orders distorted market prices incurred market losses of over $60 million.

$60 million in market losses–big number! For spoofing! How did they come up with that?

The answer is embarrassing, and actually rather disgusting.

The DOJ simply calculated the notional value of the contracts that were traded pursuant to the alleged spoofing scheme.  They took the S&P 500 futures price (e.g., 1804.50), multiplied that by the dollar value of a price point ($50), and multiplied that by the “approximate number of fraudulent orders placed” (e.g., 400).

So the defendants traded futures contracts with a notional value of approximately $60+ million.  For the DOJ to say that anyone “incurred market losses of over $60 million” based on this calculation is complete and utter bollocks.  Indeed, if someone touted that their trading system earned market profits of $60 million based on such a calculation in order to get business from the gullible, I daresay the DOJ and SEC would prosecute them for fraud.

This exaggeration is of a piece with the Sarao indictment, which claimed that his spoofing caused the Flash Crash.

And of course the financial press credulously regurgitated the number the DOJ put out.

I know why DOJ does this–it makes the crime look big and important, and likely matters in sentencing.  But quite frankly, it is a lie to claim that this number accurately represents in any way, shape, or form the economic harm caused by spoofing.

This gets to the entire issue of who is damaged by spoofing, and how. Does spoofing induce someone who would otherwise not have entered an aggressive order to cross the spread and incur the bid/ask cost? Does it cause someone to cancel a limit order, and therefore lose the opportunity to trade against an aggressive order and thereby earn the spread (the realized spread, not the quoted spread, in order to account for losses to better-informed traders)?

Those are realistic theories of harm, and they imply that the economic harm per contract is on the order of a tick in a liquid market like the ES. That is, per contract executed as a result of the spoof, the damage is .25 (the tick size) times $50 (the value of an S&P point). That is, a whopping $12.50. So, pace the DOJ, the ~800 “fraudulent orders placed” caused economic harm of about 10,000 bucks, not 60 mil. Maybe $20,000, under the theory that in a particular spoof, someone lost from crossing the spread, and someone else lost out on the opportunity to earn the spread. (Though interestingly, from a social perspective, that is a transfer, not a true loss.)
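Here is the arithmetic behind both numbers, using the figures quoted above (the indictment sums notional across markets and episodes to reach $60+ million; the point is the method, not the exact total):

```python
# DOJ-style "market losses": notional value of the spoofed contracts.
price = 1804.50        # S&P 500 futures price cited as an example in the indictment
point_value = 50.0     # dollars per index point
orders = 400           # "approximate number of fraudulent orders placed"
doj_notional = price * point_value * orders
print(f"DOJ-style notional: ${doj_notional:,.0f}")      # tens of millions per market

# Per-contract theory of harm: roughly one tick per contract induced to trade.
tick = 0.25                               # ES minimum price increment
harm_per_contract = tick * point_value    # $12.50
contracts = 800
tick_based_harm = harm_per_contract * contracts
print(f"tick-based harm:    ${tick_based_harm:,.0f}")   # about $10,000
```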

But $10,000 or $20,000 looks rather pathetic, compared to say $60 million, doesn’t it?  What’s three orders of magnitude between friends, eh?

Yes, maybe the DOJ just included a few episodes in the indictment, because that is sufficient for a criminal prosecution and conviction.  But even a lot more of such episodes does not add up to a lot of money.

This is precisely why I find the expenditure of substantial resources to prosecute spoofing to be so dubious.  There is other financial market wrongdoing that is far more harmful, which often escapes prosecution.  Furthermore, efficient punishment should be sized to the harm.  People pay huge fines, and go to jail–for years–for spoofing.  That punishment is hugely disproportionate to the loss, under the theory of harm that I advance here.  So spoofing is over-deterred.

Perhaps there are other theories of harm that justify the severe punishments for spoofing.  If so, I’d like to hear them–I haven’t yet.

These spoofing prosecutions appear to be a case of the drunk looking for his wallet (or a scalp) under the lamppost, because the light is better there.  In the electronic trading era, spoofing is possible–and relatively cheap to detect ex post.  So just trawl through the trading data for evidence of spoofing, and voila!–a criminal prosecution is likely to appear.  A lot easier than prosecuting market power manipulations that can cause nine and ten figure market losses.  (For an example of the DOJ’s haplessness in a prosecution of that kind of case, see US v. Radley.)

Spoofing is the kind of activity that is well within the competence of exchanges to detect and punish using their ordinary disciplinary procedures.  There’s no need to make a federal case out of it–literally.

The time should fit the crime.  The Department of Justice wildly exaggerates the crime of spoofing in order to rationalize the time.  This is inefficient, and well, just plain unjust.


September 26, 2018

We’re From the International Maritime Organization, and We’re Here to Help You: The Perverse Economics of New Maritime Fuel Standards

Filed under: Climate Change,Commodities,Economics,Energy,Politics,Regulation — cpirrong @ 6:26 pm

This Bloomberg piece from last month claims that the International Maritime Organization’s looming 2020 caps on sulfur emissions from ships “could lift crude prices by $4 a barrel when the measures come into effect in 2020.”

Not so fast. It depends on what you mean by “crude.” According to the International Oil Handbook, there are 195 different streams of crude oil. Crucially, the sulfur content of these crudes varies from basically zero to 5.9 percent. There is no such thing as the price of “crude,” in other words.

The IMO regulation will have different impacts on different crudes. It will no doubt cause the spread between sweet and sour crudes to widen. This happened in 2008, when European regulation mandating low sulfur diesel kicked in: this regulation contributed to the spike in benchmark Brent and WTI prices, and to wide spreads in crude prices. During this time (if memory serves), 10 VLCCs full of Iranian crude were swinging at anchor while WTI and Brent prices were screaming higher and sweet crude inventories were plunging, precisely because the regulation increased the demand for sweet crude and depressed demand for heavier, more sour varieties.

The IMO regulation will definitely reduce the demand for crude oil overall. The demand for crude is derived from the demand for fuels, notably transportation fuels. The regulation increases the cost of some transportation fuels, which decreases the (derived) demand for crude. This change will not be distributed evenly: demand for light, sweet crudes will actually increase, while demand for sour crudes will fall, and fall by more the more sour the crude.

The regulation will hit ship operators hard, and they will pass on the higher cost to shippers.  In the short run, carriers will eat some of the cost–perhaps the bulk of it.  But the long run supply elasticity of shipping is large (arguably close to perfectly elastic), meaning after fleet size adjusts shippers will bear the brunt.

The burden will fall heaviest on commodities, for which shipping cost is large relative to value. Therefore, farmers and miners will receive lower prices, and consumers will pay higher prices for commodity-intensive goods. Further, this regulatory tax will be highly regressive, falling on relatively low income individuals, who spend a higher share of their income on such goods.

This seems to be a case of almost all pain, little gain. The ostensible purpose of the regulation is to reduce pollution from sulfur emissions. Yes, ships will produce fewer such emissions, but due to the joint product nature of refined petroleum, overall sulfur emissions will fall far less.

Many ships currently use “bottom of the barrel” fuel oil that tends to be higher in sulfur. Many will achieve compliance by shifting to middle distillates. But the bottom of the barrel won’t go away. Over the medium to longer term, refineries will make investments that allow them to squeeze more middle distillates out of a barrel of crude, or to remove some sulfur, but inevitably refineries will produce some low-quality, high sulfur products: the sulfur has to go somewhere. This is inherent in the joint nature of fuel production.

And yes, there will be some adjustments on the crude supply side, with the differential between sweet and sour crude favoring production of the former over the latter.   But sour crudes will be produced, and new discoveries of sour crude will be developed.

Meaning that although consumption of high sulfur fuels by ships will go down, since (a) in equilibrium consumption equals production, and (b) due to the joint nature of production the output of high sulfur fuels will fall by less than ships’ consumption of them does, someone will consume most of the fuel oil that ships no longer use. And since someone is consuming it, they will emit the sulfur.

The most likely (near term) use of fuel oil is for power generation.  The Saudis are planning to ramp up the use of 3.5 percent sulfur fuel oil to generate power for AC and desalinization.  Other relatively poor countries (e.g., Bangladesh, Pakistan) are also likely to have an appetite for cheap high sulfur fuel oil to generate electricity.

The ultimate result will be a regulation that basically shifts who produces the sulfur emissions, with a far smaller impact on the total amount of emissions.

This represents a tragic–and classic–example of a regulation imposed on a segment of a larger market.  The pernicious effects of such a narrow regulation are particularly acute in oil, due to the joint nature of production.

Given the efficiency and distributive effects of the IMO regulation, it is almost certainly not a second best policy. Indeed, it is more likely to be a second worst policy. Or maybe a first worst policy: doing nothing at all is arguably better.

 


September 25, 2018

Default Is Not In Our Stars, But In Our (Power) Markets: Defaulting on Power Spread Trades Is Apparently a Thing

Filed under: Clearing,Commodities,Derivatives,Economics,Energy,Regulation — cpirrong @ 6:34 pm

Some other power traders–this time in the US–blowed up real good.   Actually preceding the Aas Nasdaq default by some months, but just getting attention in the mainstream press today, a Houston-based power trading company–GreenHat–defaulted on long-term financial transmission rights contracts in PJM.  FTRs are financial contracts that have cash-flows derived from the spread between prices at different locations in PJM.  Locational spreads in power markets arise due to transmission congestion, so FTRs can be used to hedge the risk of congestion–or to speculate on it.  FTRs are auctioned regularly.  In 2015 GreenHat bought at auction FTRs for 2018.  These positions were profitable in 2015 and 2016, but improvements in PJM transmission caused them to go underwater substantially in 2018.  In June, GreenHat defaulted, and now PJM is dealing with the mess.
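For readers unfamiliar with the instrument, here is a minimal sketch of how an FTR’s cash flow derives from a locational spread. It is simplified (real PJM FTRs settle on the congestion component of hourly day-ahead prices) and the numbers are hypothetical:

```python
# Simplified FTR payoff: the holder of a source-to-sink FTR collects the
# locational price spread times the MW quantity, hour by hour.

def ftr_payoff(lmp_source, lmp_sink, mw):
    """Hourly FTR cash flows; positive when the sink clears above the source."""
    return [(sink - src) * mw for src, sink in zip(lmp_source, lmp_sink)]

# Hypothetical hourly prices ($/MWh) on either side of a congested interface
source = [28.0, 30.0, 35.0, 32.0]
sink   = [29.0, 45.0, 33.0, 60.0]
flows = ftr_payoff(source, sink, mw=100)
print(flows, "total:", sum(flows))
# A long-term FTR is a strip of these hourly spread bets, which is why a
# structural change in congestion (e.g., new transmission) can sink it.
```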

The cost of doing so is still unknown. Under PJM rules, the organization is required to liquidate defaulted positions. However, the bids PJM received for the defaulted portfolio were 4x-6x the prevailing secondary market price, due to the size of the positions and the illiquidity of long-term FTRs–with “long term” being pretty much anything beyond a month. Hence, PJM has asked FERC for a waiver of the requirement for immediate liquidation, and the PJM membership has voted to suspend liquidating the defaulted positions until November 30.

PJM members are on the hook for the defaulted positions.  The positions were underwater to the tune of $110 million as of June–and presumably this was based on market prices, meaning that the cost of liquidating these positions would be multiples of that.  In other words, this blow up could put Aas to shame.

PJM operates the market on a credit system, and market participants can be required to post additional collateral. However, long-term FTR credit is determined only on an annual basis: “In conjunction with the annual update of historical activity that is used in FTR credit requirement calculations, PJM will recalculate the credit requirement for long-term FTRs annually, and will adjust the Participant’s credit requirement accordingly. This may result in collateral calls if requirements increase.” Credit on shorter-dated positions is calculated more frequently: what triggered the GreenHat default was a failure to make its payment on its June FTR obligation.

This event is resulting in calls for a re-examination of  PJM’s FTR credit scheme.  As well it should!  However, as the Aas episode demonstrates, it is a fraught exercise to determine the exposure in electricity spread transactions.  This is especially true for long-dated positions like the ones GreenHat bought.

The PJM episode reinforces the Aas episode’s lessons about the challenges of handling defaults–especially of big positions in illiquid instruments. Any auction is very likely to turn into a fire sale that exacerbates the losses that caused the default in the first place. Moral of the story: mutualizing default risk (either through a CCP, or a membership organization like PJM) can impose big losses on the participants in the risk pool.

The dilemma is that the instruments in question can provide valuable benefits, and that speculators can be necessary to achieve these benefits.  FTRs are important because they allow hedging of congestion risk, which can be substantial for both generation and load: locational spreads can be very volatile due to a variety of factors, including the lack of storability of power, non-convexities in generation (which can make it very costly to reduce generation behind a constraint), and generation capacity constraints and inelastic demand (which make it very costly to increase generation or reduce consumption on the other side of the constraint).  So FTRs play a valuable hedging role, and in most markets financial players are needed to absorb the risk.  But that creates the potential for default, and the very factors that make FTRs valuable hedging tools can make defaults very costly.

FTR liquidity is also challenged by the fact that unlike hedging say oil price risk or corn price risk, where a standard contract like Brent or CBT corn can provide a pretty good hedge for everyone, every pair of locations is a unique product that is not hedged effectively by an FTR based on another pair of locations.  The market is therefore inherently fragmented, which is inimical to liquidity.  This lack of liquidity is especially devastating during defaults.

So PJM (and other RTOs) faces a dilemma.  As the Nasdaq event shows, even daily marking to market and variation margining can’t prevent defaults.  Furthermore, moving to a no-credit system (like a CCP) isn’t foolproof, and is likely to be so expensive that it could seriously impair the FTR market.

We’ve seen two default examples in electricity this past summer. They won’t be the last, due to the inherent nature of electricity.

 


September 20, 2018

The Smoke is Starting to Clear from the Aas/Nasdaq Blowup

Filed under: Clearing,Commodities,Derivatives,Economics,Energy,Exchanges,Regulation — cpirrong @ 11:08 am

Amir Khwaja of Clarus has a very informative post about the Nasdaq electricity blow-up.

The most important point: Nasdaq uses SPAN to calculate IM.  SPAN was a major innovation back in the day, but it is VERY long in the tooth now (2018 is its 30th birthday!).  Moreover, the most problematic part of SPAN is the ad hoc way it handles dependence risk:

  • Intra-commodity spreading parameters – rates and rules for evaluating risk among portfolios of closely related products, for example products with particular patterns of calendar spreads
  • Inter-commodity spreading parameters – rates and rules for evaluating risk offsets between related product

…..

CME SPAN Methodology Combined Commodity Evaluations

The CME SPAN methodology divides the instruments in each portfolio into groupings called combined commodities. Each combined commodity represents all instruments on the same ultimate underlying – for example, all futures and all options ultimately related to the S&P 500 index.

For each combined commodity in the portfolio, the CME SPAN methodology evaluates the risk factors described above, and then takes the sum of the scan risk, the intra-commodity spread charge, and the delivery risk, before subtracting the inter-commodity spread credit. The CME SPAN methodology next compares the resulting value with the short option minimum; whichever value is larger is called the CME SPAN methodology risk requirement. The resulting values across the portfolio are then converted to a common currency and summed to yield the total risk for the portfolio.
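A minimal sketch (mine, not CME’s code) of the aggregation the passage above describes; all parameter values are placeholders, since actual scan ranges, spread charges, and credits are exchange-set:

```python
# Per combined commodity: scan risk + intra-commodity spread charge + delivery
# risk - inter-commodity spread credit, floored at the short option minimum.

def span_requirement(scan_risk, intra_charge, delivery_risk,
                     inter_credit, short_option_min):
    risk = scan_risk + intra_charge + delivery_risk - inter_credit
    return max(risk, short_option_min)

# Hypothetical Nordic and German power legs of a spread position:
nordic = span_requirement(120_000, 15_000, 0, 90_000, 5_000)
german = span_requirement(110_000, 12_000, 0, 80_000, 5_000)
total_im = nordic + german   # converted to a common currency and summed
print(f"portfolio initial margin: {total_im:,.0f}")
# The inter-commodity credit is where an overly generous "rule for evaluating
# risk offsets" can quietly under-margin a spread trade.
```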

I would not be surprised if the handling of Nordic-German spread risk was woefully inadequate to capture the true risk exposure. Electricity spreads are strange beasts, and “rules for evaluating risk offsets” are unlikely to capture this strangeness correctly, especially given the fact that electricity markets have idiosyncrasies that one-size-fits-all rules are unlikely to capture. I also conjecture that Aas knew this, and loaded the boat with this spread trade because he knew that the risk was grossly underpriced.

There are reports that the Nasdaq margin breach at the time of default (based on mark-to-market prices) was not nearly as large as the €140 million hit to the default fund.  In these accounts, the bulk of the hit was due to the fact that the price at which Aas’ portfolio was auctioned off included a substantial haircut to prevailing market prices.

Back in the day, I argued that one of the real advantages of central clearing was a more orderly handling of defaulted portfolios than the devil-take-the-hindmost process in OTC bilateral markets (cf. the outcome of the LTCM disaster almost exactly 20 years ago, with the Fed-midwifed deal being completed on 23 September, 1998). (Ironically, spread trades were the cause of LTCM’s demise too.)

But the devil is in the details of the auction, and in market conditions at the time of the default–which are almost certainly unsettled, hence the default.  The CME was criticized for its auction of the defaulted Lehman positions: the bankruptcy trustee argued that the price CME obtained was too low, thereby harming the creditors.   The sell-off of the Amaranth NG positions in September, 2006 (what is it about September?!?) to JP Morgan and Citadel (if memory serves) was also at a huge discount.

Nasdaq has been criticized for allowing only 4 firms to bid: narrow participation was also the criticism leveled at CME and NYMEX clearing in the Lehman and Amaranth episodes, respectively.  Nasdaq argues that telling the world could have sparked panic.

But this episode, like Lehman and Amaranth before it, demonstrates the challenges of auctioning big positions. Only a small number of market participants are likely to have the capital, or the risk appetite, to take on a big defaulted position in its entirety. Thus, limited participation is almost inevitable, and even if Nasdaq had invited more bidders, there is room to doubt whether the fifth or sixth or seventh bidder would have been able to compete seriously with the four who actually participated. Those who have the capital and risk appetite to bid seriously for big positions will almost certainly demand a big discount to compensate for the risk of holding the position until they can work it off. Moreover, limited participation limits competition, which should exacerbate the underpricing problem.

Thus, even with a structured auction process, disposing of a big defaulted portfolio is almost inevitably something of a fire sale. This is a risk borne by the participants in the default fund. Although the exposure via the default fund is sometimes argued to be an incentive for the default fund participants to bid aggressively, this is unlikely because there are externalities: the aggressive bidder bears all the risks and costs, and provides benefits to the rest of the members. Free riding is a big problem.

In theory, equitizing the risk might improve outcomes. By selling shares in the defaulted portfolio, no single bidder (or pair of bidders) would have to absorb the entire position, and risk could be spread more efficiently: this could reduce the risk discount in the price. But who would manage the portfolio? What are the mechanics of contributing to IM and VM? Would it be like a bad bank, existing as a zombie until the positions rolled off?

Another follow-up from my previous post relates to the issue of self-clearing. On Twitter and elsewhere, some have suggested that clearing through a third party would have been an additional check. Surely an FCM would be less likely to fall in love with a position than the trader who puts it on, but the effectiveness of the FCM as a check depends on its evaluation of risk, and it may be no smarter than the CCP that sets margins. Furthermore, there are examples of FCMs having the same trade in their house account as one of their big customers–perhaps because they think the client is really smart and they want to free ride off his genius. As a historical example, Griffin Trading had a big trade in the same instrument and direction as its biggest client. The trade went pear-shaped, the client defaulted, and Griffin did too.

I also need to look to see whether Nasdaq Commodities uses the US futures clearing model, which does not segregate positions.  If it does, and if Aas had cleared through an FCM, it is possible that the FCM’s clients could have lost money as a result of his default.  This model has fellow-customer risk: by clearing for himself, Aas did not create such a risk.

I also note that the desire to expand clearing post-Crisis has made it difficult and more costly for firms to find FCMs.  This problem has been exacerbated by the Supplementary Leverage Ratio.  Perhaps the cost of clearing through an FCM appeared excessive to Aas, relative to the alternative of self-clearing.  Thus, if regulators blanch at the thought of self-clearing (not saying that they should), they should get serious about addressing the FCM cost issue, and regulations that inflate these costs but generate little offsetting benefit.

Again, this episode should spark (no pun intended!) a more thorough reconsideration of clearing generally.  The inherent limitations of margin models, especially for more complex products or markets.  The adverse selection problems that crude risk models can create.  The challenges of auctioning defaulted portfolios, and the likelihood that the auctions will become fire sales.  The FCM capacity issue.

The supersizing of clearing in the post-Crisis world has also supersized all of these concerns.  The Aas blowup demonstrates all of them.  Will CCPs and regulators take heed? Or will some future September bring us the mother of all blowups?


September 18, 2018

He Blowed Up Real Good. And Inflicted Some Collateral Damage to Boot

I’m on my way back from my annual teaching sojourn in Geneva, plus a day in the Netherlands for a speaking engagement. While I was taking that European not-quite-vacation, a Norwegian power trader, Einar Aas, suffered a massive loss in cleared spread trades between Nordic and German electricity. The loss was so large that it blew through Aas’ initial margin and default fund contribution to the clearinghouse (Nasdaq), consumed Nasdaq’s €7 million capital contribution to the default fund, and €107 million of the rest of the default fund–a mere 66 percent of the fund. The members have been ordered to contribute €100 million to top up the fund.

This was bound to happen. In a way, it was good that it happened in a relatively small market.  But it provides a sobering demonstration of what I’ve said for years: clearing doesn’t eliminate losses, but affects the distribution of losses.  Further, financial institutions that back CCPs–the members–are the ultimate backstops.  Thus, clearing does not eliminate contagion or interconnections in the financial network: it just changes the topology of the network, and the channels by which losses can hit the balance sheets of big players.

Happening in the Nordic/European power markets, this is an interesting curiosity.  If it happens in the interest rate or equity markets, it could be a disaster.

We actually know very little about what happened, beyond the broad details.  We know Aas was long Nordic power and short German power, and that the spread widened due to wet weather in Norway (which depresses the price of hydro and reduces demand) and an increase in European prices due to increases in CO2 prices.  But Nasdaq trades daily, weekly, monthly, quarterly, and annual power products: we don’t know which blew up Aas.  Daily spreads are more volatile, and exhibit more extremes (kurtosis), but since margins are scaled to risk (at least theoretically–more on this below) what matters is the market move relative to the estimated risk.  Reports indicate that the spread moved 17x the typical move, but we don’t know what measure of “typical” is used here.  Standard deviation?  Not a very good measure when there is a lot of kurtosis (or skewness).
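To see why “17x the typical move” is close to meaningless without knowing the tail shape, compare how often a 17-standard-deviation daily move shows up under a normal distribution versus a fat-tailed Student-t (the degrees of freedom here are purely illustrative, not an estimate of Nordic-German spread dynamics):

```python
import numpy as np
from scipy import stats

k = 17  # "17x the typical move"

# Probability of a move at least k standard deviations out, two-sided.
p_normal = 2 * stats.norm.sf(k)

df = 3                                   # heavy-tailed Student-t (illustrative)
scale = 1 / np.sqrt(df / (df - 2))       # rescale so the t has unit variance
p_t = 2 * stats.t.sf(k / scale, df)

print(f"normal:    {p_normal:.3e}")      # effectively never
print(f"student-t: {p_t:.3e}  (~1 in {1 / p_t:,.0f} days)")
```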

I also haven’t seen how big Aas’ initial margins were.  The total loss he suffered was bigger than the hit taken by the default fund, because under the loser-pays model, the initial margins would have been in the first loss position.

The big question in my mind relates to Nasdaq’s margin model. Power price distributions deviate substantially from the Gaussian, and estimating those distributions is challenging in part because they are also conditional on day of the year and hour of the day, and on fundamental supply-demand conditions: one model doesn’t fit every day, every hour, every season, or every weather environment. Moreover, a spread trade has correlation risk–dependence risk would be a better word, given that correlation is a linear measure of dependence and dependencies in power prices are not linear. How did Nasdaq model this dependence, and how did that impact margins?

One possibility is that Nasdaq’s risk/margin model was good, but this was just one of those things.  Margins are set on the basis of the tails, and tail events occur with some probability.

Given the nature of the tails in power prices (and spreads), reliance on a VaR-type model would be especially dangerous here. Setting margin based on something like expected shortfall would likely be superior. Which model does Nasdaq use?
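A minimal sketch of the difference on simulated fat-tailed P&L (parameters purely illustrative): historical VaR looks only at the loss quantile, while expected shortfall averages the losses beyond it.

```python
import numpy as np

rng = np.random.default_rng(42)
pnl = rng.standard_t(df=3, size=100_000) * 1_000   # fat-tailed daily P&L, in EUR

alpha = 0.99
var = -np.quantile(pnl, 1 - alpha)                 # 99% value-at-risk
es = -pnl[pnl <= -var].mean()                      # expected shortfall beyond VaR

print(f"99% VaR:                {var:,.0f}")
print(f"99% expected shortfall: {es:,.0f}")
# With heavy tails, ES sits well above VaR -- margin set off the quantile
# alone ignores just how bad the bad days are.
```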

I can also see the possibility that Nasdaq’s margin model was faulty, and that Aas had figured this out.  He then put on trades that he knew were undermargined because Nasdaq’s model was defective, which allowed him to take on more risk than Nasdaq intended.

In my early work on clearing I indicated that this adverse selection problem was a concern in clearing, and would lead CCPs–and those who believe that CCPs make the financial system safer–to underestimate risk and be falsely complacent. Indeed, I argued that one reason clearing could be a bad idea is that it was more vulnerable to adverse selection problems because the need to model the distribution of gains/losses on cleared positions requires detailed knowledge, especially for more exotic products. Traders who specialize in these products are likely to have MUCH better understanding about risks than a non-specialist CCP.

Aas cleared for himself, and this has caused some to get the vapors and conclude that Nasdaq was negligent in allowing him to do so.  Self-clearing is just an FCM with a house account, but with no client business: in some respects that’s less risky than a traditional FCM with client business as well as its own trading book.

Nasdaq required Aas to have €70 million in capital to self-clear.  Presumably Nasdaq will get some of that capital in an insolvency proceeding, and use it to repay default fund members–meaning that the €114 million loss is likely an overestimate of the ultimate cost borne by Nasdaq and the clearing members.

Further, that’s probably similar to the amount of capital that an FCM would have needed to carry a client position as big as Aas’. That’s not inherently more risky (to the clearinghouse and its default fund) than if Aas had cleared through another firm (or firms). Again, the issue is whether Nasdaq is assessing risks accurately so as to allow it to set clearing member capital appropriately.

But the point is that Aas had to have skin in the game to self-clear, just as an FCM would have had to clear for him.

Holding Aas’ positions constant, whether he cleared himself or through an FCM really only affected the distribution of losses, but not the magnitude.  If Aas had cleared through someone else, that someone else’s capital would have taken the hit, and the default fund would have been at risk only if that FCM had defaulted.  But the total loss suffered by FCMs would have been exactly the same, just distributed more unevenly.

Indeed, the more even distribution that occurred due to mutualization, which spread the default loss among multiple FCMs, might actually be preferable to having one FCM bear the brunt.

The real issue here is incentives.  My statement was that holding Aas’ positions constant, who he cleared through or whether he cleared at all affected only the distribution of losses.  Perhaps under different structures Aas might not have been able to take on this much risk.  But that’s an open question.

If he had cleared through another FCM, that FCM would have had an incentive to limit its positions because its capital was at risk.  But Aas’ capital was at risk–he had skin in the game too, and this was necessary for him to self-clear.  It’s by no means obvious that an FCM would have arrived at a different conclusion than Aas, and decided that his position represented a reasonable risk to its capital.

Here again a key issue is information asymmetry: would the FCM know more about the risk of Aas’ position, or less?  Given Aas’ allegedly obsessive behavior, and his long-time success as a trader, I’m pretty sure that Aas knew more about the risk than any FCM would have, and that requiring him to clear through another firm would not have necessarily constrained his position.  He would have also had an incentive to put his business at the dumbest FCM.

Another incentive issue is Nasdaq’s skin in the game–an issue that has exercised FCMs generally, not just on Nasdaq. The exchange’s/CCP’s relatively thin contribution to the default fund arguably reduces its incentive to get its margin model right. Evaluating whether Nasdaq’s relatively minor exposure to default risk led it to undermargin requires a more thorough analysis of its margin model–a very complex exercise that is impossible given what we know about the model.

But this all brings me back to themes I flogged to the collective shrug of many–indeed almost all–of the regulatory and legislative community back in the aftermath of the Crisis, when clearing was the silver bullet for future crises.   Clearing is all about the allocation and pricing of counterparty credit risk.  Evaluation of counterparty credit risk in a derivatives context requires a detailed understanding of the price risks of the cleared products, and dependencies between these price risks and the balance sheet risks of participants in cleared markets.  Classic information problems–adverse selection and moral hazard (too little skin in the game)–make risk sharing costly, and can lead to the mispricing of risk.

The forensics about Aas blowing up real good, and the lessons learned from that experience, should focus on those issues.  Alas, I see little recognition of that in the media coverage of the episode, and betting on form, I would wager that the same is true of regulators as well.

The Aas blow up should be a salutary lesson in how clearing really works, what it can do, and what it can’t.   Cynic that I am, I’m guessing that it won’t be.  And if I’m right, the next time could be far, far worse.


September 5, 2018

Nothing New Under the Sun, Ag Processing and Trading Edition

Filed under: Commodities,Economics,Politics,Regulation — cpirrong @ 2:30 pm

New Jersey senator Cory Booker has introduced legislation to impose “a temporary moratorium on mergers and acquisitions between large farm, food, and grocery companies, and establish a commission to strengthen antitrust enforcement in the agribusiness industry.” Booker frets about concentration in the industry, noting that the four-firm concentration ratios in pork processing, beef processing, soybean crushing, and wet corn milling are upwards of 70 percent, and four major firms “control” 90 percent of the world grain trade.

My first reaction is: where has Booker been all these years? This is hardly a new phenomenon. Exactly a century ago–starting in 1918–in response to concerns about, well, concentration in the meat-packing industry, the Federal Trade Commission published a massive 6 volume study of the industry. The main theme was that the industry was controlled by five major firms. A representative subject heading in this work is “[m]ethods of the five packers in controlling the meat-packing industry.” “The five packers” is a recurring refrain.

The consolidation of the packing industry in the United States in the late-19th and early-20th centuries was a direct result of the transportation revolution, notably the development of railroads, and of refrigeration technology that permitted the exploitation of economies of scale in packing. The industry was not just concentrated in the sense of having a relatively small number of firms–it was geographically concentrated as well, with Chicago assuming a dominant role in the 1870s and later, largely supplanting earlier packing centers like Cincinnati (which at one time was referred to as “Porkopolis”).

In other words, concentration in meat-packing has been the rule for well over a century, and reflects economies of scale.

Personal aside: as a PhD student at Chicago, I was a beneficiary of the legacy of the packing kings of Chicago: I was the Oscar Mayer Fellow, and the fellowship paid my tuition and stipend. My main regret: I never had a chance to drive the Wienermobile (which should have been a perk!). My main source of relief: I never had to sing an adaptation of the Oscar Mayer Wiener Song: “Oh I wish I were an Oscar Mayer Fellow, that’s what I really want to be.”

Back to the subject at hand!

Booker also frets about vertical integration, and this is indeed a difference between the 2018 meat industry and the 1918 version: as the Union Stockyards in Chicago attested–by the smell, if nothing else–the big packers did not operate their own feedlots, but bought livestock raised in the country and shipped to Chicago for processing.

I am a skeptic about market power-based explanations of vertical integration, and there is no robust economic theory that demonstrates that vertical integration is anti-competitive.  The models that show how vertical integration can be used to reduce competition tend to be highly stylized toys dependent on rather special assumptions, and hence are very fragile and don’t really shed much light on the phenomenon.

Transactions cost-based theories are much more plausible and empirically successful, and I would imagine that vertical integration in meat packing is driven by TCE considerations.  I haven’t delved into the subject, but I would guess that vertical integration enhances quality control and monitoring, and reduces the asymmetric information problems that are present in spot transactions, where a grower has better information about the quality of the cattle, and the care, feeding, and growing conditions than a buyer.

I’d also note that some of the other industries Booker mentions–notably bean and corn processing–have not seen upstream integration at all.

This variation in integration across different types of commodities suggests that transactional differences result in different organizational responses.  Grain and livestock are very different, and these likely give rise to different transactions costs for market vs. non-market transactions in the two sectors.  It is difficult to see how the potential for monopsony power differs across these sectors.

Insofar as the major grain traders are concerned, again–this is hardly news.  It was hardly news 40 years ago when Dan Morgan wrote Merchants of Grain.

Furthermore, Booker’s concerns seem rather quaint in light of the contraction of merchant margins, about which I’ve written a few posts.  Ironically, as my most recent ABCD post noted, downstream vertical integration by farmers into storage and logistics is a major driver of this trend.

To the extent that consolidation is in play in grains (and also in softs, such as sugar), it is a reflection of the industry’s travails, rather than driven by a drive to monopolize the industry.  Consolidation through merger is a time-tested method for squeezing out excess capacity in a static or declining industry.

Booker’s bill almost certainly has no chance of passage.  But it does reflect a common mindset in DC.  This is a mindset that is driven by simplistic understandings of the drivers of industrial structure, and is especially untainted by any familiarity with transactions cost economics and what it has to say about vertical integration.


August 20, 2018

Goodhart’s Law on Steroids, PCP, and Crack: Chinese GDP

Filed under: China,Commodities,Economics,Politics — cpirrong @ 6:46 pm

Goodhart’s Law states that if a measure becomes a target, it ceases being an informative measure. If you want to see an illustration of Goodhart’s Law in action on a humungous scale, just look at China.

Michael Pettis has a piece in Bloomberg which, in brief, says that China has a GDP target.   If it appears that the country will fall short of the target, local governments get the high sign to invest in infrastructure, construction, and the like.  Local governments control credit creation (by guaranteeing bank debts) so banks are willing to lend to finance this investment: further, frequently the government will jawbone banks, or will twiddle the knobs in the banking system (e.g., lowering reserve requirements) to get banks to supply the necessary funds.

The investments are guaranteed (though what revenue stream or assets back the guarantees Pettis doesn’t say, and there are reasons to doubt the value of these guarantees in a crunch).  Hence, banks never have to write down the debt even if the investments turn out to be junk, with a value far less than the cost incurred to create the underlying assets.

So basically, the Chinese government can produce any GDP number it wants.  Voila, apropos Goodhart, the GDP number is useless.

You’d like GDP to measure the value of goods and services (including investment goods) created.  Instead, in China on the fixed asset side in particular, it measures cost, which may bear little relationship to value when economic decisions are made according to the process that Pettis describes.  In market economies where banks and borrowers have hard budget constraints, investments that don’t pan out are written down, and the losses are deducted from income.  That doesn’t happen in China.

So what is national income in China? I’d start with consumption, though even that may be overstated due to issues with price indices/inflation measurement. Then I’d add a constant X times reported fixed investment, where X<1. Probably a lot less than 1, to take into account the fact that much investment has a cost that exceeds value. Further, I’d deduct some fraction of accumulated past investment to reflect writedowns that should be made, but aren’t.

The focus of this analysis should be on determining X.  X should be a function of something related to estimated shortfall of GDP from target absent stimulus: the bigger the shortfall, the smaller X (because more bad investment is likely when the shortfall is big, as it’s then that the government encourages investment to make up the shortfall).  It could be a function of the increase in fixed asset investment, or construction investment, with a smaller X when investment in those categories shoots up.
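A minimal sketch of the proposed adjustment (the linear form for X and every number below are placeholders of mine; the post doesn’t estimate them):

```python
# Haircut reported fixed investment by X < 1, where X shrinks as the estimated
# shortfall being papered over by stimulus grows, and deduct phantom writedowns.

def adjusted_gdp(consumption, reported_fixed_investment, other_gdp,
                 shortfall_vs_target, past_investment_stock,
                 base_x=0.85, sensitivity=0.5, writedown_rate=0.02):
    # X falls with the estimated GDP shortfall from target (placeholder functional form).
    x = max(base_x - sensitivity * shortfall_vs_target, 0.0)
    writedowns = writedown_rate * past_investment_stock
    return consumption + x * reported_fixed_investment + other_gdp - writedowns

# Hypothetical figures, trillions of yuan:
print(adjusted_gdp(consumption=40, reported_fixed_investment=35, other_gdp=15,
                   shortfall_vs_target=0.03, past_investment_stock=300))
```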

A few other remarks.

First, it is stories like Pettis’ that convince me that modern China represents the most colossal misallocation of capital in history.

Second, it also makes me skeptical about Scott Sumner’s use of state-owned-enterprise (SOE) share of employment as a measure of centralized control of the economy. Most of the capital, and related employment, that results from the GDP targeting channel that Pettis analyzes flows through private firms.  The government controls/affects resource allocation via incentives given to local governments, which in turn incentivize banks and private firms to achieve the government objective.

Spitballing here, but I think a better measure would be something along the lines of the ratio of the volatility in fixed investment to the volatility in GDP.  Or maybe the ratio of the volatility in credit creation to the volatility of GDP.  Chinese GDP volatility, especially post-crisis, is laughably low.  The channel that Pettis identifies stabilizes GDP (reducing its volatility) by changing investment/credit creation in response to changing economic conditions (thereby increasing its volatility).  The only problem with this measure is that there is a real risk it will become infinite.

In (relatively) market-oriented economies, investment is the most volatile component of GDP, so the ratio I propose would be positive in those economies.  But that could serve as a market economy benchmark against which to compare China.  I’m guessing that China’s ratio would be substantially larger.
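A minimal sketch of the proposed diagnostic, computed on growth rates so the series are comparable (the data are made up; the point is that a suspiciously smooth GDP series being stabilized by volatile investment produces a very large ratio):

```python
import numpy as np

def vol_ratio(investment, gdp):
    """Std dev of investment growth over std dev of GDP growth."""
    g_inv = np.diff(np.log(investment))
    g_gdp = np.diff(np.log(gdp))
    return np.std(g_inv, ddof=1) / np.std(g_gdp, ddof=1)

# Hypothetical annual series (trillions, constant prices):
gdp        = np.array([60.0, 64.1, 68.5, 73.0, 77.9, 83.2])   # suspiciously smooth
investment = np.array([25.0, 29.5, 27.0, 33.0, 30.5, 38.0])   # doing the stabilizing

print(f"vol(investment growth) / vol(GDP growth): {vol_ratio(investment, gdp):.1f}")
```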

Third, when looking at the demand for commodities, the potential for shortfalls of economic performance from government target should be decisive.  These shortfalls induce the turning of the credit spigot which juices the demand for commodities.

In sum, what matters in China is not whether or not GDP hits the target–it will! The question is what the government has to do to hit it.


August 16, 2018

Why ABCD Sing the Blues, Part II: Increased Farm Scale Leads to Greater Competition in Capacity and Less Monopsony Power

Filed under: Commodities,Derivatives,Economics,Politics,Regulation — cpirrong @ 6:34 pm

In “Why Are ABCD Singing the Blues?” I called bull on the claim that ag trading firms were suffering through a rough period because of big crops and low prices.  I instead surmised that gains in capacity, in storage and throughput facilities, had outstripped growth in the amount of grain handled, and that this was pressuring margins.  In yesterday’s WSJ, Jacob Bunge (no relation, apparently, to the grain trading family) had a long and dense article that presents a lot of anecdotal support for that view.  The piece also provides other information that allows me to supplement and expand on it.

In a nutshell, due to increased economies of scale in farming, farms have grown larger. Many farms have grown to the point that they achieve efficient scale in storage and logistics, warranting investment in their own storage facilities and trucks, and thus can vertically integrate into the functions traditionally performed by Cargill and the others. This has led to an expansion in storage capacity and logistical capacity overall, which has reduced the derived demand for the storage and logistics assets owned and operated by the ABCDs. Jacob’s article presents a striking example of an Illinois farmer who bought a storage facility from Cargill.

In brief, more integrated farms have invested in capacity that competes with the facilities owned by Cargill, ADM, Bunge, and smaller firms in the industry.  No wonder their profits have fallen.

The other thing that the article illustrates is that scale plus cheaper communication costs have reduced the monopsony power of the grain merchants.  The operation of the farmer profiled in the piece is so large that many merchants, including some from a distance away, are competing for his business.  Furthermore, the ability to store his own production gives the farmer the luxury of time to sell: he doesn’t have to sell at harvest time to the local elevator at whatever price the latter offers–which was historically low-balled due to the cost of hauling to a more distant elevator.  Choosing the time to sell gives the farmer the value of the optionality inherent in storage–and the traditional merchant loses that option.  Further, more time allows the farmer to seek out and negotiate better deals from a wider variety of players.

The traditional country market for grain can be modeled well as a simple spatial economy with fixed costs (the costs of building/operating an elevator). Fixed costs limited the number of elevators, and transportation costs between spatially separated elevators gave each elevator some market power in its vicinity: more technically, transportation costs meant that the supply of grain to a country elevator was upward sloping, with the nearby farms willing to sell at lower prices than more distant ones closer to competing elevators. This gave the elevators monopsony power. (And no doubt, competition was limited even in multi-elevator towns, because the conditions for tacit collusion were ripe.)
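A minimal sketch of that spatial logic (Hotelling-style, with made-up numbers): farms sit on a line between two elevators and haul to whichever one nets the better farmgate price, so each elevator’s supply slopes upward in its own bid.

```python
# Two elevators at either end of a D-mile line; farms are spread uniformly
# between them and pay t dollars per bushel per mile to haul.

def share_to_elevator_a(bid_a, bid_b, haul_cost, distance):
    """Fraction of farms for whom A's net price beats B's:
    bid_a - t*d >= bid_b - t*(D - d)  =>  d <= (bid_a - bid_b + t*D) / (2*t)."""
    d_star = (bid_a - bid_b + haul_cost * distance) / (2 * haul_cost)
    return min(max(d_star / distance, 0.0), 1.0)

# Raising A's bid pulls in more distant farms -- an upward-sloping supply
# curve, hence monopsony power over the nearby ones.
for bid_a in (3.40, 3.45, 3.50):
    s = share_to_elevator_a(bid_a, bid_b=3.50, haul_cost=0.01, distance=40)
    print(f"A bids ${bid_a:.2f}: serves {s:.0%} of farms")
```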

Once upon a time, the monopsony power of elevator operators was a hot-button political issue.  One impetus for the farm cooperative movement was to counteract the monopsony power of the line elevator operators.  The middlemen didn’t like this one bit, and that was the reason that they excluded cooperatives from membership of futures exchanges, like the Chicago Board of Trade: this exclusion raised cooperatives’ costs, and was effectively a raising-rivals-cost strategy.  Brokers also supported excluding cooperatives because as members cooperatives could have circumvented broker commission cartels (i.e., the official, exchange-approved and enforced minimum commission rates).  This is why the Commodity Exchange Act contains this language:

No board of trade which has been designated or registered as a contract market or a derivatives transaction execution facility [shall] exclude from membership in, and all privileges on, such board of trade, any association or corporation engaged in cash commodity business having adequate financial responsibility which is organized under the cooperative laws of any State, or which has been recognized as a cooperative association of producers by the United States Government or by any agency thereof, if such association or corporation complies and agrees to comply with such terms and conditions as are or may be imposed lawfully upon other members of such board, and as are or may be imposed lawfully upon a cooperative association of producers engaged in cash commodity business, unless such board of trade is authorized by the commission to exclude such association or corporation from membership and privileges after hearing held upon at least three days’ notice subsequent to the filing of complaint by the board of trade.

Put differently, in the old days the efficient scale of farms was small relative to the efficient scale of midstream assets, so farmers had to cooperate in order to circumvent merchant monopsony power. Cooperation was hampered by incentive problems and the political nature of cooperative governance. (See Henry Hansmann’s Ownership of Enterprise for a nice discussion.) The dramatic increase in the efficient scale of farms now means (as the WSJ article shows) that many farmers have operations as large as the efficient scale of some midstream assets, so they can circumvent monopsony power through integration. This pressures merchants’ margins.

Jacob Bunge is to be congratulated for not imitating the laziness of most of those who have “reported” on the grain merchant blues, where by “reporting” I mean regurgitating the conventional wisdom that they picked up from some other lazy journalist.  He went out into the field–literally–and shed a good deal of light on what’s really going on.  And what’s going on is competition and entry, driven in large part by economic and technological forces that have increased the efficient scale of grain and oilseed production.  Thus, the grain handlers are in large part indirect victims of technological change, even though the technology of their business has remained static by comparison.

 


August 1, 2018

This Is My Shocked Face: Blockchain Hype Is Fading Fast

Filed under: Blockchain,Commodities,Cryptocurrency,Economics — cpirrong @ 7:02 pm

Imagine my great surprise at reading a Bloomberg piece titled: “Blockchain, Once Seen as a Corporate Cure-All, Suffers Slowdown.”

That was sarcasm, by the way. I’ve long and publicly expressed my skepticism that blockchain will have revolutionary effects, at least in the near to medium term. In my public speaking on the topic, I’ve explored the implications of three basic observations. First, that blockchain is basically a way of sharing/communicating information, which can in turn be put to various uses. Second, that there are alternative ways of sharing/communicating information, with different costs and benefits. And third, that it is necessary to distinguish between sharing information within an organization and between organizations.

Much of the hype about blockchain relates to the potential benefits of more efficient sharing and validation of information. But this does not address the issue of whether blockchain does this more efficiently than alternative means of sharing/communicating/validating. As in all institutional/technology issues, a comparison of alternatives is necessary. This comparison has been sadly lacking in public discussions of the potential for blockchain, beyond incantations about blockchain eliminating the need for trusted third parties, which is (a) often untrue (in part because trusted parties may be required to enter information into a blockchain), and (b) not necessarily a feature, because trusted third parties may be able to operate more efficiently than consensus-based systems employed on a blockchain.

The most developed implementation of blockchain (Bitcoin) involves a very large cost to solve a particular problem that (a) is unique to cryptocurrency, and (b) is not necessarily important in other contexts–namely, the double spend problem in crypto. Maybe blockchain is the best way to solve that particular problem (which itself begs the question of whether cryptocurrency is an efficient solution to any economic problem), but that doesn’t mean that it will be a more efficient way of solving the myriad types of opportunism, fraud, and deceit that plague other kinds of transactions. Double spend is not the alpha and omega of transactional challenges. Indeed, it might be one of the most trivial.

Thinking in Williamsonian transaction cost terms, where the transaction is the unit of analysis, transactions are highly diverse.  Different kinds of transactions are vulnerable to different kinds of information and opportunism problems, meaning that customized blockchain approaches are likely necessary.  One likely cause for the waning enthusiasm mentioned in the Bloomberg article is that people are coming to the recognition that customization is not easy, and it may not be worth the candle, compared to other ways of addressing the same issues.  Relatedly, customization makes it harder to exploit scale economies, and recognition of this is likely to be making initially enthusiastic commercial users less keen on the idea: that is, it may be possible to use blockchain in many settings, but it may not be cost-effective to do so.

The siloed vs. cooperative divide is also likely to be extremely important, and the Bloomberg article mentions that issue a couple of times.  The blockchain initiatives that do seem to have been implemented, at least to some degree, as with Maersk in container shipping or Cargill with turkeys, are intra-firm endeavors that do not require coordination and cooperation across firms, and can exploit the governance structure that a firm has in place.  Many of the other proposed uses–for instance, in trade finance, or in commodity trading, both of which require myriad parties in a single transaction to communicate information among one another–are inherently multilateral.

This creates all sorts of challenges.  How can commercial rivals cooperate?  How are the gains from cooperation divided?–this is a problem even when participants supply complementary services, such as a trading firm, banks providing trade finance, and the buyer and seller of a commodity.  As oil unitization has shown, battles over dividing the gains from cooperation can dissipate much of those gains.  Who gets to see what information?  Who makes the rules?  How?  How are they enforced? What is the governance structure?  How is free riding prevented?  Who pays?

Ironically, where the gains from cooperation are seemingly biggest–where there are large numbers of potential participants–is exactly where the problems of coordination, negotiation, and agreement are likely to be most daunting.

I’ve drawn the analogy between these cooperative blockchain endeavors and commodity exchanges, which (as I showed in a 1995 JLS paper) were formed primarily as ways to reduce transactions costs via cooperative rule making and enforcement. The old paper shows that exchanges faced serious obstacles in achieving the gains from cooperation, and often failed to do so. Don’t expect blockchain to be any different, especially given the greater complexity of the transactional problems that it is being proposed as a fix for.

Thus, I am not surprised to read things like this:

“The expectation was we’d quickly find use cases,” Magnus Haglind, Nasdaq’s senior vice president and head of product management for market technology, said in an interview. “But introducing new technologies requires broad collaboration with industry participants, and it all takes time.”

or this:

Most blockchains also can’t yet handle a large volume of transactions — a must-have for major corporations. And they only shine in certain types of use cases, typically where companies collaborate on projects. But because different businesses have to share the same blockchain, it can be a challenge to agree on technology and how to adopt it.

One of my favorite illustrations of the hype outstripping the reality is the endeavor launched with much fanfare in the cotton market, where IBM and The Seam announced an endeavor to use the blockchain to revolutionize the cotton supply chain.   It’s been almost two years, and after the initial press releases, it’s devilish hard to find any mention of the project, let alone any indication that it will go into operation anytime soon.

Read the Bloomberg article and you’ll have a better understanding of R3’s announcement of an IPO–and that they might have missed their opportunity.

In 2017 and a little before, Blockchain was a brand new shiny hammer.  People have been looking everywhere for nails to pound with it, and spending a lot of money in the effort.  But they’re finding that many transactional problems aren’t nails, that there are other hammers that might do the job better, and there are other problems that require many parties to agree on just how the hammer is to be used and by whom.  Given this, it is not surprising that the euphoria is fading fast.  The main question that remains is in what shrunken domain will blockchain actually be employed, and when.  My guess is that the domain will be relatively small, and the time until employment will be pretty long.


