Streetwise Professor

October 18, 2018

Ticked Off About Spoofing? Consider This

Filed under: Commodities,Derivatives,Economics,Exchanges,Politics,Regulation — cpirrong @ 6:51 pm

An email from a legal academic in response to yesterday’s post spurred a few additional thoughts re spoofing.

One of my theories of spoofing is that it is a way to improve one's position in the queue at the best bid or offer.  Why does one stand in a queue?  Why does one want to be closer to the front?

Simple: because there is a rent there to capture.  Where does the rent come from?  When what you are queuing for is underpriced, likely due to some price control.  Think of gas lines, or queues for sausage in the USSR.

In market making, the rent exists because the benefit from executing at the bid or offer exceeds the cost.  The cost arises from (a) adverse selection costs, and (b) inventory cost/risk and other costs of participation.  What is the source of the price control?  The tick size.

Exchanges set a minimum price increment–the "tick."  When the tick size exceeds the costs of making a market, there is a rent.  This makes it beneficial to increase the probability of execution of an at-the-market limit order, i.e., if the tick size exceeds the cost of executing a passive order, it pays to game the system to move up in the queue.  Spoofing is one way of gaming.
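
To make the arithmetic concrete, here is a minimal sketch of the rent and of the value of moving up the queue.  Every number in it is hypothetical, chosen only for illustration:

```python
# Hypothetical numbers, for illustration only -- not calibrated to any market.
tick = 0.25 * 50           # e.g., ES: 0.25 index points x $50/point = $12.50/contract
adverse_selection = 7.00   # assumed cost of being picked off, $/contract
other_costs = 2.00         # assumed inventory/participation costs, $/contract

rent_per_fill = tick - adverse_selection - other_costs   # $3.50 rent per passive execution

# Gaming that moves a resting order up the queue raises its probability of filling.
p_fill_back_of_queue = 0.30    # assumed
p_fill_front_of_queue = 0.55   # assumed
value_of_queue_jump = (p_fill_front_of_queue - p_fill_back_of_queue) * rent_per_fill
print(f"rent per fill: ${rent_per_fill:.2f}; value of moving up: ${value_of_queue_jump:.2f}")
```

When the tick falls below the cost of making a market, the rent disappears, and with it the incentive to game the queue.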

This has a variety of implications.

One implication is in the cross section: spoofing should be more prevalent when the non-adverse selection component of the spread (which is measured by temporary price movements in response to trades) is large.  Relatedly, this implies that spoofing should be more likely, the more negatively autocorrelated are transaction prices, i.e., the bigger the bid-ask bounce.
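
The bid-ask bounce component is measurable.  A minimal sketch using the classic Roll (1984) estimator, which backs an effective spread out of the negative first-order autocovariance of transaction price changes (toy data, not real trades):

```python
import numpy as np

def roll_spread(prices):
    """Roll (1984): effective spread = 2*sqrt(-cov(dp_t, dp_{t-1}))."""
    dp = np.diff(prices)
    autocov = np.cov(dp[1:], dp[:-1])[0, 1]
    return 2 * np.sqrt(-autocov) if autocov < 0 else 0.0

# Toy transaction series: constant mid-price plus pure bid-ask bounce of half a tick.
rng = np.random.default_rng(42)
mid, half_spread = 100.0, 0.125
trades = mid + half_spread * rng.choice([-1.0, 1.0], size=10_000)  # buys at ask, sells at bid
print(roll_spread(trades))  # ~0.25: the bounce maps directly into the spread estimate
```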

Another implication is in the time series.  Adverse selection costs can vary over time.  Spoofing should be more prevalent during periods when adverse selection costs are low.  These should also be periods of unusually large negative autocorrelations in transaction prices.

Another implication is that if you want to reduce spoofing  . . .  reduce the tick size.  Given what I just discussed, tick size reductions should be focused on instruments with a bigger bid-ask bounce, i.e., a larger non-adverse-selection-driven spread component.

That is, why police the markets and throw people in jail?  Mitigate the problem by reducing the incentive to commit the offense.

This story also has implications for the political economy of spoofing prosecution (which was the main thrust of the email I received).  HFT/algo traders who desire to capture the rent created by a tick size that exceeds adverse selection costs should complain the loudest about spoofing–and are most likely to drop the dime on spoofers.  Casual empiricism supports at least the first of these predictions.

That is, as my correspondent suggested to me, not only are spoofing prosecutions driven by ambitious prosecutors looking for easy and unsympathetic targets, they generate political support from potentially politically influential firms.

One way to test this theory would be to cut tick sizes–and see who squeals the loudest.  Three guesses as to whom this might be, and the first two don’t count.

October 17, 2018

The Harm of a Spoof: $60 Million? More Like $10 Thousand

Filed under: Commodities,Derivatives,Economics,Exchanges,Regulation — cpirrong @ 4:08 pm

My eyes popped out when I read this statement regarding the DOJ’s recent criminal indictment (which resulted in some guilty pleas) for spoofing in the S&P 500 futures market:

Market participants that traded futures contracts in these three markets while the spoof orders distorted market prices incurred market losses of over $60 million.

$60 million in market losses–big number! For spoofing! How did they come up with that?

The answer is embarrassing, and actually rather disgusting.

The DOJ simply calculated the notional value of the contracts that were traded pursuant to the alleged spoofing scheme.  They took the S&P 500 futures price (e.g., 1804.50), multiplied that by the dollar value of a price point ($50), and multiplied that by the “approximate number of fraudulent orders placed” (e.g., 400).

So the defendants traded futures contracts with a notional value of approximately $60+ million.  For the DOJ to say that anyone “incurred market losses of over $60 million” based on this calculation is complete and utter bollocks.  Indeed, if someone touted that their trading system earned market profits of $60 million based on such a calculation in order to get business from the gullible, I daresay the DOJ and SEC would prosecute them for fraud.

This exaggeration is of a piece with the Sarao indictment, which claimed that his spoofing caused the Flash Crash.

And of course the financial press credulously regurgitated the number the DOJ put out.

I know why DOJ does this–it makes the crime look big and important, and likely matters in sentencing.  But quite frankly, it is a lie to claim that this number accurately represents in any way, shape, or form the economic harm caused by spoofing.

This gets to the entire issue of who is damaged by spoofing, and how.  Does spoofing induce someone who would otherwise not have entered an aggressive order to cross the spread and incur the bid/ask spread?  Does it cause someone to cancel a limit order, and therefore lose the opportunity to trade against an aggressive order and thereby earn the spread (the realized spread, not the quoted spread, in order to account for losses to better-informed traders)?

Those are realistic theories of harm, and they imply that the economic harm per contract is on the order of a tick in a liquid market like the ES.  That is, per contract executed as a result of the spoof, the damage is .25 (the tick size) times $50 (the value of an S&P point)–a whopping $12.50.  So, pace the DOJ, the ~800 “fraudulent orders placed” caused economic harm of about 10,000 bucks, not 60 mil.  Maybe $20,000, under the theory that in a particular spoof, someone lost from crossing the spread, and someone else lost out on the opportunity to earn the spread.  (Though interestingly, from a social perspective, that is a transfer, not a true loss.)
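
The two calculations, side by side (prices and order counts as quoted above, and approximate):

```python
# Inputs from the indictment as quoted above: an S&P 500 futures price of
# 1804.50, $50 per index point, and roughly 800 "fraudulent orders".
price, point_value, orders = 1804.50, 50, 800

doj_notional = price * point_value * orders   # ~$72m: the ballpark of DOJ's "$60+ million"
harm_per_contract = 0.25 * point_value        # one tick = $12.50
tick_harm = harm_per_contract * orders        # ~$10,000

print(f"DOJ 'market losses' (really just notional): ${doj_notional:,.0f}")
print(f"tick-based economic harm:                   ${tick_harm:,.0f}")
```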

But $10,000 or $20,000 looks rather pathetic, compared to say $60 million, doesn’t it?  What’s three orders of magnitude between friends, eh?

Yes, maybe the DOJ just included a few episodes in the indictment, because that is sufficient for a criminal prosecution and conviction.  But even a lot more of such episodes does not add up to a lot of money.

This is precisely why I find the expenditure of substantial resources to prosecute spoofing to be so dubious.  There is other financial market wrongdoing that is far more harmful, which often escapes prosecution.  Furthermore, efficient punishment should be sized to the harm.  People pay huge fines, and go to jail–for years–for spoofing.  That punishment is hugely disproportionate to the loss, under the theory of harm that I advance here.  So spoofing is over-deterred.

Perhaps there are other theories of harm that justify the severe punishments for spoofing.  If so, I’d like to hear them–I haven’t yet.

These spoofing prosecutions appear to be a case of the drunk looking for his wallet (or a scalp) under the lamppost, because the light is better there.  In the electronic trading era, spoofing is possible–and relatively cheap to detect ex post.  So just trawl through the trading data for evidence of spoofing, and voila!–a criminal prosecution is likely to appear.  A lot easier than prosecuting market power manipulations that can cause nine and ten figure market losses.  (For an example of the DOJ’s haplessness in a prosecution of that kind of case, see US v. Radley.)

Spoofing is the kind of activity that is well within the competence of exchanges to detect and punish using their ordinary disciplinary procedures.  There’s no need to make a federal case out of it–literally.

The time should fit the crime.  The Department of Justice wildly exaggerates the crime of spoofing in order to rationalize the time.  This is inefficient, and well, just plain unjust.

September 24, 2018

The SEC Commissioner’s Just So Story That Just Ain’t So

Filed under: Derivatives,Economics,Exchanges,Regulation — cpirrong @ 7:06 pm

SEC Commissioner Robert J. Jackson is getting a lot of attention for a policy speech he gave at George Mason University last week.  Alas, Commissioner Jackson betrays only a dim understanding of current stock markets and stock market history.  Indeed, perhaps the best summary of his speech would be the Artemus Ward quip: “It ain’t so much the things we don’t know that get us into trouble, it’s the things we do know that just ain’t so.”

Mr. Jackson has a just-so story that, well, just ain’t so.  In his story, once upon a time US stock markets were faithful guardians of the public interest.  Then, the SEC let them become for-profit firms, and it all went wrong:

Given power and a profit motive, even the most storied institutions will do what they must to maximize their wealth. And nowhere has this been more true than in our stock markets.

For over a century, exchanges were collectively owned not-for-profits, overseeing and organizing trading in America’s best-known companies. But about a decade ago, exchanges became private corporations, designed—perhaps even obligated—to maximize profits. Yet we at the SEC have far too often continued to treat the exchanges with the same kid gloves we applied to their not-for-profit ancestors. The result is that, even while one of our fundamental mandates is to encourage competition, the SEC has stood on the sidelines while enormous market power has become concentrated in just a few players. That’s a key reason why among our 13 public stock exchanges, 12 are owned by just three corporations. And that’s how the stock exchanges that are a symbol of American capitalism have developed puzzling practices that look nothing like the competitive marketplaces investors deserve.

. . .

First, one might wonder how our stock markets got here. The answer is that stock exchanges have been better at extracting rents than regulators have been at stopping them. As you all know, in 1934, the Nation struck a bargain with our stock exchanges: the Commission was created to oversee the markets, and in turn the exchanges were given wide latitude in organizing their affairs. For generations, this system served investors well. But then the world changed, and the SEC allowed exchanges to become for-profit corporations with both regulatory and profit-seeking mandates.

At the time, the Commission didn’t sufficiently contemplate the effects that decision might have; we simply said that we saw no reason to think that exchanges couldn’t play the role of regulator and pursue profit at the same time. Maybe we were wrong. Whatever one thinks about the benefits or drawbacks of those events, we should all agree that for-profit companies can be counted on to do one thing: pursue profit. And in for-profit hands, SEC oversight designed for not-for-profit exchanges can be dangerous.

Where to begin?

Well, I guess I should begin by saying for probably the billionth time (here’s one of them) that stock markets were not non-profits out of some charitable motive, or to ensure that they acted in the public interest by self-regulating markets free of conflict of interest and mercenary motive.  In fact, stock exchanges (and derivatives exchanges) adopted the not-for-profit form to protect the rents of their members.  Furthermore, the exchanges self-regulated in ways that maximized the profits of their members: it is beyond a joke to say that exchanges are better at extracting rents today than during the halcyon non-profit years.  Non-profit exchanges just extracted rents in different ways, and the rents did not flow through the exchange coffers.  These different ways included naked collusion–which the SEC tolerated for years, kid gloves indeed!–as well as entry restrictions (the number of members remaining fixed since the 19th century) and various rules advantaging intermediaries (especially specialists, but also brokers).

As for conflicts of interest–they were rife in Commissioner Jackson’s good old days.  The exchanges, as agents for their intermediary member-owners, had structural conflicts with the investing public.

Mr. Jackson argues that “modern exchanges tax ordinary investors.”  The implicit claim is that old time exchanges didn’t.  Ha! They just did it in different ways, and arguably levied far greater taxes then than now.

Why were the taxes arguably greater then?  The answer relates to another fundamental error in Jackson’s just so story: “enormous market power has become concentrated in just a few players. That’s a key reason why among our 13 public stock exchanges, 12 are owned by just three corporations.”  Er, prior to RegNMS–adopted a little over a decade ago–and indeed for the entire life of the SEC before that, and even before the SEC was formed, the NYSE had a far more dominant position than any exchange does today.  Due to network effects, it basically had a lock on order flow for its listings.  Its market share was routinely above 85 percent, and that other 15 percent was basically cream-skimming competition that the SEC only grudgingly accepted.

Again, the NYSE did not capture rents from this market power by charging higher prices and passing the revenues through to owners in the form of dividends.  But through broker cartels, and after the SEC finally bestirred itself to end the broker cartels, through entry limits and rules that advantaged members, it permitted its members to earn rents by charging higher prices for their services.

Indeed, the great benefit of RegNMS is that it undermined the liquidity network effect that largely immunized the NYSE against competition, and unleashed competition for order flow unprecedented in the history of US stock markets–or stock markets anywhere, for that matter.  Three (granting arguendo that 3 rather than 13 is the right number) is a helluva lot more competitive than one.

But Commissioner Jackson cannot see the glass is at least 90 percent full: he frets over the 10 percent (or less) that is empty.  He laments “fragmentation.”

Yes.  As I have written, the “fragmentation” (aka “competition”) that has occurred post-RegNMS has its costs–some of which are the result of problematic features in RegNMS.  Others are inherent in any multi-market system.  Fragmentation creates arbitrage opportunities that some participants capture through spending real resources: this is probably socially wasteful.  Commissioner Jackson notes that these opportunities exist in part due to the lack of incentive of exchanges to invest in the public data feed: well, I’ve noted this public goods problem in the past (note the date–almost 5 years ago).  Yes, some have information advantages due now mainly to speed: well, back in the day, people on the floor had information advantages–and speed advantages–due to their proximity to where price discovery was taking place.  Take it as a law: there will always be a class of traders with information, access and speed advantages over the hoi polloi.

Some of these problems could be remedied by better regulation.  But despite the deficiencies of RegNMS, there is no doubt that it made US equity markets far more competitive, and that this has redounded to the benefit of ordinary investors–and pretty much the entire buy side, including institutions.  RegNMS dramatically reduced the “tax” that stock markets levied on investors, not increased it as Mr. Jackson apparently believes.

Commissioner Jackson questions whether the limited exposure to lawsuits that exchanges currently enjoy is justified.  That is a legitimate question, but Mr. Jackson’s motivation for asking it is completely off-base.  His fixation on for-profit again shines through: “Finally, we should take a hard look at whether it makes sense to allow for-profit exchanges to write the rules of the game for their customers and competitors while also enjoying immunity from civil liability.”  Mr. Jackson: it is equally questionable whether it makes sense “to allow non-profit exchanges to write the rules of the game for their customers and competitors while also enjoying immunity from civil liability.”

Commissioner Jackson also questions pricing practices: “Finally, SEC and FINRA rules for best execution have clearly left open opportunities for conflicts of interest that hurt investors. The reason is that exchanges offer controversial payments—they call them rebates—to brokers based on the volume of customer orders that broker sends to that exchange.”  This is a form of price competition.  Yes, there are agency issues involved here, but if anything these rebates reduce the rents that exchanges earn that exercise Commissioner Jackson so greatly.  Perhaps brokers don’t pass 100 percent of the rebates to their customers–but this is a distributive issue not an efficiency one, and competition between brokers mitigates this problem.

Perhaps in the category of “rebates” Commissioner Jackson is including maker-taker payments. But the interpretation of these payments–and the more prosaic order flow incentives Mr. Jackson describes–is greatly complicated by the fact that exchanges are multi-sided platforms.  It is well-known that the pricing policies of multi-sided platforms often involve cross-subsidies among customer groups (e.g., liquidity suppliers and liquidity demanders), and that these pricing strategies can be economically efficient.
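
For concreteness, a stylized maker-taker schedule, with hypothetical per-share numbers:

```python
# Hypothetical maker-taker schedule: the venue charges the aggressive side,
# rebates the passive side, and keeps the difference.
taker_fee = 0.0030      # $/share charged to the liquidity demander (assumed)
maker_rebate = 0.0025   # $/share paid to the liquidity supplier (assumed)
volume = 1_000_000      # shares

venue_net = (taker_fee - maker_rebate) * volume   # $500 retained by the exchange
print(f"venue net revenue: ${venue_net:,.2f}")
# The rebate is a cross-subsidy to one side of the platform; competing it away
# is price competition, whatever the agency issues at the broker level.
```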

US securities market structure could certainly be improved.  But reasonable improvements must be grounded in a reasonable understanding of the economics of exchanges.  Alas, one individual responsible for improving market structure is clearly operating from a seriously defective understanding. Commissioner Jackson’s bugbear–for-profit exchanges–has, to a first approximation, nothing to do with whatever ails US markets.  He pines for an era that not only never existed, but which was in reality worse on almost every dimension that he criticizes modern markets for–competition, rent seeking, and conflicts of interest.

The SEC actually performed a public service–something not to be taken for granted for a public agency!–by breaking the liquidity network effect and opening stock markets to competition through the adoption of RegNMS.  Tweak RegNMS to improve market performance, Commissioner Jackson, rather than advocating proposals based on just so stories that just ain’t–and weren’t–so.

September 20, 2018

The Smoke is Starting to Clear from the Aas/Nasdaq Blowup

Filed under: Clearing,Commodities,Derivatives,Economics,Energy,Exchanges,Regulation — cpirrong @ 11:08 am

Amir Khwaja of Clarus has a very informative post about the Nasdaq electricity blow-up.

The most important point: Nasdaq uses SPAN to calculate IM.  SPAN was a major innovation back in the day, but it is VERY long in the tooth now (2018 is its 30th birthday!).  Moreover, the most problematic part of SPAN is the ad hoc way it handles dependence risk:

  • Intra-commodity spreading parameters – rates and rules for evaluating risk among portfolios of closely related products, for example products with particular patterns of calendar spreads
  • Inter-commodity spreading parameters – rates and rules for evaluating risk offsets between related products

. . .

CME SPAN Methodology Combined Commodity Evaluations

The CME SPAN methodology divides the instruments in each portfolio into groupings called combined commodities. Each combined commodity represents all instruments on the same ultimate underlying – for example, all futures and all options ultimately related to the S&P 500 index.

For each combined commodity in the portfolio, the CME SPAN methodology evaluates the risk factors described above, and then takes the sum of the scan risk, the intra-commodity spread charge, and the delivery risk, before subtracting the inter-commodity spread credit. The CME SPAN methodology next compares the resulting value with the short option minimum; whichever value is larger is called the CME SPAN methodology risk requirement. The resulting values across the portfolio are then converted to a common currency and summed to yield the total risk for the portfolio.

I would not be surprised if the handling of Nordic-German spread risk was woefully inadequate to capture the true risk exposure.  Electricity spreads are strange beasts, and “rules for evaluating risk offsets” are unlikely to capture this strangeness correctly, especially given that electricity markets have idiosyncrasies that one-size-fits-all rules miss.  I also conjecture that Aas knew this, and loaded the boat with this spread trade because he knew that the risk was grossly underpriced.
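
A minimal sketch of the aggregation logic in the SPAN description quoted above, with made-up numbers chosen to show how a generous inter-commodity credit can gut the margin on a spread position:

```python
# Toy SPAN-style evaluation for one combined commodity; all inputs hypothetical.
# Real SPAN computes scan risk from an array of price/volatility scenarios and
# takes the spread charges/credits from exchange-set parameter files.
def risk_requirement(scan_risk, intra_spread_charge, delivery_risk,
                     inter_spread_credit, short_option_minimum):
    gross = scan_risk + intra_spread_charge + delivery_risk - inter_spread_credit
    return max(gross, short_option_minimum)

# A long-Nordic/short-German position margined with a generous offset:
req = risk_requirement(scan_risk=900_000, intra_spread_charge=50_000,
                       delivery_risk=0, inter_spread_credit=700_000,
                       short_option_minimum=100_000)
print(f"margin: {req:,}")  # 250,000 -- a fraction of the outright scan risk
```

If the true dependence between the legs is weaker, or less linear, than the credit assumes, the position is undermargined–exactly the adverse selection opportunity conjectured above.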

There are reports that the Nasdaq margin breach at the time of default (based on mark-to-market prices) was not nearly as large as the €140 million hit to the default fund.  In these accounts, the bulk of the hit was due to the fact that the price at which Aas’ portfolio was auctioned off included a substantial haircut to prevailing market prices.

Back in the day, I argued that one of the real advantages to central clearing was a more orderly handling of defaulted portfolios than the devil-take-the-hindmost process in OTC bilateral markets (cf. the outcome of the LTCM disaster almost exactly 20 years ago, with the Fed-midwifed deal being completed on 23 September 1998).  (Ironically, spread trades were the cause of LTCM’s demise too.)

But the devil is in the details of the auction, and in market conditions at the time of the default–which are almost certainly unsettled, hence the default.  The CME was criticized for its auction of the defaulted Lehman positions: the bankruptcy trustee argued that the price CME obtained was too low, thereby harming the creditors.   The sell-off of the Amaranth NG positions in September, 2006 (what is it about September?!?) to JP Morgan and Citadel (if memory serves) was also at a huge discount.

Nasdaq has been criticized for allowing only 4 firms to bid: narrow participation was also the criticism leveled at CME and NYMEX clearing in the Lehman and Amaranth episodes, respectively.  Nasdaq argues that telling the world could have sparked panic.

But this episode, like Lehman and Amaranth before it, demonstrates the challenges of auctioning big positions.  Only a small number of market participants are likely to have the capital, or the risk appetite, to take on a big defaulted position in its entirety.  Thus, limited participation is almost inevitable, and even if Nasdaq had invited more bidders, there is room to doubt whether the fifth or sixth or seventh bidder would have been able to compete seriously with the four who actually participated.  Those who have the capital and risk appetite to bid seriously for big positions will almost certainly demand a big discount to compensate for the risk of holding the position until they can work it off.  Moreover, limited participation limits competition, which should exacerbate the underpricing problem.

Thus, even with a structured auction process, disposing of a big defaulted portfolio is almost inevitably something of a fire sale.  This is a risk borne by the participants in the default fund.  Although the exposure via the default fund is sometimes argued to be an incentive for the default fund participants to bid aggressively, this is unlikely because there are externalities: the aggressive bidder bears all the risks and costs, and provides benefits to the other members.  Free riding is a big problem.

In theory, equitizing the risk might improve outcomes.  By selling shares in the defaulted portfolio, no single bidder (or pair of bidders) would have to absorb the entire position, and risk could be spread more efficiently: this could reduce the risk discount in the price.  But who would manage the portfolio?  What are the mechanics of contributing to IM and VM?  Would it be like a bad bank, existing as a zombie until the positions rolled off?

Another follow-up from my previous post relates to the issue of self-clearing.  On Twitter and elsewhere, some have suggested that clearing through a third party would have been an additional check.  Surely an FCM would be less likely to fall in love with a position than the trader who puts it on, but the effectiveness of the FCM as a check depends on its evaluation of risk, and it may be no smarter than the CCP that sets margins.  Furthermore, there are examples of FCMs having the same trade in their house account as one of their big customers–perhaps because they think the client is really smart and they want to free ride off his genius.  As a historical example, Griffin Trading had a big trade in the same instrument and direction as its biggest client.  The trade went pear-shaped, the client defaulted, and Griffin did too.

I also need to look to see whether Nasdaq Commodities uses the US futures clearing model, which does not segregate positions.  If it does, and if Aas had cleared through an FCM, it is possible that the FCM’s clients could have lost money as a result of his default.  This model has fellow-customer risk: by clearing for himself, Aas did not create such a risk.

I also note that the desire to expand clearing post-Crisis has made it difficult and more costly for firms to find FCMs.  This problem has been exacerbated by the Supplementary Leverage Ratio.  Perhaps the cost of clearing through an FCM appeared excessive to Aas, relative to the alternative of self-clearing.  Thus, if regulators blanch at the thought of self-clearing (not saying that they should), they should get serious about addressing the FCM cost issue, and regulations that inflate these costs but generate little offsetting benefit.

Again, this episode should spark (no pun intended!) a more thorough reconsideration of clearing generally.  The inherent limitations of margin models, especially for more complex products or markets.  The adverse selection problems that crude risk models can create.  The challenges of auctioning defaulted portfolios, and the likelihood that the auctions will become fire sales.  The FCM capacity issue.

The supersizing of clearing in the post-Crisis world has also supersized all of these concerns.  The Aas blowup demonstrates all of them.  Will CCPs and regulators take heed? Or will some future September bring us the mother of all blowups?

September 18, 2018

He Blowed Up Real Good. And Inflicted Some Collateral Damage to Boot

I’m on my way back from my annual teaching sojourn in Geneva, plus a day in the Netherlands for a speaking engagement.  While I was taking that European not-quite-vacation, a Norwegian power trader, Einar Aas, suffered a massive loss in cleared spread trades between Nordic and German electricity.  The loss was so large that it blew through Aas’ initial margin and default fund contribution to the clearinghouse (Nasdaq), consumed Nasdaq’s €7 million capital contribution to the default fund, and €107 million of the rest of the default fund–a mere 66 percent of the fund.  The members have been ordered to contribute €100 million to top up the fund.
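
The loss allocation just described is a standard default waterfall.  A sketch using the reported figures, with the undisclosed layers (Aas' initial margin and default fund contribution) filled in with assumed numbers:

```python
# Default waterfall: each layer absorbs losses until it is exhausted.
def allocate(loss, layers):
    remaining = loss
    for name, size in layers:
        hit = min(remaining, size)
        remaining -= hit
        print(f"{name:<45} {hit:>12,.0f}")
    print(f"{'uncovered':<45} {remaining:>12,.0f}")

layers = [
    ("Aas initial margin (assumed)",               35_000_000),
    ("Aas default fund contribution (assumed)",     6_000_000),
    ("Nasdaq capital contribution (reported)",      7_000_000),
    ("mutualized default fund (~EUR 162m implied)", 162_000_000),
]
allocate(155_000_000, layers)  # consumes EUR 107m of the fund, as reported
```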

This was bound to happen. In a way, it was good that it happened in a relatively small market.  But it provides a sobering demonstration of what I’ve said for years: clearing doesn’t eliminate losses, but affects the distribution of losses.  Further, financial institutions that back CCPs–the members–are the ultimate backstops.  Thus, clearing does not eliminate contagion or interconnections in the financial network: it just changes the topology of the network, and the channels by which losses can hit the balance sheets of big players.

Happening in the Nordic/European power markets, this is an interesting curiosity.  If it happens in the interest rate or equity markets, it could be a disaster.

We actually know very little about what happened, beyond the broad details.  We know Aas was long Nordic power and short German power, and that the spread widened due to wet weather in Norway (which depresses the price of hydro and reduces demand) and an increase in European prices due to increases in CO2 prices.  But Nasdaq trades daily, weekly, monthly, quarterly, and annual power products: we don’t know which blew up Aas.  Daily spreads are more volatile, and exhibit more extremes (kurtosis), but since margins are scaled to risk (at least theoretically–more on this below) what matters is the market move relative to the estimated risk.  Reports indicate that the spread moved 17x the typical move, but we don’t know what measure of “typical” is used here.  Standard deviation?  Not a very good measure when there is a lot of kurtosis (or skewness).

I also haven’t seen how big Aas’ initial margins were.  The total loss he suffered was bigger than the hit taken by the default fund, because under the loser-pays model, the initial margins would have been in the first loss position.

The big question in my mind relates to Nasdaq’s margin model.  Power price distributions deviate substantially from the Gaussian, and estimating those distributions is challenging in part because they are also conditional on day of the year and hour of the day, and on fundamental supply-demand conditions: one model doesn’t fit every day, every hour, every season, or every weather environment.  Moreover, a spread trade has correlation risk–dependence risk would be a better word, given that correlation is a linear measure of dependence and dependencies in power prices are not linear.  How did Nasdaq model this dependence and how did that impact margins?
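
To illustrate why linear correlation is the wrong yardstick, here is a toy comparison (not Nasdaq's model) of two bivariate distributions with the same correlation, one Gaussian and one fat-tailed with tail dependence:

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho, df = 1_000_000, 0.7, 3
cov = [[1.0, rho], [rho, 1.0]]

# Bivariate Gaussian with correlation 0.7.
g = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Bivariate Student-t with the same correlation matrix: Gaussian / sqrt(chi2/df).
chi = np.sqrt(rng.chisquare(df, size=n) / df)
t = g / chi[:, None]

def joint_tail_prob(x, q=0.01):
    # Probability that both legs land in their own worst-q tail simultaneously.
    thresh = np.quantile(x, q, axis=0)
    return np.mean((x[:, 0] < thresh[0]) & (x[:, 1] < thresh[1]))

print(f"Gaussian joint 1% tail: {joint_tail_prob(g):.4f}")
print(f"t(3) joint 1% tail:     {joint_tail_prob(t):.4f}")  # noticeably larger
```

Two models with identical linear correlation can thus imply very different probabilities that both legs of a spread blow out together, which is what kills a margin model.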

One possibility is that Nasdaq’s risk/margin model was good, but this was just one of those things.  Margins are set on the basis of the tails, and tail events occur with some probability.

Given the nature of the tails in power prices (and spreads), reliance on a VaR-type model would be especially dangerous here.  Setting margin based on something like expected shortfall would likely be superior.  Which model does Nasdaq use?
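
To see why the choice of risk measure matters, a minimal comparison on a fat-tailed P&L, using a Student-t stand-in rather than any real spread distribution or Nasdaq's actual model:

```python
import numpy as np

rng = np.random.default_rng(7)
# Heavy-tailed daily P&L in dollars (t with 3 degrees of freedom, assumed scale).
pnl = 100_000 * rng.standard_t(df=3, size=1_000_000)
losses = -pnl

var_99 = np.quantile(losses, 0.99)          # minimum loss on the worst 1% of days
es_99 = losses[losses >= var_99].mean()     # average loss across those days

print(f"99% VaR: ${var_99:,.0f}")
print(f"99% ES:  ${es_99:,.0f}")  # materially larger: ES prices the tail VaR ignores
```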

I can also see the possibility that Nasdaq’s margin model was faulty, and that Aas had figured this out.  He then put on trades that he knew were undermargined because Nasdaq’s model was defective, which allowed him to take on more risk than Nasdaq intended.

In my early work on clearing I indicated that this adverse selection problem was a concern in clearing, and would lead CCPs–and those who believe that CCPs make the financial system safer–to underestimate risk and be falsely complacent.  Indeed, I argued that one reason clearing could be a bad idea is that it was more vulnerable to adverse selection problems because the need to model the distribution of gains/losses on cleared positions requires detailed knowledge, especially for more exotic products.  Traders who specialize in these products are likely to have MUCH better understanding of risks than a non-specialist CCP.

Aas cleared for himself, and this has caused some to get the vapors and conclude that Nasdaq was negligent in allowing him to do so.  Self-clearing is just an FCM with a house account, but with no client business: in some respects that’s less risky than a traditional FCM with client business as well as its own trading book.

Nasdaq required Aas to have €70 million in capital to self-clear.  Presumably Nasdaq will get some of that capital in an insolvency proceeding, and use it to repay default fund members–meaning that the €114 million loss is likely an overestimate of the ultimate cost borne by Nasdaq and the clearing members.

Further, that’s probably similar to the amount of capital that an FCM would have needed to carry a client position as big as Aas’.  That’s not inherently more risky (to the clearinghouse and its default fund) than if Aas had cleared through another firm (or firms).  Again, the issue is whether Nasdaq is assessing risks accurately so as to allow it to set clearing member capital appropriately.

But the point is that Aas had to have skin in the game to self-clear, just as an FCM would have had to clear for him.

Holding Aas’ positions constant, whether he cleared himself or through an FCM really only affected the distribution of losses, but not the magnitude.  If Aas had cleared through someone else, that someone else’s capital would have taken the hit, and the default fund would have been at risk only if that FCM had defaulted.  But the total loss suffered by FCMs would have been exactly the same, just distributed more unevenly.

Indeed, the more even distribution that occurred due to mutualization, which spread the default loss among multiple FCMs, might actually be preferable to having one FCM bear the brunt.

The real issue here is incentives.  My statement was that holding Aas’ positions constant, who he cleared through or whether he cleared at all affected only the distribution of losses.  Perhaps under different structures Aas might not have been able to take on this much risk.  But that’s an open question.

If he had cleared through another FCM, that FCM would have had an incentive to limit its positions because its capital was at risk.  But Aas’ capital was at risk–he had skin in the game too, and this was necessary for him to self-clear.  It’s by no means obvious that an FCM would have arrived at a different conclusion than Aas, and decided that his position represented a reasonable risk to its capital.

Here again a key issue is information asymmetry: would the FCM know more about the risk of Aas’ position, or less?  Given Aas’ allegedly obsessive behavior, and his long-time success as a trader, I’m pretty sure that Aas knew more about the risk than any FCM would have, and that requiring him to clear through another firm would not have necessarily constrained his position.  He would have also had an incentive to put his business at the dumbest FCM.

Another incentive issue is Nasdaq’s skin in the game–an issue that has exercised FCMs generally, not just on Nasdaq.  The exchange’s/CCP’s relatively thin contribution to the default fund arguably reduces its incentive to get its margin model right.  Evaluating whether Nasdaq’s relatively minor exposure to default risk led it to undermargin requires a more thorough analysis of its margin model–a complex exercise, and one that is impossible given what we know about the model.

But this all brings me back to themes I flogged to the collective shrug of many–indeed almost all–of the regulatory and legislative community back in the aftermath of the Crisis, when clearing was the silver bullet for future crises.   Clearing is all about the allocation and pricing of counterparty credit risk.  Evaluation of counterparty credit risk in a derivatives context requires a detailed understanding of the price risks of the cleared products, and dependencies between these price risks and the balance sheet risks of participants in cleared markets.  Classic information problems–adverse selection and moral hazard (too little skin in the game)–make risk sharing costly, and can lead to the mispricing of risk.

The forensics about Aas blowing up real good, and the lessons learned from that experience, should focus on those issues.  Alas, I see little recognition of that in the media coverage of the episode, and betting on form, I would wager that the same is true of regulators as well.

The Aas blow up should be a salutary lesson in how clearing really works, what it can do, and what it can’t.   Cynic that I am, I’m guessing that it won’t be.  And if I’m right, the next time could be far, far worse.

July 16, 2018

Oil Spreads Go Non-Linear (Due to Infrastructure Constraints), To the Chagrin of Many Traders: The Pirrong Commodity Catechism in Action

Filed under: Commodities,Economics,Energy,Exchanges — cpirrong @ 3:59 pm

When I wrote about the demise of GEM Trading a few weeks ago, I hypothesized that sharp movements in various spreads had been its undoing.  A story in Reuters says that GEM was not the only firm rocked by these changes.  Big boys–including BP, Vitol, Trafigura, and Gunvor–have also suffered, and the losses have cost traders their jobs at Gunvor and BP:

The world’s biggest oil traders are counting hefty losses after a surprise doubling in the price discount of U.S. light crude to benchmark Brent WTCLc1-LCOc1 in just a month, as surging U.S. production upends the market.

Trading desks of oil major BP and merchants Vitol, Gunvor and Trafigura have recorded losses in the tens of millions of dollars each as a result of the “whipsaw” move when the spread reached more than $11.50 a barrel in June, insiders familiar with their performance told Reuters.

The sources did not give precise figures for the losses, but they said they were enough for Gunvor and BP to fire at least one trader each.

The story goes on to say that binding infrastructure constraints are to blame, which is certainly the case.  But implicit in the article is a theme that I have emphasized for literally years (I recall incorporating this into my class lectures in about 2004).  Specifically, bottlenecks imply that marginal transformation costs (e.g., marginal costs of transporting oil between Cushing and the GOM) tend to rise very steeply when capacity constraints are reached.  That is, when you are operating at, say, 90 percent of capacity, variations in utilization have little impact on marginal transformation costs, but going from 95 to 96 can cause costs to explode, and basically go vertical as capacity is reached.

This has an implication for spreads.  Another part of the Pirrong Commodities Catechism is that spreads equal marginal transformation costs, and are essentially the shadow prices on constraints.  The behavior of marginal transformation costs therefore has implications for spreads: in particular, spreads can be very stable despite variations in the utilization of transformation assets, but as utilization nears capacity, the spreads become much more volatile.  Moreover, and relatedly, small changes in fundamentals can lead to big moves in spreads when constraints start to bind.  The relationship between fundamentals and spreads is non-linear as capacity constraints become binding, and, well, here spreads have gone non-linear, to the chagrin of many traders.
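
A stylized version of that non-linearity, with an assumed functional form chosen purely for illustration:

```python
# Stylized congestion pricing: the spread equals marginal transformation cost,
# here an assumed base cost plus a term that blows up as utilization u
# approaches capacity -- the shadow price of the constraint.
def spread(u, base=3.00, congestion=0.05):
    return base + congestion / (1.0 - u)   # $/bbl

for u in (0.80, 0.90, 0.95, 0.99, 0.999):
    print(f"utilization {u:7.1%}: spread ${spread(u):6.2f}/bbl")
# 80% -> $3.25/bbl; 99.9% -> $53.00/bbl: tiny changes in fundamentals near the
# constraint produce huge spread moves, and almost none far from it.
```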

Put differently, spread trades aren’t always “widowmakers” (as the article calls them)–sometimes they are quite safe and boring.  But when bottlenecks begin to bind, they can become deadly.

There is one odd statement in the article:

“As the exporter of U.S. crude, traders are naturally long WTI and hedge their bets by shorting Brent. When the spreads widen so wildly, you lose money,” said a top executive with one of the four trading firms.

Well, why would you hedge WTI risk with Brent?  You could hedge your WTI inventory by selling . . . WTI futures.  The choice to “hedge” WTI by selling Brent is effectively a choice to speculate on the spread.  That brings to mind the old Holbrook Working adage that hedging is speculation on the basis.  The difference here is that most of the country grain elevators about which Working was mainly writing had no choice of hedging instrument (at least not among liquid ones), and perforce had to live with basis risk if they wanted to eliminate flat price risk.  Here, BP and Gunvor and the rest had the choice between two liquid instruments, and if the “top executive’s” statement is correct, deliberately chose the one that exposed them to greater spread (basis) risk.

So this isn’t an example of “sometimes stuff happens when you hedge.”  The firms chose to expose themselves to a particular risk.  They took a punt on the spread, which was effectively a punt that infrastructure constraints would ease.  They lost.

In my 2014 white paper on commodity trading firms (sponsored by Trafigura, ironically) I noted that to the extent that they speculate, commodity trading firms tend to speculate on the spreads, rather than flat prices, because that’s where they have something of an information advantage.  But as this episode shows, that advantage does not immunize them against risk.

This also makes me wonder about the risk models that the firms use, which in turn affect the sizes of positions traders can put on, and where they put them.  I, er, speculate that these risk models don’t take into account the non-linearity of spread risk.  If that’s true, traders would have been able to put on bigger positions than they could have had the risk models accurately reflected those risks; further, they were incentivized to do these trades because the risk was underpriced.

All in all, an interesting casebook study of commodity trading–what can go wrong, and why.

Correction: Andrew Gowers, head of corporate affairs at Trafigura says in the comments that (a) Trafigura did not suffer a loss, and (b) the company had told this to Reuters prior to the publication of the article.  I have contacted the editor of the story for an explanation.

July 13, 2018

Blockchain Wunderkinds: Solving Peripheral Problems, Missing the Big Picture

Filed under: Blockchain,Cryptocurrency,Economics,Exchanges — cpirrong @ 7:52 pm

Ethereum wunderkind Vitalik Buterin delivered a rant against centralized crypto exchanges:

“I definitely personally hope centralized exchanges burn in hell as much as possible,” Buterin said speaking to TechCrunch.

When bitcoin, the original cryptocurrency, was founded in 2008 by the anonymous Satoshi Nakamoto, the point was to create a decentralized financial future that renders middlemen useless. Nearly 10 years later, the centralized exchanges — those folks sitting in the middle of buyers and sellers — are among the most powerful players in the market for digital currencies such as ethereum and bitcoin.

Bloomberg News estimates they brought in $3 million a day last year. And exchanges such as Gemini and Coinbase are expanding at a clip, bringing on talent from Wall Street.

“It’s hard to ignore the irony that an asset created to allow decentralization is currently almost completely traded on centralized exchanges,” Peter Johnson, a vice president at Jump Ventures, said in an interview. Buterin, however, wants the crypto community to focus more on decentralization so that cryptos can more frequently trade peer-to-peer. Buterin’s remarks come as so-called decentralized exchanges gain more attention.

Like many of his arrogant ilk, Buterin ignores the lesson of Chesterton’s fence: why does this thing you do not like and do not understand exist?

Yes, blockchain and cryptocurrencies allow peer-to-peer transactions.  They were largely designed to facilitate such transactions.  For some, the motivation is ideological: an anarchic belief in radical decentralization, and a deep distrust of centralized institutions.

But just because blockchain and related technologies reduce the costs of peer-to-peer transactions doesn’t mean that such transactions are cheaper than centralized trading on exchanges.  Transacting requires finding a counterparty.  It requires negotiating a price (for a standardized thing, like a Bitcoin–negotiations of other terms for more complex things).  Negotiating a price is costly when information about value is diffuse, so in a decentralized setting not only is it necessary to search for counterparties, it is advantageous to search for information about prices to (a) find the best price, and (b) negotiate with better information about value.

Centralization reduces the cost of finding a counterparty.  It enhances competition, which tends to reduce bargaining costs.  It leads to better and more symmetric information about prices, which also tends to reduce bargaining costs.  Further, centralized markets can support specialized intermediaries–market makers–who specialize in smoothing out idiosyncratic temporal imbalances in buy and sell order flow, which further reduces trading costs.

Because of these features, centralized trading is frequently an emergent outcome of individual decisions, and one that economizes on transactions costs.  This is clearly what is happening in crypto world.  Indeed, the main puzzle at present is why there are so many exchanges.  The centripetal forces of liquidity will likely result in a huge consolidation in this space.

Buterin and others are attempting to find ways of mitigating some of the disadvantages of bilateral trading (bilateral just being another, more conventional, way of saying “peer-to-peer”).  Reducing the cost of finding people who want to take the other side of a transaction, for example.  But I am highly skeptical that these measures will overcome the inherent advantages of centralizing trading of homogeneous things that large numbers of people want to buy and sell pretty much 24/7, to the point that peer-to-peer will supplant centralized trading.  Buterin can rant all he wants, but centralization is here to stay, and if anything, this segment of the market will become more centralized.

Buterin’s error is seemingly the opposite of those who bewail the lack of centralization in some markets, e.g., those who want to make swaps trading more centralized and who rail against bilateral OTC transactions, but it is really the same mistake. Those who see too much centralization in some markets, and those who see too little in others, fail to recognize that trading mechanisms are emergent orders that develop diverse niches to accommodate the fact that transactors and transactions are heterogeneous.  Centralization is efficient for some transactors and transactions: bilateral/OTC for others.  That’s why we see both.

(This is a point I made at a Platts blockchain conference in November, BTW.  The theme of my talk was where decentralization can work, and where it is likely inefficient.  Trading of standardized instruments was one of the main cases I discussed.)

Alas, the ignorance of techno-geniuses is not limited to trading mechanisms.  One of the supposed benefits of blockchain is that it allows the ownership of anything–a painting, a house, you name it–to be divided into shares, with the fractional interests recorded in an immutable register, and traded peer-to-peer.  That is, blockchain facilitates equitization of assets.  A breakthrough!

Uhm, not really. The benefits of equitizing assets and risks have been long, long understood by economists.  In particular, it has long been understood that equitization facilitates more efficient risk sharing.

But long ago, economists also recognized that despite these apparent benefits, in fact very few assets and risks are equitized.  A vast literature has come up with explanations why.  Information and incentive problems–moral hazard and adverse selection–are notable among these.  A prosaic example: If I sell off shares in my car, what incentive do I have to maintain it properly and to economize on wear and tear and to reduce the probability of theft?  Who pays for maintenance? Who decides on what maintenance is needed?  When I sell the shares, I am likely to have better information about the value and condition of the vehicle, which would subject the buyers to an adverse selection problem, meaning that I am likely to get a low price for the shares–so why bother selling them?  There are other transactions cost problems associated with measurement (who verifies exactly what the asset is?) and opportunism and governance and control.   Related to the centralized trading point, if an asset is highly idiosyncratic/unstandardized, the desire to trade fractional shares will be small.

A potentially slightly cheaper way of recording and transferring fractional ownership does not address these far, far, far more fundamental impediments to equitizing (or should I say, “tokenizing”?) assets and risks.  But the coder geniuses miss the forest for the trees.  They see the issue that their technology can address, and think that it will be revolutionary, only because they do not understand the broader economic issues in play, and therefore think everyone born before them or who does not code must be an idiot.

No, not really.  They are looking at the capillaries, and missing the heart, veins, and arteries.

It reminds me of the Mark Twain quote: “When I was a boy of fourteen, my father was so ignorant I could hardly stand to have the old man around. But when I got to be twenty-one, I was astonished at how much he had learned in seven years.”  Except seven years haven’t passed for the Buterins of the world, and frankly, I seriously doubt that they will.  Instead, they inhabit a techno-Groundhog Day.

All of this is symptomatic of blockchain hype and froth.  There is an indication that we have reached peak hype.  R3, a bank-led blockchain consortium, is contemplating an IPO.  To me this is a signal that those on the inside of blockchain development, especially in the area where its benefits have been particularly hyped (finance/payments/settlement/fintech) understand that the reality will never match the hyperbole, so it’s best to sell out while hyperbole reigns supreme.  (Yes, they claim that they are being approached by those looking to buy the whole thing, but take that with a big grain of salt–I view it merely as part of the sales pitch.  “This is a hot little property right now.  Better get in before someone snatches it away.”)

In brief: don’t be the greater fool.

I think that blockchain and DLT will have some viable commercial applications.  But I am highly confident that they will not be nearly as revolutionary as the True Believers claim.  This is in large part due to the fact that it is clear that the True Believers have an extremely narrow, blinkered understanding of the broader economic issues associated with transacting, ownership, risk transfer, incentives, and governance.  Blockchain may address some issues, but many–if not most–of these issues are secondary or tertiary, not fundamental.  Some things are done more efficiently in a centralized fashion–the trading of standardized instruments being one.  Some things are not equitized/tokenized not because it is technically infeasible/prohibitively costly to issue and record fractional interests, but because fractional ownership entails substantial incentive and information problems.

So don’t believe the hype.  And take a pass on those R3 shares, if they do come to market.

Addendum: the dominance of crypto exchanges is even more remarkable, given how they, well, pretty much suck.  They are hardly comparable to modern futures or equities exchanges.  Yet people still strongly prefer trading on rather clunky platforms with major potential security issues, where you can’t easily convert digital into fiat currency, and which are likely rife with manipulation, to trading peer-to-peer.  That tells you something.

June 28, 2018

A Tarnished GEM: A Casualty of Regulation, Spread Explosions, or Both?

Filed under: Clearing,Commodities,Derivatives,Economics,Energy,Exchanges,Regulation — The Professor @ 6:28 pm

Geneva Energy Markets LLC, a large independent oil market maker, has been shuttered.  Bloomberg and the FT have stories on GEM’s demise.  The Bloomberg piece primarily communicates the firm’s official explanation: the imposition of the Basel III leverage ratio on GEM’s clearer raised the FCM’s capital requirement, and it responded by forcing GEM to reduce its positions sharply.  The FT story contains the same explanation, but adds this: “Geneva Energy Markets, which traded between 50m and 100m barrels a day of oil, has sold its trading book after taking ‘significant losses’ in oil futures and options, a person close to the company said.”

These stories are of course not mutually exclusive, and the timing of the announcement that the firm is shutting down, months after it had already been ordered to reduce positions, suggests a way of reconciling them.  Specifically, the firm had suffered losses that made it impossible to support even its shrunken positions.

The timing is consistent with this.  GEM is primarily a spread trader, and oil spreads have gone crazy lately.  In particular, spread positions short nearby WTI have been killed in recent days due to the closure of Canadian oil sands production and the relentless exports of US oil.  The fall in supply and continued strong demand have led to a rapid fall in oil stocks, especially at Cushing.  This has been accompanied (as theory says it should be!) by a spike in the WTI backwardation, and a rise in the WTI-Brent differential (and other quality spreads with a WTI leg).  If GEM was short the calendar spread, or had a position in quality spreads that went pear-shaped with the explosion in WTI, it could have taken a big hit.  Or at least a big enough hit to make it unviable to continue to operate at a profitable scale.

Here’s a cautionary tale.  Stop me if you’ve heard it before:

“The notional value of our book was in excess of $50 billion,” Vonderheide said. “However, the actual risk of the book was always relatively low, with a value-at-risk of around $2 million at any given time.”

If I had a dollar for every time that I’ve heard/read “No worries! Our VaR is really low!” only to have the firm fold (or survive a big loss) I would be livin’ large.  VaR works.  Until it doesn’t.  At best, it tells you the minimum loss you can suffer with a certain probability: it doesn’t tell you how much worse than that it can get.  This is why VaR is being replaced or supplemented by other metrics that give a better picture of downside risk (e.g., expected shortfall).

I would agree, however, with GEM managing partner Mark Vonderheide (whom I know slightly):

“The new regulation is seriously damaging the liquidity in the energy market,” Vonderheide said. “If the regulation was intending to create a safer and more efficient market, it has done completely the opposite.”

It makes it costlier to make markets, which erodes market liquidity, thereby making it costlier for firms to hedge, and more difficult to enter and exit positions.  Liquidity reductions resulting from this type of regulation tend to be most acute during periods of high volatility–which can exacerbate the volatility, perversely.  Moreover, like much of Frankendodd and its foreign fellow monsters, it tends to hit small to medium sized firms worse than bigger ones, and thereby contributes to greater concentration in the markets–exactly the opposite of the stated purpose.

As Reagan said: “The most terrifying words in the English language are: I’m from the government and I’m here to help.” Just ask GEM about that.

May 8, 2018

Libor Was a Crappy Wrench. Here–Use This Beautiful New Hammer Instead!

Filed under: Derivatives,Economics,Exchanges,Financial crisis,Regulation — The Professor @ 8:02 pm

When discussing the 1864 election, Lincoln mused that it was unwise to swap horses in midstream.  (Lincoln used a variant of this phrase many times during the campaign.) The New York Fed and the Board of Governors are proposing to do that nonetheless when it comes to interest rates.  They want to transition from reliance on Libor to a new Secured Overnight Financing Rate (SOFR, because you can never have enough acronyms), despite the fact that there are trillions of dollars of notional in outstanding derivatives and more trillions in loans with payments tied to Libor.

There are at least two issues here.  The first is if Libor fades away, dies, or is murdered, what is to be done with the outstanding contracts that it is written into?  Renegotiations of contracts (even if possible) would be intense, costly, and protracted, because any adjustment to contracts to replace Libor could result in the transfer of tens of billions of dollars among the parties to these contracts.  This is particularly likely because of the stark differences between Libor and SOFR.  How would you value the difference between a stream of cash flows based on a flawed mechanism intended to reflect term rates on unsecured borrowings and a stream of cash flows based on overnight secured borrowings?  Apples to oranges doesn’t come close to describing the difference.
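
Even a crude back-of-the-envelope conveys the stakes; every input below is an assumption, for scale only:

```python
# Toy scale calculation: value at stake in re-papering Libor contracts to SOFR.
notional = 10e12   # assume $10 trillion of affected notional (of the trillions outstanding)
basis = 0.0025     # assume a 25 bp average gap between Libor and a SOFR-based replacement

annual_coupon_shift = notional * basis
print(f"${annual_coupon_shift/1e9:,.0f} billion per year")  # $25bn/yr of payments in play
```

Small disagreements over the right adjustment translate into billions per year, which is why any renegotiation would be bruising.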

Seriously: how would you determine the value so that you could adjust contracts?  A conventional answer is to hold some sort of auction (such as that used to determine CDS payoffs in a default), and then settle all outstanding contracts based on the clearing price in the auction (again like a CDS auction).  But I can’t see how that would work here.

Let’s say you have a contract entitling you to receive a set of payoffs tied to Libor.  You participate in an auction where you bid an amount that you would be willing to pay/receive to give up that set of payoffs for a set of SOFR payoffs.  What would you bid?  Well, in a conventional auction your bid would be based on the value of holding onto the item you would give up (here, the Libor payments).  But if Libor is going to go away, how would you determine that opportunity cost?
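To get a sense of the stakes, here is a deliberately crude sketch, with every number invented: a flat 3% Libor projection, a flat 2.6% SOFR projection (a 40 basis point spread), annual payments on $100 million of notional, and a flat 2.5% discount rate.  Even this toy version moves millions of dollars, and the “right” spread is precisely the unobservable quantity:

```python
# All inputs invented for illustration: flat 3.0% Libor projection, flat
# 2.6% SOFR projection (a 40bp credit/term spread), annual payments,
# flat 2.5% discount rate, $100mm notional, 10-year remaining life.
notional = 100_000_000
libor_proj, sofr_proj, disc = 0.030, 0.026, 0.025
years = 10

pv_gap = sum(
    notional * (libor_proj - sofr_proj) / (1 + disc) ** t
    for t in range(1, years + 1)
)
print(f"PV transferred by switching the floating leg: ${pv_gap:,.0f}")
# Roughly $3.5mm per $100mm of notional on these made-up inputs.  The
# "right" spread is exactly what no one can observe once Libor stops
# printing -- hence the bruising negotiations.
```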

Not to mention that there is an immense variety of payoff formulae based on Libor, meaning that there would have to be an immense variety of (impractical) auctions.

So it will come down to bruising negotiations, which, given the amounts at stake, will consume large amounts of real resources.

The second issue is whether the SOFR rate will perform the same function as well as Libor did.  Market participants always had the choice to use some other rate to determine floating rates in swaps–T-bill rates, O/N repo rates, what have you.  They settled on Libor pretty quickly because Libor hedged the risks that swap users faced better than the alternatives.  A creditworthy bank that borrowed unsecured for 1, 3, 6, or 12 month terms could hedge its funding costs pretty well by using a Libor-based swap: a swap based on some alternative (like an O/N secured rate) would have been a dirtier hedge.  Similarly, another way that banks hedged interest rate risk was to lend at rates tied to their funding cost–which varied closely with Libor.  Well, the borrowers (e.g., corporates) could swap those floating rate loans into fixed by using Libor-based swaps.

That is, Libor-based swaps and other derivatives came to dominate because they were better hedges for interest rate risks faced by banks and corporates than alternatives would have been.  There was an element of reflexivity here too: the availability of Libor-based hedging instruments made it desirable to enter into borrowing and lending transactions based on Libor, because you could hedge them. This positive feedback mechanism created the vexing situation faced today, where there are immense sums of contracts that embed Libor in one way or another.

SOFR will not have this desirable feature–unless the Fed wants to drive banks to do all their funding secured overnight! That is, there will be a mismatch between the new rate that is intended to replace Libor as a benchmark in derivatives and loan transactions, and the risks that market participants want to hedge.
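A toy Monte Carlo makes the mismatch visible.  Suppose (heroically, and with invented numbers) that a bank’s unsecured term funding rate decomposes into an overnight secured rate plus a stochastic credit/term spread.  A Libor-style swap hedges both pieces; a SOFR swap leaves the spread behind:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Invented decomposition: unsecured term funding = overnight secured
# rate plus a stochastic credit/term spread.
sofr = rng.normal(0.025, 0.010, n)      # overnight secured rate
spread = rng.normal(0.004, 0.002, n)    # bank credit/term premium
libor = sofr + spread                   # unsecured term rate proxy

funding_cost = libor  # the bank funds itself unsecured at ~Libor

# Residual exposure after receiving floating on each kind of swap:
resid_libor_hedge = funding_cost - libor  # identically zero here
resid_sofr_hedge = funding_cost - sofr    # the spread survives

print(f"residual std, Libor-swap hedge: {resid_libor_hedge.std():.4%}")
print(f"residual std, SOFR-swap hedge:  {resid_sofr_hedge.std():.4%}")
# The SOFR hedge leaves the credit/term spread unhedged -- precisely
# the risk a bank borrowing unsecured wanted the swap to kill.
```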

In essence, the Fed identified the problem with Libor–its vulnerability to manipulation because it was not based on transactions–and says that it has fixed it by creating a benchmark based on a lot of transactions.  The problem is that the benchmark that is “better” in some respects (less vulnerable to a certain kind of manipulation) is worse in others (matching the risk that market participants want to hedge).  In a near obsessive quest to fix one flaw, the Fed totally overlooked the purpose of the thing it was trying to fix, and has created something of dubious utility because it does a poorer job of achieving that purpose.  In focusing on the details of the construction of the benchmark, it has lost sight of the big picture: what the benchmark is supposed to be used for.

It’s like the Fed has said: “Libor was one crappy wrench, so we’ve gone out and created this beautiful hammer. Use that instead!”

Or, to reprise an old standby, the Fed is like the drunk looking for his car keys under the lamppost, not because he lost them there, but because the light is better.  There is more light (transactions) in the O/N secured market, but that’s not where the market’s hedging keys are.

This is an object lesson in how governments and other large bureaucracies go astray.  The details of a particular problem receive outsized attention, and all efforts are focused on fixing that problem without considering the larger context, and the potential unintended consequences of the “fix.” Government is especially vulnerable to this given the tendency to focus on scandal and controversy and the inevitable narrative simplification and decontextualization that scandal creates.

The current ‘bor administrator–ICE–is striving to keep Libor alive.  These efforts deserve support.  Benchmarks based on secured overnight rates are ill-suited to serve as the basis for the interest rate derivatives that market participants use to hedge the exposures that Libor-based derivatives hedge today.


May 1, 2018

Cuckoo for Cocoa Puffs: Round Up the Usual Suspects

Filed under: Commodities,Derivatives,Economics,Exchanges — The Professor @ 10:39 am

Journalism on financial markets generally, and commodity markets in particular, often resorts to rounding up the usual suspects to explain anomalous price movements.  Nowadays, the usual suspect in commodity markets is computerized/algorithmic/high frequency trading.  For example, some time back HFT was blamed for higher volatility in the cattle market, even though such trading represents a smaller fraction of cattle trading than of other contracts, and even though there is precious little in the way of a theoretical argument that would support such a connection.

Another case in point: a flipping of the relationship between London and New York cocoa prices is being blamed on computerized traders.

Computers are dominating the trading of cocoa in New York, sparking a dramatic divergence in the longstanding price relationship with the London market.

Speculative funds have driven the price of the commodity in New York up more than 50 per cent since the start of the year to just under $3,000 a tonne. The New York market, traded in dollars, has traditionally been the preferred market for financial players such as hedge funds.

The London market, historically favoured by traders and commercial players buying and selling physical cocoa, has only risen 34 per cent in the same timeframe.

The big shift triggered by the New York buying is that its benchmark, which normally trades at a discount to London, now sits at a record premium.

So, is the NY premium unjustified by physical market price relationships?  If so, that would be like hundred dollar bills lying on the sidewalk–and someone would pick them up, right?

Not according to this article:

The pronounced shift in price relationships comes as hedge fund managers with physical trading capabilities and merchant traders have exited the cocoa market.

In the past, such a large price difference would have encouraged a trader to buy physical cocoa in London and send it to New York, hence narrowing the relationship. However, current price movements reflected the absence of such players, said brokers.

Fewer does not mean zero.  Cargill, or Olam, or Barry Callebaut or Ecom and a handful of other traders certainly have the ability to execute a simple physical arb if one existed.  Indeed, given the recent trying times in physical commodity trading, such firms would be ravenous to exploit such opportunities.
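To see what that arb arithmetic looks like, here is a back-of-the-envelope sketch, with every input (prices, FX rate, frictions) invented for illustration:

```python
# Every input here is invented for illustration.
ny_usd = 2950.0     # NY futures price, $/tonne
ldn_gbp = 1750.0    # London futures price, GBP/tonne
gbpusd = 1.30       # exchange rate
frictions = 150.0   # freight, storage, financing, grade/location
                    # differentials, $/tonne

arb_pnl = ny_usd - (ldn_gbp * gbpusd + frictions)
print(f"gross arb P&L: ${arb_pnl:,.0f}/tonne")
# If this stays reliably positive after *real* frictions, a Cargill or
# an Olam picks up the hundred dollar bills; if it doesn't, the record
# premium isn't a distortion in the first place.
```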

What’s even more bizarre is that pairs/spread/convergence trading is about the most vanilla (not chocolate!) type of algorithmic trade there is, and indeed, has long been a staple of algorithmic firms that trade only paper.  Meaning that if the spread between this pair of closely related contracts was out of line, if physical traders didn’t bring it back into line, it would be the computerized traders who would.  Yes, there are some complexities here–different delivery locations, different currencies, different deliverable growths with different price differentials, different clearinghouses–but those are exactly the kinds of things that are amenable to systematic–and computerized–analysis.
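For the avoidance of doubt about just how vanilla this is, here is a toy version of such a convergence signal.  The window, threshold, and inputs are all hypothetical, and a real desk would fold in the complexities just listed:

```python
import numpy as np

def convergence_signal(ny_usd, ldn_gbp, gbpusd, window=60, entry_z=2.0):
    """Toy signal on the FX-adjusted NY-London cocoa spread.

    All parameters are hypothetical; a real implementation would also
    handle deliverable-growth differentials, delivery points, and
    clearing costs.
    """
    spread = np.asarray(ny_usd) - np.asarray(ldn_gbp) * np.asarray(gbpusd)
    recent = spread[-window:]
    z = (spread[-1] - recent.mean()) / recent.std()
    if z > entry_z:
        return "short NY / long London"   # spread rich: bet on convergence
    if z < -entry_z:
        return "long NY / short London"   # spread cheap
    return "flat"
```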

Weirdly, the article recognizes this:

Others use algorithms that exploit the shifts in price relationships between different markets or separate contracts of the same commodity. [Emphasis added.  I should mention that cocoa is one of the few examples of a commodity with separate active contracts for the same commodity.]

It then fails to grasp the implications of this.

One “authority” cited in the article is–get this–Anthony Ward of Armajaro infamy:

Anthony Ward, the commodities trader known in the cocoa market for his large bets, has been among the more well-known fund managers to close his hedge fund, exiting the market at the end of last year. Mr Ward, dubbed “Chocfinger” due to his influence over the cocoa price, blamed the rising power of algorithmic and systems-based trading for making position-taking based on “fundamental” supply and demand factors more difficult.

Methinks that the market isn’t treating Anthony well, and, like many losing traders, he can’t take the blame himself, so he’s looking for a scapegoat. (I note that Ward sold Armajaro’s cocoa trading business to Ecom for the grand sum of $1 in December, 2013.)

I am skeptical that computerized trading can distort flat prices, though such claims are harder to refute because of the knowledge problem: the whole reason markets exist is that no one knows the “right” price, hence disagreements are inevitable.  But when it comes to something as basic as an intracommodity spread, I find allegations of computer-driven distortions completely implausible.  You can’t arb flat price distortions, but you can arb distorted spreads, and that business is the bread and butter of commodity traders.

So: release the suspect!

PS. For my Geneva students looking for a topic for a class paper, this would be ideal. Perform an analysis to explain the flipping of the spread.
