Streetwise Professor

April 12, 2014

Yes, Brad, It’s Just You (And Others Who Oversimplify and Ignore Salient Facts)

Filed under: Derivatives,Economics,Exchanges,HFT,Politics,Regulation,Uncategorized — The Professor @ 2:48 pm

Brad DeLong takes issue with my Predator/Prey HFT post. He criticizes me for not taking a stand on HFT, and for not concluding that HFT should be banned because it is parasitic. Color me unpersuaded. De Long’s analysis is seriously incomplete, and some of his conclusions are incorrect.

At root, this is a dispute about the social benefits of informed trading. De Long takes the view that there is too little informed trading:

In a “rational” financial market without noise traders in which liquidity, rebalancing, and control/incentive traders can tag their trades, it is impossible to make money via (4). Counterparties to (4) will ask the American question: If this is a good trade for you, how can it be a good trade for me? The answer: it cannot be. And so the economy underinvests in fundamental information, and markets will be inefficient–prices will be away from fundamentals, and so bad real economic decisions will be made based on prices that are not in fact the appropriate Lagrangian-multiplier shadow values–because of free riding on the information contained in informed order flow and visible market prices. [Note to Brad: I quote completely, without extensive ellipses. Pixels are free.]

Free riding on the information in prices leading to underinvestment in information is indeed a potential problem. And I am quite familiar with this issue, thank you very much. I used similar logic in my ’94 JLE paper on self-regulation by exchanges to argue that exchanges may exert too little effort to deter manipulation because they didn’t internalize the benefits of reducing the price distortions caused by corners. My ’92 JLS paper applied this reasoning to an evaluation of exchange rules regarding the disclosure of information about the quantity and quality of grain in store. It’s a legitimate argument.

But it’s not the only argument relating to the incentives to collect information, and the social benefits and costs and private benefits and costs of trading on that information. My post focused on something that De Long ignores altogether, and certainly did not respond to: the possibility that privately informed trading can be rent seeking activity that dissipates resources.

This is not a new idea either. Jack Hirshleifer wrote a famous paper about it over 40 years ago. Hirshleifer emphasizes that trading on information has distributive effects, and that people have an incentive to invest real resources in order to distribute wealth in their direction. The term rent seeking hadn’t even been coined then (Anne Krueger first used it in 1974), but that is exactly what Hirshleifer described.

The example I have in my post is related to such rent seeking behavior. Collecting information that allows a superior forecast of corporate earnings shortly before an announcement can permit profitable trading, but (as in one of Hirshleifer’s examples) does not affect decisions on any margin. The cost of collecting this information is therefore a social waste.

De Long says that the idea that there is too much informed trading “does not seem to me to scan.” If it doesn’t, it is because he has ignored important strands of the literature dating back to the early 1970s.

Both the free riding effects and the rent seeking effects of informed trading certainly exist in the real world. Too little of some information is collected, and too much of other types is collected. And that was basically my point: due to the nature of information, true costs and benefits aren’t internalized, and as a result, evaluating the welfare effects of informed trading and things that affect the amount of informed trading is impossible.

One of the things that affects the incentives to engage in informed trading is market microstructure, and in particular the strategies followed by market makers and how those strategies depend on technology, market rules, and regulation. Since many HFT firms engage in market making, HFT affects the incentives surrounding informed trading. My post focused on how HFT reduces adverse selection costs (market makers’ losses to informed traders) by ferreting out informed order flow. This reduces market makers’ losses to informed traders, which is the same as saying it reduces the gains to informed traders. Thus there is less informed trading of all varieties: good, bad, and ugly.

Again, the effects of this are equivocal, precisely because the effects of informed trading are equivocal. To the extent that rent seeking informed trading is reduced, any reduction in adverse selection cost is an unmitigated gain. However, even if the collection of some decision-improving information is eliminated, reducing adverse selection costs has some offsetting benefits. De Long even mentions the sources of the benefits, but doesn’t trace through the logic to the appropriate conclusion.

Specifically, De Long notes that by trading, people can improve the allocation of risk and mitigate agency costs. These trades are not undertaken to profit on information, and they are generally welfare-enhancing. By creating adverse selection, informed trading (even trading that improves price informativeness in ways that lead to better real investment decisions) raises the cost of these welfare-improving risk shifting trades. Just as adverse selection in insurance markets leads to underprovision of insurance (relative to the first best), adverse selection in equity or derivatives markets leads to a suboptimally small amount of hedging, diversification, etc.
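To make the “tax” concrete, here is a stylized Glosten-Milgrom-type calculation. The notation and numbers are my own illustration, not anything in De Long’s post:

```latex
% The asset is worth \mu+\sigma or \mu-\sigma with equal probability. With
% probability \pi the next order comes from a trader who knows which; with
% probability 1-\pi it comes from an uninformed hedger who buys or sells
% with equal probability. A competitive, risk-neutral market maker quotes
\[
  \text{ask} = \mu + \pi\sigma, \qquad
  \text{bid} = \mu - \pi\sigma, \qquad
  \text{spread} = 2\pi\sigma .
\]
% A hedger whose benefit from shedding risk is worth h per share trades only
% if h > \pi\sigma. The more informed trading (the higher \pi), the more
% otherwise welfare-improving risk-sharing trades are priced out of the market.
```

That half-spread is the tax, and it is levied whether the information behind the informed orders is decision-improving or purely rent seeking.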

So again, things are complicated. Reducing adverse selection costs through more efficient market making may involve a trade-off between improved risk sharing and the better investment and other decisions that flow from more informative prices. This is contrary to De Long, who denies the existence of such a trade-off.

And this was the entire point of my post: that evaluating the welfare effects of market making innovations that mitigate adverse selection is extremely difficult. This shouldn’t be news to a good economist: it has long been known that asymmetric information bedevils welfare analysis in myriad ways.

De Long can reach his anti-HFT conclusion only by concluding that the net social benefits of privately informed trading are positive, and by ignoring the fact that any kind of privately informed trading serves as a tax on beneficial risk sharing transactions. To play turnabout (which is fair!): there is “insufficient proof” for the first proposition. And he is flatly wrong to ignore the second consideration. Indeed, it is rather shocking that he does so.*

Although De Long concludes an HFT ban would be welfare-improving, his arguments are not logically limited to HFT alone. They basically apply to any market making activity. Market makers employ real resources to do things to mitigate adverse selection costs. This reduces the amount of informed trading. In De Long’s world, this is an unmitigated bad.

So, if he is logical, De Long should also want to ban all exchanges in which intermediaries make markets. He should also want to ban OTC market making. Locals were bad. Specialists were bad. Dealers were bad. Off with their heads!

Which raises the question: why has every set of institutions for trading financial instruments, everywhere and always, included specialized intermediaries who make markets? The burden of proof would seem to be on De Long to explain how such a ubiquitous practice has been able to survive despite its allegedly obvious inefficiencies.

This relates to a point I’ve made time and again. HFT is NOT unique. It is just the manifestation, in a particular technological environment, of economic forces that have expressed themselves in different ways under different technologies. Everything that HFT firms do (market making, arbitrage, and even some predatory actions, e.g., momentum ignition) has direct analogs in every financial trading system known to mankind. HFT market makers basically put into code what resides in the grey matter of locals on the floor. Arbitrage is arbitrage. Gunning the stops is gunning the stops, regardless of whether it is done on the floor or on a computer.

One implication of this is that even if HFT is banned, it is inevitable (inevitable!) that some alternative way of performing the same functions would arise. And this alternative would pose all of the same conundrums and complexities and ambiguities as HFT.

In sum, Brad De Long reaches strong conclusions because he vastly oversimplifies. He ignores that some informed trading is rent seeking, and that there can be a trade-off between more informative prices (and higher adverse selection costs) and risk sharing.

The complexities and trade-offs are exactly why debates over speculation and market structure have been so fierce, and so protracted. There are no easy answers. This isn’t like a debate over tariffs, where the answers are much more clean-cut. Welfare analyses are always devilishly hard when there is asymmetric information.

Although I am a free-market guy, I acknowledge such difficulties, even though that means admitting that the outcome is not first best. Brad De Long, not a free market guy, well, not so much. So yes, Brad, it is just you, and other people who oversimplify and ignore salient considerations that are present in any set of mechanisms for trading financial instruments, regardless of the technology.

* De Long incorrectly asserts that informed trading cannot occur in the absence of “noise trading,” where from the context De Long defines noise traders as randomizing idiots: “In a ‘rational’ financial market without noise traders in which liquidity, rebalancing, and control/incentive traders can tag their trades, it is impossible to make money via [informed trading].” Noise trading (e.g., in a Kyle model) is a modeling artifice that treats “liquidity, rebalancing and control/incentive” trades (trades that are not information-driven) in a reduced-form fashion. Randomizing idiots don’t trade on information. But neither do rational portfolio diversifiers subject to endowment shocks.

It is possible-and has been done many, many times-to produce a structural model with, say, rebalancing traders subject to random endowment shocks who trade even though they lose systematically to informed traders. (De Long qualifies his statement by referring to traders who can “tag their trades.” No idea what this means. Regardless, completely rational individuals who benefit from trading because it improves their risk exposure (e.g., by permitting diversification) will trade even though they are subject to adverse selection.) They will trade less, however, which is the crucial point, and which is a cost of informed trading, regardless of whether that informed trading improves other decisions, or is purely rent-seeking.
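For concreteness, here is the standard one-period Kyle setup, in my notation; none of this is in De Long’s post, and the reduced-form noise trader can be replaced by a rational hedger without changing the basic result:

```latex
% v ~ N(p_0, \sigma_v^2) is the asset value observed by the informed trader;
% u ~ N(0, \sigma_u^2) is the reduced-form "noise" (liquidity/rebalancing)
% order flow; the market maker prices linearly in total order flow.
\[
  x = \beta (v - p_0), \qquad p = p_0 + \lambda (x + u), \qquad
  \beta = \frac{\sigma_u}{\sigma_v}, \qquad \lambda = \frac{\sigma_v}{2\sigma_u},
\]
\[
  \mathbb{E}[\text{informed trader's profit}] = \frac{\sigma_v \sigma_u}{2}
  = \mathbb{E}[\text{noise traders' loss}].
\]
% Swap u for rational hedgers hit by endowment shocks and the structure
% survives: they still lose to the informed trader on average, and they trade
% less the steeper the price impact \lambda, which is exactly the cost of
% informed trading flagged in the footnote above.
```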

 


April 2, 2014

Michael Lewis’s HFT Book: More of a Dark Market Than a Lit One

Filed under: Derivatives,Economics,Exchanges,HFT,Politics,Regulation,Uncategorized — The Professor @ 2:35 pm

Michael Lewis’s new book on HFT, Flash Boys, has been released, and has unleashed a huge controversy. Or put more accurately, it has added fuel to a controversy that has been burning for some time.

I have bought the book, but haven’t had time to read it. But I read a variety of accounts of what is in the book, so I can make a few comments based on that.

First, as many have pointed out, although this has been framed as evil computer geniuses taking money from small investors, this isn’t at all the case. If anyone benefits from the tightening of spreads, especially for small trade sizes, it is small investors. Many of them (most, in fact) trade at the bid-ask midpoint via internalization programs with their brokers or through payment-for-order-flow arrangements. (Those raise other issues for another day, but have been around for years and don’t relate directly to HFT.)

Instead, the battle is mainly part of the struggle between large institutional investors and HFT. Large traders want to conceal their trading intentions to avoid price impact. Other traders from time immemorial have attempted to determine those trading intentions, and profit by trading before and against the institutional traders.  Nowadays, some HFT traders attempt to sniff out institutional orders, and profit from that information.  Information about order flow is the lifeblood of those who make markets.

This relates to the second issue. This has been characterized as “front running.” This terminology is problematic in this context. Front running is usually used to describe a broker in an agency relationship with a customer trading in advance of the customer’s order, or disclosing the order to another trader who then trades on that information. This is a violation of the agency relationship between the client and the broker.

In contrast, HFT firms use a variety of means-pinging dark pools, accessing trading and quoting information that is more extensive and obtained more quickly than via the public data feeds-to detect the presence of institutional orders. They are not in an agency relationship with the institution, and have no legal obligation to it.

And this is nothing new. Traders on the floor were always trying to figure out when big orders were coming, and who was submitting them. Sometimes they obtained this information when they shouldn’t have, because a broker violated his obligation. But usually it was from watching what brokers were trading, knowing what brokers served what customers, looking at how anxious the broker appeared, etc. To throw the floor off the track, big traders would use many brokers. Indeed, one argument for dual trading was that it made it harder for the floor to know the origin of an order if the executing broker dual traded, and might be active because he was trading on his own account rather than for a customer.

This relates too to the third issue: reports that the FBI is investigating HFT for possible criminal violations. Seriously? I remember how the FBI covered itself in glory during the sting on the floors in Chicago in ’89. Not really. The press reports say that the FBI is investigating whether HFT trades on “non-public information.” Well, “non-public information” is not necessarily “inside information,” which is illegal to trade on: inside information typically relates to information obtained from someone with a fiduciary duty to shareholders. Indeed, ferreting out non-public information contributes to price discovery: raising the risk of prosecution for trading on information obtained through research or other means, but not obtained from someone with a fiduciary relationship to a company, is a dangerous slippery slope that could severely interfere with the operation of the market.

Moreover, it’s not so clear that order flow information is “non-public.” No, not everyone has it: HFT firms have to expend resources to get it, but anybody could in theory do that. Anybody can make the investment necessary to ping a dark pool. Anybody can pay for quicker access to data that everyone ultimately has access to, either through co-location or the purchase of a private data feed. There is no theft or misappropriation involved. If firms trade on the basis of such information, which can be obtained for a price that not everyone is willing to pay, and that is deemed illegal, how would trading on the basis of what’s on a Bloomberg terminal be any different?

Fourth, one reason for the development of dark pools, and the rules that dark pools establish, is to protect order flow information, or to make it less profitable to trade on that information. The heroes of Lewis’s book, the IEX team, specifically designed their system (which is now a dark pool, but which will transition to an ECN and then an exchange in the future) to protect institutional traders against opportunistic HFT. (Note: not all HFT is opportunistic, even if some is.)

That’s great. An example of how technological and institutional innovation can address an economic problem. I would emphasize again that this is not a new issue: just a new institutional response. Once upon a time institutional investors relied on block trading in the upstairs market to prevent information leakage and mitigate price impact. Now they use dark pools. And dark pools are competing to find technologies and rules and protocols that help institutional investors do the same thing.

I also find it very, very ironic that a dark pool is now the big hero in a trading morality tale. Just weeks ago, dark pools were criticized heavily in a Congressional hearing.  They are routinely demonized, especially by the exchanges. The Europeans have slapped very restrictive rules on them in an attempt to constrain the share of trading done in the dark. Which almost certainly will increase institutional trading costs: if institutions could trade more cheaply in the light, they would do so. It will also almost certainly make them more vulnerable to predatory HFT because they will be deprived of the (imperfect) protections that dark pools provide.

Fifth, and perhaps most importantly from a policy perspective, as I’ve written often, much of the problem with HFT in equities is directly the result of the fragmented market structure, which in turn is directly the result of RegNMS. For instance, latency arbitrage based on the slowness of the SIP results from the fact that there is a SIP, and there is a SIP because it is necessary to connect the multiple execution venues. The ability to use trades or quotes on one market to make inferences about institutional trades that might be directed to other markets is also a consequence of fragmentation. As I’ve discussed before, much of the proliferation of order types that Lewis (and others) argue advantage HFT is directly attributable to fragmentation, and rules relating to locked and crossed markets that are also a consequence of RegNMS-driven fragmentation.

Though HFT has spurred some controversy in futures markets, these controversies are quite different, and much less intense. This is due to the fact that many of the problematic features of HFT in equities are the direct consequence of RegNMS and the SEC’s decision (and Congress’s before that) to encourage competition between multiple execution venues.

And as I’ve also said repeatedly, these problems inhere in the nature of financial trading. You have to pick your poison. The old way of doing business, in which order flow was not socialized as it has been in the aftermath of RegNMS, resulted in the domination of a single major execution venue (e.g., the NYSE). And for those with a limited historical memory, please know that these execution venues were owned by their members, who adopted rules (rigged the game, if you will) that benefited them. They profited accordingly.

Other news from today brings this point home. Goldman is about to sell its NYSE specialist unit, the former Spear, Leeds, which it bought for $6.5 billion (with a B) only 14 years ago. It is selling it for $30 million (with an M). That’s a 99.5 percent decline in market value, folks. Why was the price so high back in 2000? Because under the rules of the time, a monopoly specialist franchise on a near monopoly exchange generated substantial economic rents. Rents that came out of the pockets of investors, including small investors. Electronic trading, and the socialization of order flow and the resultant competition between execution venues, ruthlessly destroyed those rents.

So it’s not like the markets have moved from a pre-electronic golden age into a technological dystopia where investors are the prey of computerized super-raptors. And although sorting out cause and effect is complicated, the decline in trading costs strongly suggests that the new system, for all its flaws, has been a boon for investors. Until regulators or legislators find the Goldilocks “just right” set of regulations that facilitates competition without the pernicious effects of fragmentation (and in many ways, “fragmentation” is just a synonym for “competition”), we have to choose one or the other. My view is that messy competition is usually preferable to tidy monopoly.

The catch phrase from Lewis’s book is that the markets are rigged. As I tweeted after the 60 Minutes segment on the book, by his definition of rigging, all markets have always been rigged. A group of specialized intermediaries has always exercised substantial influence over the rules and practices of the markets, and has earned rents at the expense of investors. And I daresay it would be foolish to believe this will ever change. My view is that the competition that prevails in current markets has dissipated a lot of those rents (although some of that dissipation has been inefficient, due to arms race effects).

In sum, there doesn’t appear to be a lot new in Lewis’s book. Moreover, the morality tale doesn’t capture the true complexity of the markets generally, or HFT specifically. It has certainly resulted in the release of a lot of heat, but I don’t see a lot of light. Which is kind of fitting for a book in which a dark pool is the hero.

 


March 29, 2014

Margin Sharing: Dealer Legerdemain, or, That’s Capital, Not Collateral.

Concerns about the burdens of posting margins on OTC derivatives, especially posting by clients who tend to have directional positions, have led banks to propose “margin sharing.”  This is actually something of a scam.  I can understand the belief that margin requirements resulting from Frankendodd and Emir are burdensome, and need to be palliated, but margin sharing is being touted in an intellectually dishonest way.

The basic idea is that under DFA and Emir, both parties have to post margin.  Let’s say A and B trade, and both have to post $50mm in initial margins.  The level of margins is chosen so that the “defaulter (or loser) pays”: that is, under almost all circumstances, the losses on a defaulted position will be less than $50mm, and the defaulter’s collateral is sufficient to cover the loss.  Since either party may default, each needs to post the $50mm margin to cover losses in the event it turns out to be the loser.

But the advocates of margin sharing say this is wasteful, because only one party will default.  So the $50mm posted by the firm that doesn’t end up defaulting is superfluous.  Instead, just have the parties post $25mm each, leaving $50mm in total, which, according to the advocates of margin sharing, is what is needed to cover the cost of default.  Problem solved!

But notice the sleight of hand here.  Under the loser pays model, all the $50mm comes out of the defaulter’s margin: the defaulter pays,  the non-defaulter receives all that it is owed, and makes no contribution from its own funds.  Under the margin sharing model, the defaulter may pay only a fraction of the loss, and the non-defaulter may use some of its $25mm contribution to make up the difference.   Both defaulter and non-defaulter pay.
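A back-of-the-envelope sketch, using the margin numbers above and a hypothetical $40mm loss, shows who ends up paying under each model:

```python
# Stylized comparison of "defaulter (loser) pays" vs. the proposed margin
# sharing, using the post's $50mm/$25mm figures. The loss scenario is invented.

def loser_pays(loss, im_defaulter=50.0):
    """All of the loss comes out of the defaulter's initial margin (IM)."""
    return {"paid by defaulter": min(loss, im_defaulter),
            "paid by non-defaulter": 0.0,
            "uncovered": max(loss - im_defaulter, 0.0)}

def margin_sharing(loss, im_each=25.0):
    """The defaulter's IM goes first; any excess eats into the survivor's IM."""
    from_defaulter = min(loss, im_each)
    from_survivor = min(max(loss - im_each, 0.0), im_each)
    return {"paid by defaulter": from_defaulter,
            "paid by non-defaulter": from_survivor,
            "uncovered": max(loss - 2 * im_each, 0.0)}

loss = 40.0  # $40mm loss on the defaulted position (hypothetical)
print("loser pays:    ", loser_pays(loss))      # defaulter 40, survivor 0
print("margin sharing:", margin_sharing(loss))  # defaulter 25, survivor 15
```

Under margin sharing the non-defaulter eats $15mm of a loss it did not cause.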

This is fundamentally different from the loser pays model.  In essence, the shared margin is a combination of collateral and capital.  Collateral is meant to cover a defaulter’s market losses.  Capital permits the non-defaulter to absorb a counterparty credit loss.  Margin sharing essentially results in the holding of segregated capital dedicated to a particular counterparty.

I am not a fan of defaulter pays.  Or to put it more exactly, I am not a fan of mandated defaulter pays.  But it is better to confront the problems with the defaulter pays model head on, rather than try to circumvent it with financial doubletalk.

Counterparty credit issues are all about the mix between defaulter pays and non-defaulter pays.  Between collateral and capital.  DFA and Emir mandate a corner solution: defaulter pays.  It is highly debatable (but lamentably under-debated) whether this corner solution is best.  But it is better to have an open discussion of this issue, with a detailed comparison of the costs and benefits of the alternatives.  The margin sharing proposal blurs the distinctions, and therefore obfuscates rather than clarifies.

Call a spade a spade. Argue that there is a better mix of collateral and capital.  Argue that segregated counterparty-specific capital is appropriate.  Or not: the counterparty-specific, segregated nature of the capital in margin sharing seems for all the world to be a backhanded, sneaky way to undermine defaulter pays and move away from the corner solution.  Maybe counterparty-specific, segregated capital isn’t best: but maybe just a requirement based on a  firm’s aggregate counterparty exposures, and which doesn’t silo capital for each counterparty, is better.

Even if the end mix of capital and collateral that would result from margin sharing is better than the mandated solution, such ends achieved by sneaky means lead to trouble down the road.  It opens the door for further sneaky, ad hoc, and hence poorly understood, adjustments to the system down the line.  This increases the potential for rent seeking, and for the abuse of regulatory discretion, because there is less accountability when policies are changed by stealth.  (Obamacare, anyone?)  Moreover, a series of ad hoc fixes to individual problems tends to lead to an incoherent system that needs reform down the road, and which creates its own systemic risks.  (Again: Obamacare, anyone?)  Furthermore, the information produced in an honest debate is a public good that can improve future policy.

In other words, a rethink on capital vs. collateral is a capital idea.  Let’s have that rethink openly and honestly, rather than pretending that things like margin sharing are consistent with the laws and regulations that mandate margins, when in fact they are fundamentally different.


March 28, 2014

A Victory for Neanderthal Rights: Rusal Defeats the LME in Court. But the Neanderthal Is Still Endangered.

Filed under: Commodities,Derivatives,Economics,Politics,Russia — The Professor @ 8:01 pm

Late last year, the company of My Favorite Neanderthal, Oleg Deripaska’s Rusal, sued the London Metal Exchange, claiming that the LME’s new rules on the load-out of aluminum violated Rusal’s human rights.  Yesterday, a judge in Manchester, UK gave Oleg a victory.

Although the judge found the human rights issue “an interesting and difficult question,” he did not rule on it.  Too bad!  That could have been entertaining.

But he did hand Rusal a victory, ruling that the LME’s process in adopting the new rule was flawed (bonus SWP quote).  As a result, the LME will not implement the rule, and has to go back to the drawing board.

Until a new rule is adopted, the bottleneck in the LME aluminum warehouses (notably Metro in Detroit) will remain in place.  Premiums will remain high and volatile.

And that’s the point.  By keeping the huge stocks of aluminum that accumulated in LME warehouses during the financial crisis off the market, the bottleneck keeps the prices of aluminum ex-warehouse artificially high.  This harms consumers, but enhances producers’ profits.  Which is precisely why Rusal sued.

But the victory may well be a Pyrrhic one.  For despite the fact that the warehouse bottleneck props up aluminum prices, and despite the fact that Rusal and other producers have reduced capacity, there is still a substantial supply imbalance that has weighed on prices: due to the bottleneck, prices are higher than they would be otherwise, but they are still quite low.  As a result, Rusal just posted a whopping $3.2 billion loss.

The company is heavily indebted, and the chronic losses imperil its ability to pay this debt.  The company has been frantically negotiating with its lenders, and says that if it does not get relief it will default.  Given that Deripaska has pledged shares as collateral for some borrowings, his status as a billionaire is in jeopardy.

Deripaska has been in such straits before.  He is in some ways the Donald Trump of Russia.  Putin bailed him out in 2008/2009.  Will he do it again?


March 24, 2014

The Vertical (Silo) Bop: A Reprise

Filed under: Clearing,Commodities,Derivatives,Economics,Exchanges,Politics,Regulation — The Professor @ 7:26 pm

With all the Ukraine stuff, and Gunvor, and travel, some things got lost in my spindle.  Time to catch up.

One story is this article about a debate between NASDAQ OMX’s Robert Greifeld and CME Group’s Phupinder Gill.  The “vertical silo” in which an exchange owns both an execution venue and a clearinghouse was a matter of contention:

Nasdaq OMX Group Inc. CEO Robert Greifeld was asked yesterday about the vertical silo and whether it hurts investors.

“Monopolies are great if you own one,” he said during a panel discussion at the annual Futures Industry Association conference in Boca Raton, Florida, paraphrasing a quote he recalled hearing from an investor. His exchanges don’t use this system. “We have yet to find a customer who is in favor of the vertical model,” he said.

A very retro topic here on SWP.  I blogged about it quite a bit in 2006-2007.  Despite that, it’s still a misunderstood subject :-P

Presumably Greifeld believes that eliminating the vertical silo would open up competition in execution.  Yes, there would be competition, but the outcome would likely still be a monopoly in execution given the rules in futures markets.  Under current futures market regulations there is nothing analogous to RegNMS, which effectively socializes order flow by requiring each execution venue to direct orders to any other venue displaying a better price: there is no linkage between different execution venues, and no obligation to route orders to a better-priced market.  This leads traders to submit orders to the venue that they expect to offer the best price.  In this environment, liquidity attracts liquidity, and order flow tips exclusively to a single market.
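A toy simulation, with entirely made-up routing behavior, illustrates how quickly flow concentrates when there is no obligation to route to the better-priced venue:

```python
import random

def simulate_tipping(n_traders=10_000, noise=0.05, seed=42):
    """Two venues compete for order flow. Each arriving trader routes to the
    venue expected to offer the best price, proxied here by its share of
    accumulated volume; with probability `noise` the trader routes randomly.
    A toy illustration only: the behavioral rule and numbers are invented."""
    random.seed(seed)
    volume = [1.0, 1.0]  # seed each venue with a token amount of volume
    for _ in range(n_traders):
        if random.random() < noise:
            venue = random.randrange(2)
        else:
            venue = 0 if volume[0] >= volume[1] else 1  # ties go to venue 0
        volume[venue] += 1.0
    total = sum(volume)
    return [v / total for v in volume]

print(simulate_tipping())
# -> roughly [0.97, 0.03]: once one venue gets an edge, liquidity attracts
#    liquidity and nearly all of the flow tips to it.
```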

So opening up clearing would still result in a monopoly execution venue.  There would be competition to be the monopoly, but at the end of the day only one market would remain standing.  Most likely the incumbent (CME in most cases, ICE in some others).

It is precisely the fact that competition in clearing and execution would lead to bilateral monopolies that drives the formation of a vertical silo.  This eliminates double marginalization problems and reduces the transactions costs arising from opportunism and bargaining that are inherent to bilateral monopoly situations.

Breaking up the vertical silo primarily affects who earns the monopoly rent, and in what form. These outcomes depend on how the silo is broken up.

One alternative is to require the integrated exchange to offer access to its clearinghouse on non-discriminatory terms.  In this case, the one monopoly rent theorem implies that the clearing natural monopoly could extract the entire monopoly rent via its clearing fee.  Indeed, it would have an incentive to encourage competition in execution because this would maximize the derived demand for clearing, and hence maximize the monopoly price.  (This would also allow the integrated exchange to be compensated for its investment in the creation of new contracts, a point Gill emphasizes.  In my opinion, this is a minor consideration.)

Another alternative (which seems to be what Greifeld is advocating) would be to create a utility CCP (a la DTCC) that provides clearing services at cost.  In this case, the winning execution venue will capture the monopoly rent.

To a first approximation, market users would pay the same cost to trade under either alternative. And most likely, the dominant incumbent (CME) would capture the monopoly rent, either in execution fees, or clearing fees, or a combination of the two.  Crucially, however, total costs would arguably be higher with the utility clearer-monopoly execution venue setup, due to the transactions costs associated with coordination, bargaining, and opportunism between separate clearing and execution venues.  (Unfortunately, the phrase “transactions costs” does double duty in this context.  There are the costs that traders incur to transact, and the costs of operating and governing the trading and clearing venues.)
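That first-approximation equivalence is just the one monopoly rent theorem at work. A linear-demand sketch, with parameters invented purely for illustration, makes it explicit:

```python
def one_monopoly_rent(a, b, c_exec, c_clear, bottleneck):
    """All-in monopoly price P* and the fee split when `bottleneck` ('clearing'
    or 'execution') is the monopolized layer and the other layer is supplied
    competitively at marginal cost. Demand for trades is Q = a - b*P."""
    c = c_exec + c_clear
    p_star = (a + b * c) / (2.0 * b)   # argmax of (P - c) * (a - b*P)
    if bottleneck == "clearing":
        fees = {"execution fee": c_exec, "clearing fee": p_star - c_exec}
    else:
        fees = {"clearing fee": c_clear, "execution fee": p_star - c_clear}
    return p_star, fees

a, b, c_exec, c_clear = 100.0, 2.0, 1.0, 1.0   # purely illustrative numbers
print(one_monopoly_rent(a, b, c_exec, c_clear, "clearing"))   # open-access CCP
print(one_monopoly_rent(a, b, c_exec, c_clear, "execution"))  # utility CCP
# Same all-in price P* either way; only who books the rent changes.
```

What the sketch cannot capture is the second-order difference emphasized above: the transactions costs of coordinating separate clearing and execution monopolies.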

A third alternative would be to move to a structure like that in the US equity market, with a utility clearer and a RegNMS-type socialization of order flow.  Which would result in all the integration and fragmentation nightmares that are currently the subject of so much angst in the equity world.  Do we really want to inflict that on the futures markets?

As I’ve written ad nauseam over the years, there is no Nirvana in trading market structure.  You have a choice between inefficiencies arising from monopoly, or inefficiencies arising from fragmentation.  Not an easy choice, and I don’t know the right answer.

What I do know is that the vertical silo per se is not the problem.  The silo is an economizing response to the natural monopoly tendencies in clearing and execution (when there is no obligation to direct order flow to venues displaying better prices).  The sooner we get away from assuming differently (and the Boca debate is yet another example of our failure to do so) the sooner we will have realistic discussions of the real trade-offs in trading market structure.


A Cunning Peasant With a Battleaxe, Fighting Frankendodd

Filed under: Commodities,Derivatives,Economics,Energy,Politics,Regulation,Russia — The Professor @ 10:59 am

That would be me.  At least according to the Google Translate version of this profile of me in Neue Zürcher Zeitung.  It’s a nice piece, and a fair one (in contrast to some other articles I won’t mention).  Except I am really a mild-mannered guy.  Really!


March 11, 2014

CCP Insurance for Armageddon Time

Matt Leising has an interesting story in Bloomberg about a consortium of insurance companies that will offer an insurance policy to clearinghouses that will address one of the most troublesome issues CCPs face: what to do when the waterfall runs dry.  That is, who bears any remaining losses after the defaulters’ margins, defaulters’ default fund contributions, CCP capital, and non-defaulters’ default fund contributions (including any top-up obligation) are all exhausted.

Proposals include variation margin haircuts, and initial margin haircuts.  Variation margin haircuts would essentially reduce the amount that those owed money on defaulted contracts would receive, thereby mutualizing default losses among “winners.”  Initial margin haircuts would share the losses among both winners and losers.

Given that the “winners” include many hedgers who would have suffered losses on other positions, I’ve always found variation margin haircutting problematic: it would reduce payoffs precisely in those states of the world in which the marginal utility of those payoffs is particularly high.  But that has been the industry’s preferred approach to this problem, though it has definitely not been universally popular, to say the least.  Distributive battles are never popularity contests.

This is where the insurance concept steps in.  The insurers will cover $6 billion to $10 billion in losses (across multiple CCPs) once all other elements of the default waterfall, including non-defaulters’ default fund contributions and CCP equity, are exhausted.  This will sharply limit, and eliminate in all but the most horrific scenarios, the necessity of mutualizing losses among non-clearing members via variation or initial margin haircutting.
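In outline, and only in outline (the layer sizes below are invented, not taken from the proposal), the waterfall with the insurance layer looks like this:

```python
# Stylized CCP default waterfall with the proposed insurance layer sitting
# between the mutualized resources and any end-user haircuts. Sizes are
# hypothetical; only the ordering reflects the structure described above.

WATERFALL = [
    ("defaulter's initial margin",       2.0),   # $bn, hypothetical
    ("defaulter's default fund share",   0.5),
    ("CCP capital (skin in the game)",   0.3),
    ("survivors' default fund + top-up", 4.0),
    ("insurance layer",                  8.0),   # within the $6-10bn range
]

def allocate(loss):
    """Walk a default loss down the waterfall; whatever remains after the
    insurance layer must come from variation/initial margin haircuts."""
    print(f"\nloss = {loss} $bn")
    remaining = loss
    for name, size in WATERFALL:
        absorbed = min(remaining, size)
        remaining -= absorbed
        print(f"  {name:34s} absorbs {absorbed:5.2f}")
    print(f"  {'residual (haircuts on end users)':34s} absorbs {remaining:5.2f}")

allocate(9.0)    # the insurance layer is tapped; no haircuts needed
allocate(20.0)   # only an Armageddon-sized loss reaches the haircut stage
```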

Of course this sounds great in concept.  But one thing not discussed in the article is price.  How expensive will the coverage be?  Will CCPs find it sufficiently affordable to buy, or will they decide to haircut margins in some way instead because that is cheaper?

As I say in Matt’s article, although this proposal addresses one big headache regarding CCPs in extremis, it does not address another major concern: the wrong way risk inherent in CCPs.  Losses are likely to hit the default fund in crisis scenarios, which is precisely when the CCP member firms (banks mainly) are least able to take the hit.

It would have been truly interesting if the insurers had been willing to share losses with CCP members.  That would have mitigated the wrong way risk problem.  But the insurers were evidently not willing to do that.  This is likely because they are concerned about moral hazard: members would have less incentive to mitigate risk if some of that risk is offloaded onto insurers who don’t influence CCP risk management and margining the way member firms do.

In sum, the insurers are taking on the risk in the extreme tail.  This of course raises the question of whether they are able to bear such risk, as it is likely to crystalize precisely during Armageddon Time. The consortium attempts to allay those concerns by pointing out that they have no derivatives positions (translation: We are not AIG!!!)  But there is still reason to ponder whether these companies will be solvent during the wrenching conditions that will exist when potentially multiple CCPs blow through their entire waterfalls.

Right now this is just a proposal and only the bare outlines have been disclosed.  It will be fascinating to see whether the concept actually sells, or whether CCPs will figure it is cheaper to offload the risk in the extreme tail on their customers rather than on insurance companies in exchange for a premium.

I’m also curious: will Buffett participate?  He’s the tail risk provider of last resort, and his (hypocritical) anti-derivatives rhetoric aside, this seems like it’s right down his alley.


March 7, 2014

Clearing Risks, Kimchi Edition

Filed under: Clearing,Derivatives,Economics,Regulation — The Professor @ 5:32 pm

Major banks are having major concerns about the risks associated with CCPs in the aftermath of a failure of  a Korean brokerage firm that resulted in the mutualization of losses on the KRX.  The firm failed due to a fat-finger error (puts? calls? whatever!) and its margins were insufficient to cover its trading losses.

This experience is making CCP member firms re-evaluate the risks of CCPs, the risk controls implemented by CCPs, and the incentives of CCPs to control risk.   They realize that CCPs do not eliminate counterparty risk so much as redistribute it.  They are concerned about the incentives that CCPs have to manage those risks unless they have substantial exposure to them (“skin in the game”).

But here’s the thing.  This particular sequence of events is exactly what CCPs are best suited to handle: the insuring of idiosyncratic risks.  In this instance, the idiosyncratic risk was a random operational error at a single brokerage.

But that’s not why the G20 advocated clearing mandates.  The G20 went down the clearing path to mitigate systemic risk, which occurs when a shock hits many institutions simultaneously.  That is a very different beast indeed.

If banks lie awake at night worrying about the incentives of CCPs to mitigate idiosyncratic risks, they should never sleep at all when they think about systemic risk.  CCPs are ill-adapted to handle systemic risk precisely because risk pooling/diversification works for idiosyncratic risks, but not systematic ones.
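The textbook diversification arithmetic makes the point; this is a standard result, not anything specific to the KRX episode:

```latex
% N member exposures share a common (systematic) factor m and have
% idiosyncratic shocks e_i, independent of each other and of m:
%   x_i = m + e_i,  Var(m) = \sigma_m^2,  Var(e_i) = \sigma^2.
\[
  \operatorname{Var}\!\Big(\frac{1}{N}\sum_{i=1}^{N} x_i\Big)
  = \sigma_m^2 + \frac{\sigma^2}{N}
  \;\longrightarrow\; \sigma_m^2 \quad \text{as } N \to \infty .
\]
% Pooling drives the idiosyncratic piece to zero (the fat-finger case), but
% the systematic piece is irreducible, and that is what bites in a crisis.
```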

Put differently: the failure of the Korean brokerage did not create a wrong way risk.  The failure did not cause a hit to the default fund when the non-defaulting members were under financial strain.  But defaults that occur during a crisis situation are laden with wrong way risk: losses are mutualized precisely when the members of the default fund are least able to bear them.

So if banks are concerned about the potential for a CCP failure in response to a bad realization of an idiosyncratic risk, think about the appropriate level of concern about the viability of CCPs in response to systematic risks.

A bite of kimchi can have a truly bracing effect. Would that regulators around the world took heed of the lessons of recent events from the land of kimchi.


March 4, 2014

Derivatives Priorities in Bankruptcy: A Hobson’s Choice?

And now for something completely different . . . finance.  (More Russia/Ukraine later.)

The Bank of England wants to put a stay on derivatives contracts entered into by an insolvent bank, thereby negating some of the priorities in bankruptcy accorded to derivatives counterparties:

The U.K. central bank wants lenders and the International Swaps and Derivatives Association Inc., an industry group, to agree to temporarily halt claims on banks that become insolvent and need intervention, Andrew Gracie, executive director of the BOE’s special resolution unit, said in an interview.

“The entry of a bank into resolution should not in itself be an event of default which allows counterparties to start accelerating contracts and triggering cross-defaults,” Gracie said. “You would get what you saw in Lehmans — huge amounts of uncertainty and an uncontrolled cascade of closeouts and cross defaults in the market.”

The priority status of derivatives trades is problematic at best: although it increases the fraction of the claims that derivatives counterparties receive from a bankrupt bank, this effect is primarily redistributive.  Other creditors receive less.  On the plus side, in the absence of priorities, counterparties could be locked into contracts entered into as hedges that are of uncertain value and which may not pay off for some time.  This complicates the task of replacing the hedge entered into with the bankrupt bank.   On balance, given the redistributive nature of priorities, and the fact that some of those who lose due to the fact that derivatives are privileged may be systemically important or may run, there is something to be said for this change.
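A toy carve-up of a failed bank’s estate, with numbers of my own invention, shows how much of the priority effect is purely redistributive:

```python
def recoveries(assets, deriv_claims, other_claims, deriv_priority):
    """Stylized division of a failed bank's estate. Hypothetical numbers;
    this illustrates the redistributive point, not actual bankruptcy law."""
    if deriv_priority:
        to_deriv = min(assets, deriv_claims)
        to_other = min(assets - to_deriv, other_claims)
    else:  # simple pro rata split
        share = assets / (deriv_claims + other_claims)
        to_deriv, to_other = share * deriv_claims, share * other_claims
    return {"derivatives recovery rate": to_deriv / deriv_claims,
            "other creditors' recovery rate": to_other / other_claims}

print(recoveries(100, 60, 90, deriv_priority=True))   # derivatives 100%, others ~44%
print(recoveries(100, 60, 90, deriv_priority=False))  # both ~67%
```

The pie is the same size; priority just decides who eats it.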

But the redistributive nature of priorities makes me skeptical that this will really have that much effect on whether a bank gets into trouble in the first place.  In particular, since runs and liquidity crises are what really threatens the stability of banks, the change of priorities likely will mainly just affect who has the incentive to run on a troubled institution, without affecting all that much the overall probability of a run.

Under the current set of priorities, derivatives counterparties have an incentive to stick longer with a troubled bank, because in the event it becomes insolvent they have a priority claim.  But this makes other claimants on a failing bank more anxious to run, because they know that if the bank does fail, derivatives counterparties will get the lion’s share of the remaining assets.  Reducing the advantages that derivatives counterparties enjoy makes them more likely to run and pull value from the failing firm, while other claimants become less likely to run than under the current regime.  (Duffie’s book on the failure of an OTC derivatives dealer shows how derivatives counterparties can effectively run.)

In other words, in terms of affecting the vulnerability of a bank to a destabilizing run, the choice of priorities is something of a Hobson’s choice.  It affects mainly who has an incentive to run, rather than the likelihood of a run overall.

The BoE’s initiative seems to be symptomatic of something I’ve criticized quite a bit over the past several years: the tendency to view derivatives in isolation.  The triggering of cross-defaults and the acceleration of contracts are a problem because they can hasten the collapse of a shaky bank.  So fix that, and banks become more stable, right?  Maybe not, because the fix changes the behavior and decisions of others who can also bring down a financial institution.  This is why I am skeptical that these sorts of changes will affect the stability of banks much one way or the other.  They might affect where a fire breaks out, but not the likelihood of a fire overall.


January 28, 2014

Were the Biggest Banks Playing Brer Rabbit on the Clearing Mandate, and Was Gensler Brer Fox?

Filed under: Clearing,Derivatives,Economics,Exchanges,Financial crisis,Politics,Regulation — The Professor @ 10:25 pm

One interesting part of the Cœuré speech was his warning that the clearing business was coming to be dominated by a few large banks that are members of multiple CCPs:

Moreover, it appears that for many banks, indirect access is their preferred way to get access to clearing services so as to comply with the clearing obligation. Client clearing seems thus to be dominated by a few large global intermediaries. A factor contributing to this concentration may be higher compliance burdens, where only the very largest of firms are capable of taking on cross-border activity. This concentration creates a higher degree of dependency on this small group of firms.

There are also concerns about client access to this limited number of firms offering client clearing services. For example, there is some evidence of clearing firms “cherry picking” clients, while other end-users are commercially unattractive customers and hence unable to access centrally cleared markets.

These are all developments that I believe the international regulatory community may wish to carefully monitor and act on as and when needed.

And wouldn’t you know it: he supports a longstanding SWP theme, that Frankendodd, EMIR, and Basel create a huge regulatory burden that is essentially a fixed cost.  This increase in fixed costs raises scale economies, which inevitably leads to an increase in concentration (and arguably a reduction in competition) in the provision of clearing services.

It now seems rather quaint that there was a debate over whether CCPs should be required to lower the minimum capital threshold for membership to $50 million.  That’s not the barrier to entry/participation.  It’s the regulatory overhead.

It’s actually an old story.  I remember a Maloney and McCormick paper from the 80s (hell, maybe even the late 70s) about the effects of the regulation of particulates in textile factories (if I recall).  The cost of complying with the regulation was essentially fixed, so the rule favored big firms: it raised the costs of their smaller rivals, led to their exit, and resulted in higher prices from which the big firms profited.  Similarly, I recall that several papers by the late Peter Pashigian (a member of my PhD committee) found that environmental regulations favored large firms.

The Cœuré speech suggests this may be happening here: note the part about client access to a “limited number of clearing firms.”

And it’s not just pipsqueaks that are exiting the clearing business.  The largest custodian bank, BNY Mellon, is closing up shop:

More banks are expected to follow BNY Mellon’s lead and pull out of client clearing, as flows have concentrated among half a dozen major players following the roll-out of mandatory clearing in the US last year.

The decision of the world’s largest custodian bank to shutter its US clearing unit was the first real indication of how much institutions are struggling with spiralling costs and complexity associated with clearing clients’ swaps trades – a business once viewed as the cash cow of the new regulatory regime.

You might recall that BNY Mellon was one of the firms that complained loudest about the high capital requirements of becoming a member of ICE Trust and LCH.  Again: it’s not the CCP capital requirements that are the issue.  It’s the other substantial costs of providing client clearing services, and regulatory/compliance costs are a big part of them.

Ah yes, another Gensler argument down in flames.  Remember how he constantly told us (lectured us, actually) that Frankendodd would dramatically increase competition in derivatives?  That it would break the dealer hammerlock on the OTC market?

Remember how I called bull?

Whose call looks better now?  Sometimes I wonder if JP Morgan, Goldman, Barclays, etc., weren’t playing the role of Brer Rabbit, and Gensler was playing Brer Fox. For he done trown dem into dat brer patch, sure ’nuff.

Though it must be said that this was not Gensler’s biggest contribution to reducing competition in derivatives markets in the name of increasing competition.  His insane extraterritoriality decisions have fragmented the OTC derivatives markets, with Europeans reluctant to trade with Americans.  The fragmentation of the markets reduces counterparty choice in both Europe and the US, thereby limiting competition.

This is not just a matter of competition.  There are systemic issues involved as well, and these also make a mockery of the Frankendodd evangelists.  They assured the world that Frankendodd and clearing mandates would reduce reliance on a few large, highly interconnected intermediaries in the derivatives markets. That is proving to be another lie, on the order of “if you like your health plan, you can keep your health plan.”  The old system relied on a baker’s dozen or so large, highly interconnected dealers.  The new system will rely on probably a handful or two large, highly interconnected clearing firms.

The most important elements in the clearing system are a small number of major banks that are clearing members at several global CCPs.  The failure or financial distress of any one of these would wreak havoc in the derivatives markets and the clearing mechanism, just as the failure of a major dealer firm would shake the bilateral OTC markets to the core.

Just think about one issue: portability.  If there are only a small number of huge clearing firms, is it really feasible to port the clients of one of them to the few remaining CMs, especially during times of market stress when these might not have the capital to take on a large number of new clients?

What happens then?

I don’t want to think about it: there’s only so much I can handle.

But Cœuré assures us the regulators are on top of it.  Or at least they are thinking about getting on top of it: “the international regulatory community may wish to carefully monitor and act on as and when needed.”  “May wish to act as needed.”  Sure. Take your time! What’s the hurry? What’s the worry?

I won’t dwell on the  irony of those who advocated the measures that got us into this situation pulling their chins and telling us this might be a matter of concern, especially since they were deaf to warnings made back when they could have avoided leading us down the path that led us to this oh-so-predictable destination.


