Streetwise Professor

August 26, 2014

Merkel to Ukraine: Here’s Your Hat. What’s Your Hurry?

Filed under: Economics,Politics,Regulation,Russia — The Professor @ 8:50 pm

To compound Merkel’s obsequiousness to Putin, and her pushing of Ukraine into his embrace, she broadly hinted that Ukraine should join Putin’s pet project, the Eurasian Union. Since Putin has made it clear that membership in the Eurasian Union and the Real EU are mutually exclusive, this is tantamount to turning Europe’s back on Ukraine and leaving it to Putin’s tender mercies.

Merkel’s remarks make it clear that her primary motive for abandoning Ukraine to Putin is to keep good relations with Russia, and to avoid riling Vlad:

“And if Ukraine says we are going to the Eurasian Union now, the European Union would never make a big conflict out of it, but would insist on a voluntary decision,” Merkel added.

“I want to find a way, as many others do, which does not damage Russia. We [Germany] want to have good trade relations with Russia as well. We want reasonable relations with Russia. We are depending on one another and there are so many other conflicts in the world where we should work together, so I hope we can make progress.”

Nauseating. As my grandfather said about a hostess trying to hint to a guest who had overstayed his welcome that it was time to leave: “Here’s your hat, Bob. Why are you in such a hurry to leave?”

For his part, during the Eurasian Union summit in Minsk, Putin made it clear that Ukraine had to choose between one EU or the other, and if it chose wrong, Russia would punish it. This is the fate that Merkel is willing to consign Ukraine to, so that Siemens can continue to sell to Russia, and Adidas can provide all the track suits that the gopniks desire:

In his public comments, Mr. Putin highlighted the dangers he said Russia faces if Ukraine pursues closer ties to the West. Since the onset of the crisis, Mr. Putin has accused the West of meddling in Ukraine’s internal affairs and trying to spoil its relations with Moscow.

Mr. Putin said that a trade agreement between Kiev and Europe will flood the Ukrainian market with European goods, which may then find their way into Russia. “In this situation Russia cannot stand idle. And we will be prompted…to take retaliatory measures, to protect our market,” Mr. Putin said.

The interesting thing about this is what it betrays about Putin’s view of Russian competitiveness. Yes, Russia is so great. Russia is so strong. Russia is a beacon to the world. But it can’t produce things its own people and businesses want.

Note the phrase: “take retaliatory measures, to protect our market.” Remind me again: didn’t Russia join the WTO? Apparently Putin is unclear on the concept.

Further note whom Putin is protecting Russian markets against: Europe. Apparently Angela is unclear on some concepts too.

Putin and many (most?) other Russians inveigh against Russophobia. When he says things like that, it’s hard to think of a bigger Russophobe than Putin. He evidently does believe that Russians are inferior, and in need of protection.


August 17, 2014

This Never Happens, Right?: Regulators Push a Flawed Solution

Filed under: Clearing,Derivatives,Economics,Politics,Regulation — The Professor @ 6:06 pm

Regulators are pushing ISDA and derivatives market participants really hard to incorporate a stay on derivatives trades of failing SIFIs. As I wrote a couple of weeks ago, this is a problem if bankruptcy law involving derivatives is not changed because the prospect of having contracts stayed, and thus the right of termination abridged, could lead counterparties to run from a weak counterparty before it actually defaults. This is possible if derivatives remain immune from fraudulent conveyance or preference claims.

Silla Brush, who co-wrote an article about the issue in Bloomberg, asked me a good question via Twitter: why should derivatives counterparties run, if they are confident that their positions with the failing bank will be transferred to a solvent one during the resolution process?

I didn’t think of the answer on the fly, but upon reflection it’s pretty clear. If counterparties were so confident that such a transfer would occur, a stay would be unnecessary: they would not terminate their contracts, but would breathe a sigh of relief and wait patiently while the transfer took place.

If regulators think a stay is necessary, it is because they fear that counterparties would prefer to terminate their contracts than await their fate in a resolution.

So a stay is either a superfluous addition to the resolution process, or imposes costs on derivatives counterparties who lack confidence in that process.

If this is true, the logic I laid out before goes through. If a stay is imposed, and market participants would prefer to terminate rather than live with the outcome of a resolution process, they have an incentive to run from a failing bank, finding a way to get out of their derivatives positions and recover their collateral.

This can actually precipitate the failure of a weak bank.

I say again: constraining the actions of derivatives counterparties at the time of default can have perverse effects if their actions prior to default are not constrained.

This means that you need to fix bankruptcy rules regarding derivatives in a holistic way. And this is precisely the problem. Despairing at their ability to achieve a coherent, systematic fix of bankruptcy law in the present political environment, regulators are trying to implement piecemeal workarounds. But piecemeal workarounds create more problems than they correct.

But of course, the regulators pressing for this are pretty much the same people who rushed clearing mandates and other aspects of Frankendodd into effect without thinking through how things would work in practice.


Nationalize the Clearinghouses?

Filed under: Clearing,Commodities,Derivatives,Economics,Politics,Regulation — The Professor @ 3:48 pm

Stephen Lubben has garnered a lot of attention with his recent paper “Nationalize the Clearinghouses.” Don’t get nervous, CME, ICE, LCH: he doesn’t mean now, but in the event of your failure.

A few brief comments.

First, I agree-obviously, since I’ve been saying this going back to the 90s-that the failure of a big CCP would be a catastrophic systemic event, and that failure is an event of positive probability. Thus, planning for this contingency is essential. Second, I further agree that establishing a procedure that lays out in advance what will be done upon the failure of a CCP is vital, and that leaving things to be handled in an ad hoc way at the time of failure is a recipe for disaster (in large part because of how market participants would respond to the uncertainty when a CCP teeters on the brink). Third, it is evident that CCPs do not fit into the recovery and resolution schemes established for banks under Frankendodd and EMIR. CCPs are very different from banks, and a recovery or resolution mechanism designed for banks would be a bad, bad fit for clearers.

Given all this, temporary nationalization, with a pre-established procedure for subsequent privatization, is reasonable. This would ensure continuity of operations of a CCP, which is essential.

It’s important not to exaggerate the benefits of this, however. Stephen states: nationalization “should provide stakeholders in the clearinghouses with strong incentives to oversee the clearinghouse’s management, and avoid such a fate.” I don’t think that the ex ante efficiency effects of nationalization will be that large. After all, nationalization would occur only after the equity of the CCP (which is pretty small to begin with) is wiped out, and the default fund plus additional assessments have been blown through. Shooting/nationalizing a corpse doesn’t have much of an incentive effect on the living ex ante.
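To make the ordering concrete, here is a minimal sketch of a stylized default waterfall. The layer sizes are invented for illustration-this is not any actual CCP’s rulebook-but the ordering follows the standard structure: nationalization only enters once every layer below it is exhausted.

    # A minimal sketch of a stylized CCP default waterfall. Layer sizes are
    # invented for illustration; the ordering follows the standard structure.
    def absorb_loss(loss, layers):
        """Run a loss through ordered waterfall layers; return the uncovered residual."""
        for name, size in layers:
            absorbed = min(loss, size)
            loss -= absorbed
            print(f"{name}: absorbs {absorbed}")
        return loss

    layers = [
        ("Defaulter's initial margin", 500),
        ("Defaulter's default fund contribution", 100),
        ("CCP equity ('skin in the game')", 50),  # small relative to exposures
        ("Surviving members' default fund contributions", 400),
        ("Assessments (capped at original contributions)", 400),
    ]
    residual = absorb_loss(2000, layers)
    print(f"Uncovered loss that nationalization would have to absorb: {residual}")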

Stephen recommends that, upon nationalization, CCP memberships be canceled. This is superfluous, given the setup of CCPs. Many CCPs require members to meet an assessment call up to the amount of their original contribution to the default fund. Once they have met that call, they can resign from the CCP: that’s when the CCP gives up the ghost. Thus, a CCP fails when members exercise their option to check out. There are no memberships to cancel in a failed CCP.

Lubben recommends that there be an “expectation of member participation in the recapitalization of the clearinghouse, once that becomes systemically viable.” In effect, this involves the creation of a near-unlimited liability regime for CCP members. The existing regime (which involves assessment rights, typically capped at the original default fund contribution amount) goes beyond traditional limited liability, but not all the way to a Lloyd’s of London-like unlimited liability regime. Telling members that they will be “expected” to recapitalize a CCP (which has very Don Corleone-esque overtones) essentially means that membership in a CCP requires a bank/FCM to undertake an unlimited exposure, and to provide capital at times when they are likely to be very stressed.

This is problematic in the event, and ex ante.

Stephen qualifies the recapitalization obligation (excuse me, “expectation”) with “once that becomes systemically viable.” Well, that could be a helluva long time, given that the failure of a CCP will be triggered by the failure of two or more systemically important financial institutions. (And let’s not forget that because FCMs are members of multiple clearinghouses, multiple simultaneous failures of CCPs are a very real possibility: indeed, there is a huge correlation risk here, meaning that surviving members are likely to be expected to recapitalize multiple CCPs.) Thus, even if the government keeps a CCP from failing via nationalization, the entities that it expects to recapitalize the seized clearinghouse will almost certainly be in dire straits themselves at this juncture. A realistic nationalization plan must therefore recognize that the government will be bearing counterparty risk for the CCP’s derivatives trades for some considerable period of time. Nationalization is not free.

Ex ante, two problems arise. First, the prospect of unlimited liability will make banks very reluctant to become members of CCPs. Nationalization plus a recapitalization obligation is the wrong-way risk from hell: banks will be expected to pony up capital precisely when they are in desperate straits. My friend Blivy jokingly asked whether there will soon be more CCPs than clearing firms. An “expectation” of recapitalizing a nationalized CCP is likely to make that a reality, rather than a joke.

Second, the nationalization scheme creates a moral hazard. Users of CCPs (i.e., those trading cleared derivatives) will figure that they will be made whole in the event of a failure: the government and eventually the (coerced) banks will make the creditors of the CCP whole. They thus have less incentive to monitor a CCP or the clearing members.

Thus, other issues have to be grappled with. Specifically, should there be “bail-ins” of the creditors of a failed CCP, most notably through variation margin haircutting? Or should there be initial margin haircutting, which would intensify the incentives to monitor (as well as spread the default risk more broadly, and not force it disproportionately on those receiving VM payments, who are likely to be hedgers)? Hard questions, but ones that need to be addressed.
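For concreteness, here is a minimal sketch of pro-rata variation margin haircutting-my own stylization, not any CCP’s actual loss-allocation rule. It shows why the burden falls entirely on members owed VM (often hedgers), while VM payers are untouched.

    # Sketch of pro-rata variation margin haircutting (a stylization, not any
    # CCP's actual rule). A shortfall is allocated across members the CCP owes
    # VM to, in proportion to amounts owed; members paying VM in are untouched.
    def vm_haircut(vm_owed, shortfall):
        """vm_owed: member -> VM owed by the CCP. Returns post-haircut payments."""
        total = sum(vm_owed.values())
        ratio = max(0.0, 1 - shortfall / total) if total else 0.0
        return {member: amount * ratio for member, amount in vm_owed.items()}

    # A 250 shortfall against 1000 of VM owed: each receiver gets 75 cents
    # on the dollar, regardless of whether they are hedgers or speculators.
    print(vm_haircut({"A": 600, "B": 300, "C": 100}, shortfall=250))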

It is good to see that serious people like Stephen are now giving serious consideration to this issue. It is unfortunate that the people responsible for mandating clearing didn’t give these issues serious consideration when rushing to pass Frankendodd and EMIR.

Again: legislate in haste, repent at leisure.

July 30, 2014

ISDA Should Stay Its Hand, Not Derivatives In Bankruptcy

Filed under: Derivatives,Economics,Financial crisis,Regulation — The Professor @ 8:39 pm

I’ve been meaning to write about how derivatives are treated in bankruptcy, but it’s a big topic and I haven’t been able to get my hands around it. But this article from Bloomberg merits some comment, because it suggests that market participants, led by ISDA, are moving to a partial change that could make things worse if the bankruptcy code treatment of derivatives remains the same.

Derivatives benefit from a variety of “safe harbors” in bankruptcy. They are treated very differently than other financial contracts. If a firm goes bankrupt, its derivatives counterparties can offset winning against losing trades, and determine a net amount. In contrast, with normal debts, such offsets are not permitted, and the bankruptcy trustee can “cherry pick” by not performing on losing contracts and insisting on performance on winning ones. Derivatives counterparties can immediately access the collateral posted by a bankrupt counterparty. Other secured creditors do not have immediate access to collateral. Derivatives counterparties are not subject to preference or fraudulent conveyance rules: the bankruptcy trustee can claw back cash taken out of a firm up to 90 days prior to its bankruptcy, except in the case of cash taken by derivatives (and repo) counterparties. Derivatives counterparties can immediately terminate their trades upon the bankruptcy of a trading partner, collect collateral to cover the bankrupt’s obligations, and become unsecured creditors on the remainder.
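A small numerical sketch, with hypothetical trade values, illustrates the difference between close-out netting and cherry picking:

    # Hypothetical mark-to-market values of trades with a bankrupt counterparty,
    # from the solvent counterparty's perspective (+ = owed to it, - = owed by it).
    trades = [+40, -25, +10, -5]

    # With the safe harbor: offset winners against losers, one net claim.
    net_claim = sum(trades)  # +20

    # Without it, the trustee could cherry pick: enforce the trades favoring
    # the estate, leaving the counterparty an unsecured creditor on the rest.
    owed_to_estate = -sum(t for t in trades if t < 0)  # 30, enforced in full
    unsecured_claim = sum(t for t in trades if t > 0)  # 50, pennies on the dollar

    print(net_claim, owed_to_estate, unsecured_claim)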

It is this ability to terminate and grab collateral that proved so devastating to Lehman in 2008. Cash is a vital asset for a financial firm, and any chance Lehman had to survive or be reorganized disappeared with the cash that went out the door when derivatives were terminated and collateral seized. It is this problem that ISDA is trying to fix, by writing a temporary stay on the ability of derivatives counterparties to terminate derivatives contracts of a failed firm into standard derivatives contract terms.

That sounds wonderful, until you go back to previous steps in the game tree. The new contract term affects the calculations of derivatives counterparties before a tottering firm actually declares bankruptcy. Indeed, as long as preference/fraudulent conveyance safe harbor remains, the new rule actually increases the incentives of the derivatives counterparties to run on a financially distressed, but not yet bankrupt, firm. This increases the likelihood that a distressed firm actually fails.

The logic is this. If the counterparties keep their positions open until the firm is bankrupt, the stay prevents them from terminating their positions, and they are at the mercy of the resolution authority. They are at risk of not being able to get their collateral immediately. However, if they use some of the methods that Duffie describes in How Big Banks Fail, derivatives counterparties can reduce their exposures to the distressed firm before it declares bankruptcy, and crucially, get their hands on their collateral without having to worry about a stay, or having the money clawed back as a preference or fraudulent conveyance.

Thus, staying derivatives in a bankrupt firm, but retaining the safe harbor from preference/fraudulent conveyance claims, gives derivatives counterparties an incentive to run earlier. Under the contracts with the stay, they are in a weaker position if they wait until a formal insolvency than they are under the current way of doing business. They therefore are more likely to run early if derivatives are stayed.
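A stylized payoff comparison, with invented numbers, captures the incentive:

    # Stylized payoffs (all numbers invented) for a counterparty of a distressed
    # firm, given the preference/fraudulent conveyance safe harbor (early exits
    # cannot be clawed back). Claim of 100; exiting early costs 2 in unwind costs.
    p_default = 0.5        # chance the distressed firm actually fails
    recovery_if_stayed = 0.80  # fraction recovered in resolution if stayed

    run_early = 100.0 - 2.0                                  # 98: keep it all, minus costs
    wait_no_stay = 100.0                                     # terminate at default, grab collateral
    wait_with_stay = (1 - p_default) * 100.0 + p_default * recovery_if_stayed * 100.0  # 90

    print(run_early, wait_no_stay, wait_with_stay)
    # Without a stay, waiting (100) beats running (98); with one, running (98)
    # beats waiting (90): the stay itself strengthens the incentive to run early.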

This means that this unbalanced change in the terms of derivatives contracts actually increases the likelihood that a financial firm with a large derivatives book will implode due to a run by its counterparties. The stay may make things better conditional on being in bankruptcy, but increase the likelihood that a firm will default. This is almost certainly a bad trade-off. We want rules that reduce the likelihood of runs. This combination of contract terms and bankruptcy rules increases the likelihood of runs.

This illustrates the dangers of piecemeal changes to complex financial systems. Strengthening one part can make the entire system more vulnerable to failure. Changing one part affects how the other parts work, and not always for the better.

Rather than fixing single parts one at a time, it is essential to recognize the interdependencies between the pieces. The bankruptcy rules have a lot of interdependencies. Indeed, the rules on preferences/fraudulent conveyance are necessary precisely because of the perverse incentives that would exist prior to bankruptcy if claims are stayed in bankruptcy, but creditors can get their money out of a firm before bankruptcy. Stays alone can make things worse if the behavior of creditors prior to a formal filing is not constrained. All the pieces have to fit together.

The Bloomberg article notes that the international nature of the derivatives business complicates the job of devising a comprehensive treatment of derivatives in bankruptcy: harmonizing bankruptcy laws across many countries is a nightmare. But the inability to change the entire set of derivatives-related bankruptcy rules doesn’t mean that fixing one aspect of them by a contractual change makes things better. It can actually make things worse.  It is highly likely that imposing a stay in bankruptcy, but leaving the rest of the safe harbors intact, will do exactly that.

ISDA appears to want to address in the worst way the problems that derivatives can cause in bankruptcy. And unfortunately, it just might succeed. ISDA should stay its hand, and not derivatives in bankruptcy, unless other parts of the bankruptcy code are adjusted in response to the new contract term.


July 29, 2014

The FUD Factor At Work

Filed under: Commodities,Economics,Energy,Politics,Regulation,Russia — The Professor @ 9:35 am

Going back to the original round of sanctions, I have been arguing that the terms of US sanctions have been left deliberately vague in order to make  banks and investors very cautious about dealing with sanctioned firms. Spreading fear, uncertainty, and doubt-FUD-leverages the effect of sanctions.

When I read the last round of sanctions, I had many questions, and hence many doubts about actually how far the sanctions would reach. I was not alone. Professionals-lawyers at banks and Wall Street law firms-are also uncertain:

But compliance officers at some U.S. banks and broker-dealers say the sanctions, issued by Treasury’s Office of Foreign Assets Control (OFAC), are not clear enough. That has left financial institutions guessing, in certain instances, at how to comply. They worry they are vulnerable to punitive action by U.S. regulators.

Fear, uncertainty, and doubt, all in one paragraph. The fear part is particularly interesting, and quite real, especially in the aftermath of the truly punitive action by U.S. regulators in the BNP-Paribas case.

OFAC-the Office of Foreign Assets Control, which is in charge of overseeing the sanctions-is in no hurry to clarify matters:

Another senior compliance officer at a major U.S. bank said bankers “are frustrated that OFAC is not providing more guidance.”

The day after the sanctions were issued, OFAC held a conference call with hundreds of financial services industry professionals in an effort to answer concerns. Although some issues were cleared up, others were left undecided, said two sources who were on the call.

Dear Mr. Senior Compliance Officer: that’s on purpose. Believe me.

A new round of sanctions may be imminent. I am hoping to be proven wrong in my forecasts, because reports are that the Europeans are going to do something serious. Add serious doubts to serious action, and American and European banks won’t touch most Russian banks or major companies with a 10-foot pole while wearing a hazmat suit. That will cause some major economic problems for Putin and Russia. Not 1998-magnitude problems, but maybe something bordering on 2008 problems, although a $100+ oil price will help contain the damage, despite the added difficulty sanctions will create for the Russians in cashing the checks for that oil.

Then it will be Vlad’s move. What that move will be, I do not know.


July 25, 2014

Benchmark Blues

Pricing benchmarks have been one of the casualties of the financial crisis. Not because the benchmarks-like Libor, Platts’ Brent window, ISDA Fix, the Reuters FX window or the gold fix-contributed in any material way to the crisis. Instead, the post-crisis scrutiny of the financial sector turned over a lot of rocks, and among the vermin crawling underneath were abuses of benchmarks.

Every major benchmark has fallen under deep suspicion, and has been the subject of regulatory action or class action lawsuits. Generalizations are difficult because every benchmark has its own problems. It is sort of like what Tolstoy said about unhappy families: every flawed benchmark is flawed in its own way. Some, like Libor, are vulnerable to abuse because they are constructed from the estimates/reports of interested parties. Others, like the precious metals fixes, are problematic due to a lack of transparency and limited participation. Declining production and large parcel sizes bedevil Brent.

But some basic conclusions can be drawn.

First-and this should have been apparent in the immediate aftermath of the natural gas price reporting scandals of the early 2000s-benchmarks based on the reports of self-interested parties, rather than actual transactions, are fundamentally flawed. In my energy derivatives class I tell the story of AEP, which the government discovered kept a file called “Bogus IFERC.xls” (IFERC being an abbreviation for Inside FERC, the main price reporting publication for gas and electricity) that included thousands of fake transactions that the utility reported to Platts.

Second, and somewhat depressingly, although benchmarks based on actual transactions are preferable to those based on reports, in many markets the number of transactions is small. Even if transactors do not attempt to manipulate, the limited number of transactions tends to inject some noise into the benchmark value. What’s more, benchmarks based on a small number of transactions can be influenced by a single trade or a small number of trades, thereby creating the potential for manipulation.

I refer to this as the bricks without straw problem. Just as the Jews in Egypt were confounded by Pharaoh’s command to make bricks without straw, modern market participants are stymied in their attempts to create benchmarks without trades. This is a major problem in some big markets, notably Libor (where there are few interbank unsecured loans) and Brent (where large parcel sizes and declining Brent production mean that there are relatively few trades: Platts has attempted to address this problem by expanding the eligible cargoes to include Ekofisk, Oseberg, and Forties, and by making some baroque adjustments based on CFD and spread trades and monthly forward trades). This problem is not amenable to an easy fix.

Third, and perhaps even more depressingly, even transaction-based benchmarks derived from markets with a decent amount of trading activity are vulnerable to manipulation, and the incentive to manipulate is strong. Some changes can be made to mitigate these problems, but they can’t be eliminated through benchmark design alone. Some deterrence mechanism is necessary.

The precious metals fixes provide a good example of this. The silver and gold fixes have historically been based on transaction prices from an auction that Walras would recognize. But participation was limited, and some participants had the market power and the incentive to use it, and evidently pushed prices to benefit related positions. For instance, in the recent allegation against Barclays, the bank could trade in sufficient volume to move the fix price enough to benefit related positions in digital options. When there is a large enough amount of derivatives positions with payoffs tied to a benchmark, someone has the incentive to manipulate that benchmark, and many have the market power to carry out those manipulations.

The problems with the precious metals fixes have led to their redesign: a new silver fix method has been established and will go into effect next month, and the gold fix will be modified, probably along similar lines. The silver fix will replace the old telephone auction that operated via a few members trading on their own account and representing customer orders with a more transparent electronic auction operated by CME and Reuters. This will address some of the problems with the old fix. In particular, it will reduce the information advantage that the fixing dealers had that allowed them to trade profitably on other markets (e.g., gold futures and OTC forwards and options) based on the order flow information they could observe during the auction. Now everyone will be able to observe the auction via a screen, and will be less vulnerable to being picked off in other markets. It is unlikely, however, that the new mechanism will mitigate the market power problem. Big trades will move markets in the new auction, and firms with positions that have payoffs that depend on the auction price may have an incentive to make those big trades to advantage those positions.

Along these lines, it is important to note that many liquid and deep futures markets have been plagued by “bang the close” problems. For instance, Amaranth traded large volumes in the settlement period of expiring natural gas futures during three months of 2006 in order to move prices in ways that benefited its OTC swaps positions. The CFTC recently settled with the trading firm Optiver that allegedly banged the close in crude, gasoline, and heating oil in March, 2007. These are all liquid and deep markets, but are still vulnerable to “bullying” (as one Optiver trader characterized it) by large traders.

The incentives to cause an artificial price for any major benchmark will always exist, because one of the main purposes of benchmarks is to provide a mechanism for determining cash flows for derivatives. The benchmark-derivatives market situation resembles an inverted pyramid, with large amounts of cash flows from derivatives trades resting on a relatively small number of spot transactions used to set the benchmark value.

One way to try to ameliorate this problem is to expand the number of transactions at the point of the pyramid by expanding the window of time over which transactions are collected for the purpose of calculating the benchmark value: this has been suggested for the Platts Brent market, and for the FX fix. A couple of remarks. First, although this would tend to mitigate market power, it may not be sufficient to eliminate the problem: Amaranth manipulated a price that was based on a VWAP over a relatively long 30-minute interval. In contrast, in the Moore case (a manipulation case involving platinum and palladium brought by the CFTC) and Optiver, the windows were only two minutes long. Second, there are some disadvantages to widening the window. Some market participants prefer a benchmark that reflects a snapshot of the market at a point in time, rather than an average over a period of time. This is why Platts vociferously resists calls to extend the duration of its pricing window. There is a tradeoff in sources of noise. A short window is more affected by the larger sampling error inherent in the smaller number of transactions that occur in a shorter interval, and by the noise resulting from greater susceptibility to manipulation when a benchmark is based on a smaller number of trades. However, an average taken over a time interval is a noisy estimate of the price at any point in time during that interval, due to the random fluctuations in the “true” price driven by information flow. I’ve done some numerical experiments, and either the sampling error/manipulation noise has to be pretty large, or the volatility of the “true” price must be pretty low, for it to be desirable to move to a longer interval.
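For the curious, here is a minimal sketch of that kind of experiment-a reconstruction of the tradeoff, not the original code-assuming a Gaussian random-walk “true” price and Gaussian sampling/manipulation noise on the snapshot:

    # Reconstruction of the tradeoff (assumed parameters): the "true" price
    # follows a Gaussian random walk; a snapshot benchmark adds sampling/
    # manipulation noise; a windowed average is noisy because the true price
    # moves during the window. Compare mean squared errors of each benchmark
    # against the end-of-window price.
    import random

    def trial(n_steps=30, vol=0.05, snap_noise=0.20):
        path = [0.0]
        for _ in range(n_steps):
            path.append(path[-1] + random.gauss(0, vol))
        snapshot_err = random.gauss(0, snap_noise)      # point-in-time benchmark error
        window_err = sum(path) / len(path) - path[-1]   # averaging error
        return snapshot_err ** 2, window_err ** 2

    random.seed(1)
    trials = [trial() for _ in range(20000)]
    print("snapshot MSE:", sum(t[0] for t in trials) / len(trials))
    print("window-average MSE:", sum(t[1] for t in trials) / len(trials))

With these particular parameters the windowed average has the lower error; crank up the volatility of the “true” price or shrink the snapshot noise and the ranking flips, which is exactly the tradeoff described above.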

Other suggestions include encouraging diversity in benchmarks. The other FSB-the Financial Stability Board-recommends this. Darrell Duffie and Jeremy Stein lay out the case here (which is a lot easier read than the 750+ pages of the longer FSB report).

Color me skeptical. Duffie and Stein recognize that the market has a tendency to concentrate on a single benchmark. It is easier to get into and out of positions in a contract which is similar to what everyone else is trading. This leads to what Duffie and Stein call “the agglomeration effect,” which I would refer to as a “tipping” effect: the market tends to tip to a single benchmark. This is what happened in Libor. Diversity is therefore unlikely in equilibrium, and the benchmark that survives is likely to be susceptible to either manipulation, or the bricks without straw problem.

Of course not all potential benchmarks are equally susceptible. So it would be good if market participants coordinated on the best of the possible alternatives. As Duffie and Stein note, there is no guarantee that this will be the case. This brings to mind the as yet unresolved debate over standard setting generally, in which some argue that the market’s choice of VHS over the allegedly superior Betamax technology, or the dominance of QWERTY over the purportedly better Dvorak keyboard (or Word vs. WordPerfect), demonstrates that the selection of a standard by a market process routinely results in a suboptimal outcome, but where others (notably Stan Liebowitz and Stephen Margolis) argue that these stories of market failure are fairy tales that do not comport with the actual histories. So the relevance of the “bad standard (benchmark) market failure” is very much an open question.

Darrell and Jeremy suggest that a wise government can make things better:

This is where national policy makers come in. By speaking publicly about the advantages of reform — or, if necessary, by using their power to regulate — they can guide markets in the desired direction. In financial benchmarks as in tap water, markets might not reach the best solution on their own.

Putting aside whether government regulators are indeed so wise in their judgments, there is  the issue of how “better” is measured. Put differently: governments may desire a different direction than market participants.

Take one of the suggestions that Duffie and Stein raise as an alternative to Libor: short term Treasuries. It is almost certainly true that there is more straw in the Treasury markets than in any other rates market. Thus, a Treasury bill-based benchmark is likely to be less susceptible to manipulation than any other rates benchmark. (Though not immune altogether, as the Pimco episode in June ’05 10-year T-notes, the squeezes in the long bond in the mid-to-late-80s, the Salomon two-year squeeze in ’92, and the chronic specialness in some Treasury issues prove.)

But that’s not of much help if the non-manipulated benchmark is not representative of the rates that market participants want to hedge. Indeed, when swap markets started in the mid-80s, many contracts used Treasury rates to set the floating leg. But the basis between Treasury rates, and the rates at which banks borrowed and lent, was fairly variable. So a Treasury-based swap contract had more basis risk than Libor-based contracts. This is precisely why the market moved to Libor, and when the tipping process was done, Libor was the dominant benchmark not just for derivatives but floating rate loans, mortgages, etc.

Thus, there may be a trade-off between basis risk and susceptibility to manipulation (or to noise arising from sampling error due to a small number of transactions or averaging over a wide time window). Manipulation can lead to basis risk, but it can be smaller than the basis risk arising from a quality mismatch (e.g., a credit risk mismatch between default risk-free Treasury rates and a defaultable rate that private borrowers pay). I would wager that regulators would prefer a standard that is less subject to manipulation, even if it has more basis risk, because they don’t internalize the costs associated with basis risk. Market participants may have a very different opinion. Therefore, the “desired direction” may depend very much on whom you ask.

Putting all this together, I conclude we live in a fallen world. There is no benchmark Eden. Benchmark problems are likely to be chronic for the foreseeable future. And beyond. Some improvements are definitely possible, but benchmarks will always be subject to abuse. Their very source of utility-that they are a visible price that can be used to determine payoffs on vast sums of other contracts-always provides a temptation to manipulate.

Moving to transactions-based mechanisms eliminates outright lying as a manipulation strategy, but it does not eliminate the potential for market power abuses. The benchmarks that would be least vulnerable to market power abuses are not necessarily the ones that best reflect the exposures that market participants face.

Thus, we cannot depend on benchmark design alone to address manipulation problems. The means, motive, and opportunity to manipulate even transactions-based benchmarks will endure. This means that reducing the frequency of manipulation requires some sort of deterrence mechanism, either through government action (as in the Libor, Optiver, Moore, and Amaranth cases) or private litigation (examples of which include all the aforementioned cases, plus some more, like Brent).  It will not be possible to “solve” the benchmark problems by designing better mechanisms, then riding off into the sunset like the Lone Ranger. Our work here will never be done, Kimo Sabe.*

* Stream of consciousness/biographical detail of the day. The phrase “Kimo Sabe” was immortalized by Jay Silverheels-Tonto in the original Lone Ranger TV series. My GGGGF, Abel Sherman, was slain and scalped by an Indian warrior named Silverheels during the Indian War in Ohio in 1794. Silverheels made the mistake of bragging about his feat to a group of lumbermen, who just happened to include Abel’s son. Silverheels was found dead on a trail in the woods the next day, shot through the heart. Abel (a Revolutionary War vet) was reputedly the last white man slain by Indians in Washington County, OH. His tombstone is on display in the Campus Martius museum in Marietta. The carving on the headstone is very un-PC. It reads:

Here lyes the body of Abel Sherman who fell by the hand of the Savage on the 15th of August 1794, and in the 50th year of his age.

Here’s a picture of it:

[photo of the headstone]

The stream by which Abel was killed is still known as Dead Run, or Dead Man’s Run.


July 21, 2014

Doing Due Diligence in the Dark

Filed under: Exchanges,HFT,Regulation — The Professor @ 8:39 pm

Scott Patterson, WSJ reporter and the author of Dark Pools, has a piece in today’s Journal about the Barclays LX story. He finds, lo and behold, that several users of the pool had determined that they were getting poor executions:

Trading firms and employees raised concerns about high-speed traders at Barclays PLC’s dark pool months before the New York attorney general alleged in June that the firm lied to clients about the extent of predatory trading activity on the electronic trading venue, according to people familiar with the firms.

Some big trading outfits noticed their orders weren’t getting the best treatment on the dark pool, said people familiar with the trading. The firms began to grow concerned that the poor results resulted from high-frequency trading, the people said.

In response, at least two firms—RBC Capital Markets and T. Rowe Price Group Inc —boosted the minimum number of shares they would trade on the dark pool, letting them dodge high-speed traders, who often trade in small chunks of 100 or 200 shares, the people said.

This relates directly to a point that I made in my post on the Barclays story. Trading is an experience good. Dark pool customers can evaluate the quality of their executions. If a pool is not screening out opportunistic traders, execution costs will be high relative to other venues that do a better job of screening, and users who monitor their execution costs will detect this. Regardless of what a dark pool operator says about what it is doing, the proof of the pudding is in the trading, as it were.

The Patterson article shows that at least some buy side firms do the necessary analysis, and can detect a pool that does not exclude toxic flows.

This long FT piece relies extensively on quotes from Hirander Misra, one of the founders of Chi-X, to argue that many fund managers have been ignorant of the quality of executions they get on dark pools. The article talked to two anonymous fund managers who say they don’t know how dark pools work.

The stated implication here is that regulation is needed to protect the buy side from unscrupulous pool operators.

A couple of comments. First, not knowing how a pool works doesn’t really matter. Measures of execution quality are what matter, and these can be measured. I don’t know all of the technical details of the operation of my car or the computer I am using, but I can evaluate their performances, and that’s what matters.

Second, this is really a cost-benefit issue. Monitoring of performance is costly. But so is regulation and litigation. Given that market participants have the biggest stake in measuring pool performance properly, and can develop more sophisticated metrics, there are strong arguments in favor of relying on monitoring.  Regulators can, perhaps, see whether a dark pool does what it advertises it will do, but this is often irrelevant because it does not necessarily correspond closely to pool execution costs, which is what really matters.
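To make “measures of execution quality” concrete, here is a stripped-down sketch of the sort of analysis a buy-side desk can run-fills and venue names are invented: grade each venue by realized slippage against the arrival midpoint, with no knowledge of pool internals required.

    # Hypothetical fills; venue names invented. Grade each venue by average
    # slippage (in basis points) against the midpoint at order arrival.
    from collections import defaultdict

    fills = [  # (venue, side, fill_price, arrival_mid)
        ("PoolX", "buy", 10.02, 10.00),
        ("PoolX", "sell", 9.97, 10.00),
        ("PoolY", "buy", 10.01, 10.00),
        ("PoolY", "sell", 9.995, 10.00),
    ]

    def slippage_bps(side, price, mid):
        signed = price - mid if side == "buy" else mid - price
        return 1e4 * signed / mid  # positive = worse than the arrival mid

    costs = defaultdict(list)
    for venue, side, price, mid in fills:
        costs[venue].append(slippage_bps(side, price, mid))
    for venue, c in sorted(costs.items()):
        print(venue, f"avg cost: {sum(c) / len(c):.1f} bps")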

Interestingly, one of the things that got a major dark pool (Liquidnet) in trouble was that it shared information about the identities of existing clients with prospective clients. This presents interesting issues. Sharing such information could economize on monitoring costs. If a big firm (like a T. Rowe) trades in a pool, this can signal to other potential users that the pool does a good job of screening out the opportunistic, allowing them to free ride off the monitoring efforts of the big firm.

Another illustration of how things are never simple and straightforward when analyzing market structure.

One last point. Some of the commentary I’ve read recently uses the prevalence of HFT volume in a dark pool as a proxy for how much opportunistic trading goes on in the pool. This is a very dangerous shortcut, because as I (and others) have written repeatedly, there are all different kinds of HFT. Some adds to liquidity, some consumes it, and some may be outright toxic/predatory. Market-making HFT can enhance dark pool liquidity, which is probably why dark pools encourage HFT participation. Indeed, it is hard to understand how a pool could benefit from encouraging the participation of predatory HFT, especially if it lets such firms trade for free. This drives away the paying customers, particularly when the paying customers evaluate the quality of their executions.

Evaluating execution quality and cost could be considered a form of institutional trader due diligence. Firms that do so can protect themselves-and their investor-clients-from opportunistic counterparties. Even though the executions are done in the dark, it is possible to shine a light on the results. The WSJ piece shows that many firms do just that. The question of whether additional regulation is needed boils down to the question of whether the cost and efficacy of these self-help efforts is superior to that of regulation.


July 15, 2014

Oil Futures Trading In Troubled Waters

Filed under: Commodities,Derivatives,Economics,Energy,Exchanges,HFT,Regulation — The Professor @ 7:16 pm

A recent working paper by Pradeep Yadav, Michel Robe and Vikas Raman tackles a very interesting issue: do electronic market makers (EMMs, typically HFT firms) supply liquidity differently than locals did on the floor during its heyday? The paper has attracted a good deal of attention, including this article in Bloomberg.

The most important finding is that EMMs in crude oil futures do tend to reduce liquidity supply during high volatility/stressed periods, whereas crude futures floor locals did not. They explain this by invoking an argument I made 20 years ago in my research comparing the liquidity of the floor-based LIFFE to the electronic DTB: the anonymity of electronic markets makes market makers there more vulnerable to adverse selection. From this, the authors conclude that an obligation to supply liquidity may be desirable.

These empirical conclusions seem supported by the data, although as I describe below the scant description of the methodology and some reservations based on my knowledge of the data make me somewhat circumspect in my evaluation.

But my biggest problem with the paper is that it seems to miss the forest for the trees. The really interesting question is whether electronic markets are more liquid than floor markets, and whether the relative liquidity in electronic and floor markets varies between stressed and non-stressed markets. The paper provides some intriguing results that speak to that question, but then the authors ignore it altogether.

Specifically, Table 1 has data on spreads from the electronic NYMEX crude oil market in 2011, and from the floor NYMEX crude oil market in 2006. The mean and median spreads in the electronic market: .01 percent. Given a roughly $100 price, this corresponds to one tick ($.01) in the crude oil market. The mean and median spreads in the floor market: .35 percent and .25 percent, respectively.

Think about that for a minute. Conservatively, spreads were 25 times higher in the floor market. Even adjusting for the fact that prices in 2011 were almost double those in 2006, we’re talking a 12-fold difference in absolute (rather than percentage) spreads. That is just huge.
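A quick back-of-the-envelope check of that arithmetic, with rough (assumed) price levels:

    # Rough check of the spread comparison; price levels are approximations.
    elec_pct, floor_pct = 0.0001, 0.0025  # 0.01% (2011, electronic) vs 0.25% median (2006, floor)
    p_2011, p_2006 = 100.0, 50.0          # 2011 prices roughly double 2006 prices

    print(floor_pct / elec_pct)                        # 25x in percentage terms
    print((floor_pct * p_2006) / (elec_pct * p_2011))  # ~12.5x in absolute dollar terms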

So even if EMMs are more likely to run away during stressed market conditions, the electronic market wins hands down in the liquidity race on average. Hell, it’s not even a race. Indeed, the difference is so large I have a hard time believing it, which raises questions about the data and methodologies.

This raises another issue with the paper. The paper compares the liquidity supply mechanisms in electronic and floor markets. Specifically, it examines the behavior of market makers in the two different types of markets. What we are really interested in is the outcome of these mechanisms. Therefore, given the rich data set, the authors should compare measures of liquidity in stressed and non-stressed periods, and make comparisons between the electronic and floor markets. What’s more, they should examine a variety of different liquidity measures. There are multiple measures of spreads, some of which specifically measure adverse selection costs. It would be very illuminating to see those measures across trading mechanisms and market environments. Moreover, depth and price impact are also relevant. Let’s see those comparisons too.

It is quite possible that the ratio of liquidity measures in good and bad times is worse in electronic trading than on the floor, but in any given environment, the electronic market is more liquid. That’s what we really want to know about, but the paper is utterly silent on this. I find that puzzling and rather aggravating, actually.

Insofar as the policy recommendation is concerned, as I’ve been writing since at least 2010, the fact that market makers withdraw supply during periods of market stress does not necessarily imply that imposing obligations to make markets even during stressed periods is efficiency enhancing. Such obligations force market makers to incur losses when the constraints bind. Since entry into market making is relatively free, and the market is likely to be competitive (the paper states that there are 52 active EMMs in the sample), raising costs in some states of the world, and reducing returns to market making in these states, will lead to the exit of market making capacity. This will reduce liquidity during unstressed periods, and could even lead to less liquidity supply in stressed periods: fewer firms offering more liquidity than they would otherwise choose due to an obligation may supply less liquidity in aggregate than a larger number of firms that can each reduce liquidity supply during stressed periods (because they are not obligated to supply a minimum amount of liquidity).

In other words, there is no free lunch. Even assuming that EMMs are more likely to reduce supply during stressed periods than locals, it does not follow that a market making obligation is desirable in electronic environments. The putatively higher cost of supplying liquidity in an electronic environment is a feature of that environment. Requiring EMMs to bear that cost means that they have to recoup it at other times. Higher cost is higher cost, and the piper must be paid. The finding of the paper may be necessary to justify a market maker obligation, but it is clearly not sufficient.

There are some other issues that the authors really need to address. The descriptions of the methodologies in the paper are far too scanty. I don’t believe that I could replicate their analysis based on the description in the paper. As an example, they say “Bid-Ask Spreads are calculated as in the prior literature.” Well, there are many papers, and many ways of calculating spreads. Hell, there are multiple measures of spreads. A more detailed statement of the actual calculation is required in order to know exactly what was done, and to replicate it or to explore alternatives.

Comparisons between electronic and open outcry markets are challenging because the nature of the data are very different. We can observe the order book at every instant of time in an electronic market. We can also sequence everything-quotes, cancellations and trades-with exactitude. (In futures markets, anyways. Due to the lack of clock synchronization across trading venues, this is a problem in a fragmented market like US equities.) These factors mean that it is possible to see whether EMMs take liquidity or supply it: since we can observe the quote, we know that if an EMM sells (buys) at the offer (bid) it is supplying liquidity, but if it buys (sells) at the offer (bid) it is consuming liquidity.
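In code, that determination is mechanical once the prevailing quote is observable. A sketch (function and field names are mine):

    # Classify a market maker's trade as liquidity-supplying or -consuming,
    # given the prevailing best bid and offer at execution time.
    def classify(side, price, bid, ask):
        if side == "sell" and price >= ask:
            return "supplying"    # resting offer was lifted by an aggressor
        if side == "buy" and price <= bid:
            return "supplying"    # resting bid was hit by an aggressor
        if side == "buy" and price >= ask:
            return "consuming"    # lifted the offer
        if side == "sell" and price <= bid:
            return "consuming"    # hit the bid
        return "indeterminate"    # executed inside the spread

    print(classify("sell", 100.01, bid=100.00, ask=100.01))  # supplying
    print(classify("buy", 100.01, bid=100.00, ask=100.01))   # consuming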

Things are not nearly so neat in floor trading data. I have worked quite a bit with exchange Street Books. They convey much less information than the order book and the record of executed trades in electronic markets like Globex. Street Books do not report the prevailing bids and offers, so I don’t see how it is possible to determine definitively whether a local is supplying or consuming liquidity in a particular trade. The mere fact that a local (CTI1) is trading with a customer (CTI4) does not mean the local is supplying liquidity: he could be hitting the bid/lifting the offer of a customer limit order, but since we can’t see order type, we don’t know. Moreover, even to the extent that there are some bids and offers in the time and sales record, they tend to be incomplete (especially during fast markets) and time sequencing is highly problematic. I just don’t see how it is possible to do an apples-to-apples comparison of liquidity supply (and particularly the passivity/aggressiveness of market makers) between floor and electronic markets just due to the differences in data. Nonetheless, the paper purports to do that. Another reason to see more detailed descriptions of methodology and data.

One red flag indicates that the floor data may have some problems: the reported maximum bid-ask spread in the floor sample is 26.48 percent!!! 26.48 percent? Really? The 75th percentile spread is .47 percent. Given a $60 price, that’s almost 30 ticks. Color me skeptical. Another reason why a much more detailed description of methodologies is essential.

Another technical issue is endogeneity. Liquidity affects volatility, but the paper uses volatility as one of its measures of stressed markets in its study of how stress affects liquidity. This creates an endogeneity (circularity, if you will) problem. It would be preferable to use some instrument for stressed market conditions. Instruments are always hard to come up with, and I don’t have one off the top of my head, but Yadav et al should give some serious thought to identifying/creating such an instrument.

Moreover, the main claim of the paper is that EMMs’ liquidity supply is more sensitive to the toxicity of order flow than locals’ liquidity supply. The authors use order imbalance (CTI4 buys minus CTI4 sells, or more precisely the absolute value thereof), which is one measure of toxicity, but there are others. I would prefer a measure of customer (CTI4) alpha. Toxic (i.e., informed) order flow predicts future price movements, and hence when customer orders realize high alphas, it is likely that customers are more informed than usual. It would therefore be interesting to see the sensitivities of liquidity supply in the different trading environments to order flow toxicity as measured by CTI4 alphas.
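Here is a minimal sketch of the two toxicity measures side by side, on made-up CTI4 trade data and an arbitrary post-trade horizon:

    # Made-up CTI4 (customer) trades: (signed quantity, trade price), + = buy.
    cti4 = [(+10, 100.00), (-5, 100.02), (+20, 100.01), (-8, 100.03)]
    price_later = 100.10  # price at an arbitrary post-trade horizon

    imbalance = abs(sum(q for q, _ in cti4))  # paper's measure: |buys - sells|

    # Realized customer alpha: size-weighted signed return from trade to horizon.
    gross_pnl = sum(q * (price_later - px) for q, px in cti4)
    alpha = gross_pnl / sum(abs(q) for q, _ in cti4)

    print(imbalance, round(alpha, 4))  # high alpha suggests informed (toxic) flow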

I will note yet again that market maker actions to cut liquidity supply when adverse selection problems are severe is not necessarily a bad thing. Informed trading can be a form of rent seeking, and if EMMs are better able to detect informed trading and withdraw liquidity when informed trading is rampant, this form of rent seeking may be mitigated. Thus, greater sensitivity to toxicity could be a feature, not a bug.

All that said, I consider this paper a laudable effort that asks serious questions, and attempts to answer them in a rigorous way. The results are interesting and plausible, but the sketchy descriptions of the methodologies give me reservations about these results. But by far the biggest issue is that of the forest and trees. What is really interesting is whether electronic markets are more or less liquid in different market environments than floor markets. Even if liquidity supply is flightier in electronic markets, they can still outperform floor-based markets in both unstressed and stressed environments. The huge disparity in spreads reported in the paper suggests a vast difference in liquidity on average, which suggests a vast difference in liquidity in all different market environments, stressed and unstressed. What we really care about is liquidity outcomes, as measured by spreads, depth, price impact, etc. This is the really interesting issue, but one that the paper does not explore.

But that’s the beauty of academic research, right? Milking the same data for multiple papers. So I suggest that Pradeep, Michel and Vikas keep sitting on that milking stool and keep squeezing that . . . data ;-) Or provide the data to the rest of us out there and let us give it a tug.


July 11, 2014

25 Years Ago Today Ferruzzi Created the Streetwise Professor

Filed under: Clearing,Commodities,Derivatives,Economics,Exchanges,HFT,History,Regulation — The Professor @ 9:03 am

Today is the 25th anniversary of the most important event in my professional life. On 11 July, 1989, the Chicago Board of Trade issued an Emergency Order requiring all firms with positions in July 1989 soybean futures in excess of the speculative limit to reduce those positions to the limit over five business days in a pro rata fashion (i.e., 20 percent per day, or faster). Only one firm was impacted by the order, Italian conglomerate Ferruzzi, SA.

Ferruzzi was in the midst of an attempt to corner the market, as it had done in May, 1989. The EO resulted in a sharp drop in soybean futures prices and a jump in the basis: for instance, by the time the contract went off the board on 20 July, the basis at NOLA had gone from zero to about 50 cents, by far the largest jump in that relationship in the historical record.

The EO set off a flurry of legal action. Ferruzzi tried to obtain an injunction against the CBT. Subsequently, farmers (some of whom had dumped truckloads of beans at the door of the CBT) sued the exchange. Moreover, a class action against Ferruzzi was also filed. These cases took years to wend their ways through the legal system. The farmer litigation (in the form of Sanner v. CBT) wasn’t decided (in favor of the CBT) until the fall of 2002. The case against Ferruzzi lasted somewhat less time, but still didn’t settle until 2006.

I was involved as an expert in both cases. Why?

Well, pretty much everything in my professional career post-1990 is connected to the Ferruzzi corner and CBT EO, in a knee-bone-connected-to-the-thigh-bone kind of way.

The CBT took a lot of heat for the EO. My senior colleague, the late Roger Kormendi, convinced the exchange to fund an independent analysis of its grain and oilseed markets to attempt to identify changes that could prevent a recurrence of the episode. Roger came into my office at Michigan and told me about the funding, and, knowing that I had worked in the futures markets before, asked me to participate in the study. I said that I had only worked in financial futures, but that I could learn about commodities, so I signed on: it sounded interesting, my current research was at something of a standstill, and I am always up for learning something new. I ended up doing about 90 percent of the work and getting 20 percent of the money :-P but it was well worth it, because of the dividends it paid in the subsequent quarter century. (Putting it that way makes me feel old. But this all happened when I was a small child. Really!)

The report I (mainly) wrote for the CBT turned into a book, Grain Futures Contracts: An Economic Appraisal. (Available on Amazon! Cheap! Buy two! I see exactly $0.00 of your generous purchases.) Moreover, I saw the connection between manipulation and industrial organization economics (which was my specialization in grad school): market power is a key concept in both. So I wrote several papers on market power manipulation, which turned into a book. (Also available on Amazon! And on Kindle: for some strange reason, it was one of the first books published on Kindle.)

The issue of manipulation led me to try to understand how it could best be prevented or deterred. This led me to research self-regulation, because self-regulation was often advanced as the best way to tackle manipulation. This research (and the anthropological field work I did working on the CBT study) made me aware that exchange governance played a crucial role, and that exchange  governance was intimately related to the fact that exchanges are non-profit firms. So of course I had to understand why exchanges were non-profits (which seemed weird given that those who trade on them are about as profit-driven as you can get), and why they were governed in the byzantine, committee-dominated way they were. Moreover, many advocates of self-regulation argued that competition forced exchanges to adopt efficient rules. Observing that exchanges in fact tended to be monopolies, I decided I needed to understand the economics of competition between execution venues in exchange markets. This caused me to write my papers on market macrostructure, which is still an active area of investigation: I am writing a book on that subject. This in turn produced many of the conclusions that I have drawn about HFT, RegNMS, etc.

Moreover, given that I concluded that self-regulation was in fact a poor way to address manipulation (because I found exchanges had poor incentives to do so), I examined whether government regulation or private legal action could do better. This resulted in my work on the efficiency of ex post deterrence of manipulation. My conclusions about the efficiency of ex post deterrence rested on my findings that manipulated prices could be distinguished reliably from competitive prices. This required me to understand the determinants of competitive prices, which led to my research on the dynamics of storable commodity prices that culminated in my 2011 book. (Now available in paperback on Amazon! Kindle too.)

In other words, pretty much everything in my CV traces back to Ferruzzi. Even the clearing-related research, which also has roots in the 1987 Crash, is due to Ferruzzi: I wouldn’t have been researching any derivatives-related topics otherwise.

My consulting work, and in particular my expert witness work, stems from Ferruzzi. The lead counsel in the class action against Ferruzzi came across Grain Futures Contracts in the CBT bookstore (yes, they had such a thing back in the day), and thought that I could help him as an expert. After some hesitation (attorneys being very risk averse, and hence reluctant to hire someone without testimonial experience) he hired me. The testimony went well, and that was the launching pad for my expert work.

I also did work helping to redesign the corn and soybean contracts at the CBT, and the canola contract in Winnipeg: these redesigned contracts (based on shipping receipts) are the ones traded today. Again, this work traces its lineage to Ferruzzi.

Hell, this was even my introduction to the conspiratorial craziness that often swirls around commodity markets. Check out this wild piece, which links Ferruzzi (“the Pope’s soybean company”) to Marc Rich, the Bushes, Hillary Clinton, Vince Foster, and several federal judges. You cannot make up this stuff. Well, you can, I guess, as a quick read will soon convince you.

I have other, even stranger connections to Hillary and Vince Foster, which in a more indirect way also trace back to Ferruzzi. But that’s a story for another day.

There’s even a Russian connection. One of Ferruzzi’s BS cover stories for amassing a huge position was that it needed the beans to supply big export sales to the USSR. These sales were in fact fictitious.

Ferruzzi was a rather outlandish company that eventually collapsed in 1994. Like many Italian companies, it was leveraged out the wazoo. Moreover, it had become enmeshed in the Italian corruption/mob investigations of the early 1990s, and its chairman, Raul Gardini, committed suicide in the midst of the scandal.

The traders who carried out the corners were located in stylish Paris, but they were real commodity cowboys of the old school. Learning about that was educational too.

To put things in a nutshell. Some crazy Italians, and English and American traders who worked for them, get the credit-or the blame-for creating the Streetwise Professor. Without them, God only knows what the hell I would have done for the last 25 years. But because of them, I raced down the rabbit hole of commodity markets. And man, have I seen some strange and interesting things on that trip. Hopefully I will see some more, and if I do, I’ll share them with you right here.


July 8, 2014

The Securities Market Structure Regulation Book Club

Filed under: Derivatives,Economics,Exchanges,Politics,Regulation — The Professor @ 4:30 pm

There was another hearing on HFT on Capitol Hill today, in the Senate. The best way to summarize it was that it reminded me of an evening at the local bookstore, with authors reading selections from their books.

Two examples suffice. Citadel’s Ken Griffin (whom I called out for talking his book on Frankendodd years ago) heavily criticized dark pools, and called for much heavier regulation of them. But he sang the praises of purchased order flow, and warned against any regulation of it.

So, go out on a limb and bet that (a) Citadel does not operate a dark pool, and (b) Citadel is one of the biggest purchasers of order flow, and you’ll be a winner!

The intellectually respectable case against dark pools and payment for order flow is the same. Both “cream skim” uninformed orders from the exchanges, leaving the exchange order flow more informed (i.e., more toxic), thereby reducing exchange liquidity by increasing adverse selection costs. I’m not saying that I agree with this case, but I do recognize that it is at least grounded in economics, and that an intellectually consistent critic of dark pools would also criticize purchased order flow.
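The arithmetic behind that case is simple. A sketch in the spirit of Glosten-Milgrom, with invented parameters: the market maker’s breakeven half-spread is proportional to the informed share of order flow, so skimming off uninformed orders widens exchange spreads.

    # Zero-profit half-spread s for a market maker who earns s per uninformed
    # trade and loses (edge - s) per informed trade:
    #   s * (1 - mu) = (edge - s) * mu   =>   s = mu * edge
    def breakeven_half_spread(informed_share, informed_edge=0.10):
        return informed_share * informed_edge

    print(breakeven_half_spread(0.20))  # plenty of uninformed flow: 0.02
    print(breakeven_half_spread(0.40))  # cream skimmed, informed share doubles: 0.04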

But some people have books to sell.

The other example is Jeffrey Sprecher of ICE, which owns and operates the NYSE. Sprecher lamented the fragmentation of the equity markets, and praised the lack of fragmentation of futures markets. But he went further. He said that futures markets were competitive and not fragmented.

Tell me another one.

Yes, there is limited head-to-head competition in some futures contracts, such as WTI and Brent. But these are the exceptions, not the rule. Futures exchanges do not compete head to head in any other major contract. Execution in the equity market is far more competitive than in the futures market. Multiple equities exchanges compete vigorously, and the socialization of order flow due to RegNMS makes that competition possible. This is why the equities exchange business is low margin, and not very profitable. Futures exchanges own their order flow, and since liquidity attracts liquidity, one exchange tends to dominate trading in a particular instrument. So yes, futures markets are not fragmented, but no, they are not competitive. These things go together, regardless of what Sprecher says.  He wants to go back to the day when the NYSE was the dominant exchange and its members earned huge rents. That requires undoing a lot of what is in RegNMS.

Those were some of the gems from the witness side of the table. From the questioner side, we were treated to another display of Elizabeth Warren’s arrogant ignorance and idiocy. The scary thought is that the left views her as the next Obama who will deny Hillary and vault to the presidency. God save us.

Overall the hearing demonstrated what I’ve been saying for years. Market structure, and the regulations that drive market structure, have huge distributive effects. Everybody says that they are in favor of efficient markets, but I’m sure you’ll be shocked to learn that their definition of what is efficient happens to correspond with what benefits their firms. The nature of securities/derivatives trading creates rents. The battle over market structure is a classic rent seeking struggle. In rent seeking struggles, everybody reads out of their books. Everybody.

