Streetwise Professor

July 26, 2017

Europe Has Always Been at War With the Diesel Engine!

Filed under: China,Climate Change,Economics,Energy,Politics,Regulation — The Professor @ 7:26 pm

Europe is at war with the diesel engine. Paris, Madrid, and Athens will ban diesels starting in 2025. Even Stuttgart (home of Daimler and Porsche) and Munich (home of BMW) are following suit. France and Britain have pledged to eliminate internal combustion engine cars by 2040. The cars–diesel in particular–are too polluting, you see. And so the Euros are intent on replacing them with electric vehicles.

Europe has always been at war with diesel!

Um, not really. Like Oceania and Eastasia, Europe and diesel were once fast allies. In the early days of its fight against climate change, Europe figured that since diesel engines burn fuel more efficiently than gasoline ones, it could reduce carbon emissions by forcing or inducing people to switch to diesel. It gave tax breaks and incentives that led to a third of the European car fleet being diesel.

Then reality crept in. Diesels create more particulates, which cause nasty pollution, particularly in urban areas. The Euros thought they could address this with strict emissions standards. So strict that auto companies couldn’t meet them economically. So they lied and cheated. Brace yourself: even morally superior German companies lied and cheated! So Europe bribed people to pollute their cities. Well played!

Further, even by its own objectives the policy was a failure. Even though diesel has lower CO2 emissions, it has higher soot emissions–and soot contributes to warming. Whoops! Further, the CO2 advantage of diesel has been narrowing over the years, due to improvements in gasoline engine technology. So at best the impact of diesel on warming has been a push, and maybe a net bad.

But never fear! The same geniuses who forced diesel down Europe’s throat have a solution to the evils of diesel: they will force electric cars down people’s throats.

What could possibly go wrong?

Well, off the top of my head.

First, in the near term, a good portion of electric cars will be powered by electricity generated by coal. This is especially true if China goes Europe’s way.

Second, the green wet dream is for renewables to replace coal. Don’t even get me started. Renewables are diffuse and intermittent–they don’t scale well. They have caused problems in the power grid wherever (e.g., Europe, California) they have accounted for over 10 percent or so of generation. They consume vast amounts of land: air pollution (if you believe CO2 is a pollutant) is replaced by sight pollution and the destruction of natural habitat and foodstuff-producing land. Renewables are a static technology (e.g., the amount of wind generation is limited by physical laws), whereas internal combustion technology has been improving continuously since its introduction in the 19th century. Really economic renewables generation will require a revolution in large-scale storage technology–a revolution that people have been waiting for for decades, but which hasn’t appeared.

Third, disposal of batteries is an environmental nightmare.

Fourth, mining the materials to produce batteries is an environmental nightmare–and is likely to benefit many kleptocrats around the world. Are greens really all that excited about massive mines for rare earths (notoriously polluting) and copper springing up to provide the materials for their dream machines? Will they pass laws against, say, blood cobalt? (And when they do, will they acknowledge–even to themselves–their culpability? Put me down as a “no.”)

Fifth, depending on the fuel mix, carbon emissions over an EV’s lifetime are not that much lower than those of an internal combustion car using existing technology–and that technology (as noted above) will improve.

Like I say, top of my head. But there’s an even bigger reason:

Sixth, unintended consequences, or more prosaically, shit happens. Just like the diesel box of chocolates was full of things the Euro better-thans didn’t expect, and didn’t like upon consuming, the EV craze will also present unintended and unexpected effects, and in this type of circumstance, these effects are usually negative.

But they know better! How do we know? Because they keep telling us so! And because they keep telling us what to do!  Despite the fact that their actual record of performance is a litany of failures. (I cleaned that up. My initial draft had a word starting with “cluster.”)

Given such a track record, people with any decency would exercise some restraint and have some humility before embarking on another attempt to dictate technology. But no, that’s not the elite’s way. That’s not the bureaucrats’ way. They have learned nothing and forgotten nothing and will continue to prove that until someone stops them. Sadly, short of revolution it’s hard to see how that can happen.

Almost all attempts by states to dictate technology are utter fiascos. The knowledge problem is bigger here than anywhere, and the feedbacks are devilishly complex and hard to predict. Look at something seemingly as prosaic and well-understood as the production of oil and gas. Ten or twelve years ago, only a few visionaries glimpsed the potential of fracking, and I doubt that even they would admit that they foresaw the transformation that has occurred. Trying to dictate a technology that is dependent on myriad other technologies, and which may be rendered obsolete by technologies not yet developed, is something that only fools do.

But alas, there are many fools in high places.

The Orwellian switch from Europe and Diesel Have Always Been Allies to Europe Has Always Been at War With Diesel is particularly revealing: rather than recognize that the experience of Europe’s pro-diesel policy makes a mockery of policymakers’ pretenses of foresight, the failure of that policy is spurring them to embark on an even more speculative binge of coercion!

If you think CO2 is an issue, tax CO2 and let the market figure out the optimal way of reducing emissions: there are many margins on which to adjust, including technical innovation, fuel substitution, changes in lifestyle. Yet these madmen (and women) and fools insist on dictating technology right after their past dictates have proved failures. Worse than that: they are issuing new ukases because their old ones were crashing failures.

We are in the best of hands.


July 6, 2017

SWP Acid Flashback, CCP Edition

Filed under: Clearing,Derivatives,Economics,Financial crisis,Regulation — The Professor @ 6:09 pm

Sometimes reading current news about clearing specifically and post-crisis regulation generally triggers acid flashbacks to old blog posts. Like this one (from 2010!):

[Gensler’s] latest gurgling appears on the oped page of today’s WSJ.  It starts with a non-sequitur, and careens downhill from there.  Gensler tells a story about his role in the LTCM situation, and then claims that to prevent a recurrence, or a repeat of AIG, it is necessary to reduce the “cancerous interconnections” (Jeremiah Recycled Bad Metaphor Alert!) in the financial system by, you guessed it, mandatory clearing.

Look.  This is very basic.  Do I have to repeat it?  CLEARING DOES NOT ELIMINATE INTERCONNECTIONS AMONG FINANCIAL INSTITUTIONS.  At most, it reconfigures the topology of the network of interconnections.  Anyone who argues otherwise is not competent to weigh in on the subject, let alone to have regulatory responsibility over a vastly expanded clearing system.  At most you can argue that the interconnections in a cleared system are better in some ways than the interconnections in the current OTC structure.  But Gensler doesn’t do that.   He just makes unsupported assertion after unsupported assertion.

So what triggered this flashback? This recent FSB (no! not Putin!)/BIS/IOSCO report on . . . wait for it . . . interdependencies in clearing. As summarized by Reuters:

The Financial Stability Board, the Committee on Payments and Market Infrastructures, the International Organization of Securities Commissioners and the Basel Committee on Banking Supervision, also raised new concerns around the interdependency of CCPs, which have become crucial financial infrastructures as a result of post-crisis reforms that forced much of the US$483trn over-the-counter derivatives market into central clearing.

In a study of 26 CCPs across 15 jurisdictions, the committees found that many clearinghouses maintain relationships with the same financial entities.

Concentration is high with 88% of financial resources, including initial margin and default funds, sitting in just 10 CCPs. Of the 307 clearing members included in the analysis, the largest 20 accounted for 75% of financial resources provided to CCPs.

More than 80% of the CCPs surveyed were exposed to at least 10 global systemically important financial institutions, the study showed.

In an analysis of the contagion effect of clearing member defaults, the study found that more than half of surveyed CCPs would suffer a default of at least two clearing members as a result of two clearing member defaults at another CCP.

“This suggests a high degree of interconnectedness among the central clearing system’s largest and most significant clearing members,” the committees said in their analysis.

To reiterate: as I said in 2010 (and the blog post echoed remarks that I made at ISDA’s General Meeting in San Francisco shortly before I wrote the post), clearing just reconfigures the topology of the network. It does not eliminate “cancerous interconnections”. It merely re-jiggers the connections.
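
To see the topology point in the starkest terms, here is a minimal sketch (my own illustration, not anything from the report; the dealer count is invented):

```python
# A minimal sketch of "reconfiguring the topology" (dealer count invented).
# N dealers trading bilaterally form a complete graph; interposing a CCP
# replaces it with a star. The links change shape, they do not vanish.

N = 10  # hypothetical number of dealers

bilateral_links = N * (N - 1) // 2   # complete graph: 45 pairwise exposures
cleared_links = N                    # star: each dealer exposed to the CCP

print(bilateral_links, cleared_links)  # 45 vs 10: fewer, larger exposures,
# all concentrated on a single hub that is itself embedded in the credit
# and liquidity network.
```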

Look at some of the network charts in the FSB/BIS/IOSCO report. They are pretty much indistinguishable from the sccaaarrry charts of interdependencies in OTC derivatives that were bruited about to scare the chillin into supporting clearing and collateral mandates.

The concentration of clearing members is particularly concerning. The report does not mention it, but this concentration creates other major headaches, such as the difficulties of porting positions if a big clearing member (or two) defaults. And the difficulties this concentration would produce in trying to auction off or hedge the positions of the big clearing firms.

Further, the report understates the degree of interconnections, and in fact ignores some of the most dangerous ones. It looks only at direct connections, but the indirect connections are probably more . . . what’s the word I’m looking for? . . . cancerous–yeah, that’s it. CCPs are deeply embedded in the liquidity supply and credit network, which connects all major (and most minor) players in the market. Market shocks that cause big price changes in turn cause big variation margin calls that reverberate throughout the entire financial system. Given the tight coupling of the liquidity system generally, and the particularly tight coupling of the margining mechanism specifically, this form of interconnection–not considered in the report–is most laden with systemic ramifications. As I’ve said ad nauseam: the connections that are intended to prevent CCPs from failing are exactly the ones that pose the greatest threat to the entire system.

To flash back to another of my past writings: this recent report, when compared to what Gensler said in 2010 (and others, notably Timmy!, were singing from the same hymnal), shows that clearing and collateral mandates were a bill of goods. These mandates were sold on the basis of lies large and small. And the biggest lie–and I said so at the time–was that clearing would reduce the interconnectivity of the financial system. So the FSB/BIS/IOSCO have called bullshit on Gary Gensler. Unfortunately, seven years too late.

 


July 1, 2017

All Flaws Great and Small, Frankendodd Edition

On Wednesday I had the privilege to deliver the keynote at the FOW Trading Chicago event. My theme was the fundamental flaws in Frankendodd–you’re shocked, I’m sure.

What I attempted to do was to categorize the errors. I identified four basic types.

Unintended consequences contrary to the objectives of DFA. This could also be called “counter-intended consequences”–not just unintended, but the precise opposite of the stated intent. The biggest example is, well, related to bigness. If you wanted to summarize a primary objective of DFA, it would be “to reduce the too big to fail problem.” Well, the very nature of DFA means that in some ways it exacerbates TBTF. Most notably, the resulting regulatory burdens actually favor scale, because they impose largely fixed costs. I didn’t mention this in my talk, but a related effect is that increasing regulation leads to greater influence activities by the regulated, and for a variety of reasons this tends to favor the big over the medium and small.
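
The scale point can be put in one line. As a stylized sketch (my notation, not anything in DFA itself): if compliance imposes a fixed cost $F$ plus a constant marginal cost $c$, the average regulatory cost of handling a volume of business $Q$ is

$$AC(Q) = \frac{F}{Q} + c,$$

which declines in $Q$: the larger the firm, the smaller the compliance burden per unit of activity.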

Perhaps the most telling example of the perverse effects of DFA is that it has dramatically increased concentration among FCMs. This exacerbates a variety of sources of systemic risk, including concentration risk at CCPs; difficulties in managing defaulted positions and porting the positions of the customers of troubled FCMs; and greater interconnections across CCPs. Concentration also fundamentally undermines the ability of CCPs to mutualize default risk. It can also create wrong-way risks as the big FCMs are in some cases also sources of liquidity support to CCPs.

I could go on.

Creation of new risks due to misdiagnoses of old risks. The most telling example here is the clearing and collateral mandates, which were predicated on the view that too much credit was extended via OTC derivatives transactions. Collateral and netting were expected to reduce this credit risk.

This is a category error. For one thing, it embodies a fallacy of composition: reducing credit in one piece of an interconnected financial system that possesses numerous ways to create credit exposures does not necessarily reduce credit risk in the system as a whole. For another, even to the extent that reducing credit extended via derivatives transactions reduces overall credit exposures in the financial system, it does so by creating another risk–liquidity risk. This risk is in my view more pernicious for many reasons. One reason is that it is inherently wrong-way in nature: the mandates increase demands for liquidity precisely during those periods in which liquidity supply typically contracts. Another is that it increases the tightness of coupling in the financial system. Tight coupling increases the risk of catastrophic failure, and makes the system more vulnerable to a variety of different disruptions (e.g., operational risks such as the temporary failure of a part of the payments system).

As the Clearing Cassandra I warned about this early and often, to little avail–and indeed, often to derision and scorn. Belatedly, regulators are coming to an understanding of the importance of this issue. Fed governor Jerome Powell recently emphasized this issue in a speech, and recommended that CCPs engage in liquidity stress testing. In a scathing report, the CFTC Inspector General criticized the agency’s cost-benefit analysis of its margin rules for non-cleared swaps, based largely on its failure to consider liquidity effects. (The IG report generously cited my work several times.)

But these are at best palliatives. The fundamental problem is inherent in the super-sizing of clearing and margining, and that problem is here to stay.

Imposition of “solutions” to non-existent problems. The best examples of this are the SEF mandate and position limits. The mode of execution of OTC swaps was not a source of systemic risk, and was not problematic even for reasons unrelated to systemic risk. Mandating a change to the freely-chosen modes of transaction execution has imposed compliance costs, and has also resulted in a fragmented swaps market: those who can escape the mandate (e.g., European banks trading € swaps) have done so, leading to bifurcation of the market for € swaps, which (a) reduces competition (another counter-intended consequence), and (b) reduces liquidity (also counter-intended).

The non-existence of a problem that position limits could solve is best illustrated by the pathetically flimsy justification for the rule set out in the CFTC’s proposal: the main example the CFTC mentioned is the Hunt silver episode. As I said during my talk, this is ancient history: when do we get to the Trojan War? If anything, the Hunts are the exception that proves the rule. The CFTC also pointed to Amaranth, but (a) failed to show that Amaranth’s activities caused “unreasonable and unwarranted price fluctuations,” and (b) did not demonstrate (unlike in the Hunt case) that Amaranth’s financial distress posed any threat to the broader market or any systemic risk.

It is sickly amusing that the CFTC touts that, based on historical data, the proposed limits would constrain few, if any, market participants. In other words, an entire industry must bear the burden of complying with a rule that the CFTC itself says would seldom be binding. Makes total sense, and surely passes a rigorous cost-benefit test! Constraining positions is unlikely to affect materially the likelihood of “unreasonable and unwarranted price fluctuations”. Regardless, positions are not likely to be constrained. Meaning that the probability that the regulation reduces such price fluctuations is close to zero, if not exactly equal to zero. Yet there would be an onerous, and ongoing, cost of compliance. Not to mention that when the regulation would in fact bind, it would potentially constrain efficient risk transfer.

The “comma and footnote” problem. Such a long and dense piece of legislation, and the long and detailed regulations that it has spawned, inevitably contain problems that can lead to protracted disputes and/or unpleasant surprises. The comma I refer to is in the position limit language of the DFA itself: as noted in the court decision that stymied the original CFTC position limit rule, the placement of the comma affects whether the language in the statute requires the CFTC to impose limits, or merely gives it the discretionary authority to do so in the event that it makes an explicit finding that the limits are required to reduce unwarranted and unreasonable price fluctuations. The footnotes I am thinking of were in the SEF rule: footnote 88 dramatically increased the scope of the rule, while footnote 513 circumscribed it.

And new issues of this sort crop up regularly, almost 7 years after the passage of Dodd-Frank. Recently Risk highlighted the fact that in its proposal for capital requirements on swap dealers, the CFTC (inadvertently?) potentially made it far more costly for companies like BP and Shell to become swap dealers. Specifically, whereas the Fed defines a financial company as one in which more than 85 percent of its activities are financial in nature, the CFTC proposes that a company can take advantage of more favorable capital requirements only if its financial activities are less than 15 percent of its overall activities. Meaning, for example, that a company with 80 percent financial activity would not count as a financial company under the Fed’s rule, but would be denied the favorable treatment under the proposed CFTC rule. This basically makes it impossible for predominantly commodity companies like BP and Shell to take advantage of the preferential capital treatment specifically included for them and their ilk in DFA. To the extent that these firms decide to incur costs (higher capital costs, or the cost of reorganizing their businesses to escape the rule’s bite) and become swap dealers nonetheless, that cost will not generate any benefit. To the extent that they decide that it is not worth the cost, the swaps market will be more concentrated and less competitive (more counter-intended effects).
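
A toy calculation makes the threshold mismatch concrete (a minimal sketch; the 85 and 15 percent thresholds are as described above, while the functions and the example firm are invented for illustration):

```python
# Purely illustrative sketch of the mismatched thresholds described above.

def fed_is_financial(financial_share):
    # Fed: a "financial company" has more than 85% financial activities
    return financial_share > 0.85

def cftc_favorable_treatment(financial_share):
    # Proposed CFTC rule: favorable capital treatment only if financial
    # activities are less than 15% of overall activities
    return financial_share < 0.15

share = 0.80  # a hypothetical commodity firm with 80% financial activity
print(fed_is_financial(share))          # False: nonfinancial under the Fed test
print(cftc_favorable_treatment(share))  # False: no relief under the CFTC test
# Any firm between 15% and 85% falls into the gap: nonfinancial to the Fed,
# yet denied the preferential treatment DFA intended for such firms.
```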

The position limits proposed regs provide a further example of this devil-in-the-details problem. The idea of a hedging carveout is eminently sensible, but the specifics of the CFTC’s hedging exemptions were unduly restrictive.

I could probably add more categories to the list. Different taxonomies are possible. But I think the foregoing is a useful way of thinking about the fundamental flaws in Frankendodd.

I’ll close with something that could make you feel better–or worse! For all the flaws in Frankendodd, MiFID II and EMIR make it look like a model of legislative and regulatory wisdom. The Europeans have managed to make errors in all of these categories–only more of them, and more egregious ones. For instance, as bad as the US position limit proposal is, it pales in comparison to the position limit regulations that the Europeans are poised to inflict on their firms and their markets.

 


May 30, 2017

Clearing Fragmentation Follies: We’re From the European Commission, and We’re Here to Help You

Filed under: Clearing,Derivatives,Economics,Financial Crisis II,Politics,Regulation — The Professor @ 6:33 am

Earlier this month came news that the European Commission was preparing legislation that would require clearing of Euro derivatives to take place in the Eurozone, rather than in the UK, which presently dominates. This has been an obsession with the Euros since before Brexit: Brexit has only intensified the efforts, and provided a convenient rationalization for doing so.

The stated rationale is that the EU (and the ECB) need regulatory control over clearing of Euro-denominated derivatives because a problem at the CCP that clears them could have destabilizing effects on the Eurozone, and could necessitate the ECB providing liquidity support to the CCP in the event of trouble. If they are going to support it in extremis, they are going to need to have oversight, they claim.

Several things to note here. First, it is possible to have a regulatory line of sight without having jurisdiction. Note that the USD clearing business at LCH is substantially larger than the € clearing business there, yet the Fed, the Treasury, and Congress are fine with that, and are not insisting that all USD clearing be done stateside. They realize that there are other considerations (which I discuss more below): to simplify, they realize that London has become a dominant clearing center for good economic reasons, and that the economies of scale and scope in clearing mean that concentration of clearing produces some efficiencies. Further, they realize that it is possible to have sufficient information to ensure that the foreign-domiciled CCP is acting prudently and not taking undue risks.

Canada is another example. A few years ago I wrote a white paper (under the aegis of the Canadian Market Infrastructure Committee) that argued that it would be efficient for Canada to permit clearing of C$ derivatives in London, rather than to require the establishment and use of a Canadian CCP. The Bank of Canada and the Canadian government agreed, and did not mandate the creation of a maple leaf CCP.

Second, if the Europeans think that by moving € clearing away from LCH they will be immune from any problems there, they are sadly mistaken. The clearing firms that dominate in LCH will also be dominant in any Europe-domiciled € CCP, and a problem at LCH will be shared with the Euro CCP, either because the problem arises because of a problem at a firm that is a clearing member of both, or because an issue at LCH not originally arising from a CM problem will adversely affect all its CMs, and hence be communicated to other CCPs. Consider, for example, the self-preserving way that LCH acted in the immediate aftermath of Brexit: this put liquidity demands on all its clearing members. With fragmented clearing, these strains would have been communicated to a Eurozone CCP.

When risks are independent, diversification and redundancy tend to reduce the risk of catastrophic failure: when risks are not independent, they can either fail to reduce the risk substantially, or actually increase it. For instance, if the failure of CCP 1 likely causes the failure of CCP 2, having two CCPs actually increases the probability of a catastrophe (holding fixed the probability that any given CCP fails). CCP risks are not independent, but highly dependent. This means that fragmentation could well worsen a clearing crisis, and is unlikely to make one less likely.
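
A back-of-the-envelope calculation illustrates the dependence point (a minimal sketch; the failure probability is invented):

```python
p = 0.01  # assumed probability that a given CCP fails in a crisis

one_ccp = p                    # one CCP: probability of a CCP failure event
two_indep = 1 - (1 - p) ** 2   # two CCPs, independent failures: ~0.0199
two_dependent = p              # two CCPs, one failure implies the other:
                               # same probability, but both fail together

print(one_ccp, two_indep, two_dependent)
# With independent risks, splitting clearing roughly doubles the chance that
# some CCP fails; with dependent risks, it buys no reduction in probability
# while turning any failure into a joint, harder-to-manage event.
```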

This raises another issue: dealing with a crisis will be more complicated, the more fragmented is clearing. Two self-preserving CCPs have an incentive to take actions that may well hurt the other. Relatedly, managing the positions of a defaulted CM will be more complicated because this requires coordination across self-interested CCPs. Due to the breaking of netting sets, liquidity strains during a crisis are likely to be greater in a crisis with multiple CCPs (and here is where the self-preservation instincts of the two CCPs are likely to present the biggest problems).

Thus, (a) it is quite likely that fragmentation of clearing does not reduce, and may increase, the probability of a systemic shock involving CCPs, and (b) conditional on some systemic event, fragmented CCPs will respond less effectively than a single one.

The foregoing relates to how CCP fragmentation will affect markets during a systemic event. Fragmentation also affects the day-to-day economics of clearing. The breaking of netting sets resulting from the splitting off of € clearing will increase collateral requirements. Perverse regulations, such as Basel III’s insistence on treating customer collateral as a CM asset against which capital must be held per the leverage requirement, will cause that collateral increase to raise substantially the cost of providing clearing services.
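
A stylized example shows why breaking a netting set is expensive (a minimal sketch; the positions and margin rate are invented):

```python
MARGIN_RATE = 0.05  # margin charged per unit of net exposure (hypothetical)

# One CCP: a member's offsetting € swap positions net against each other.
long_leg, short_leg = 100, -80
margin_single = MARGIN_RATE * abs(long_leg + short_leg)        # 0.05 * 20 = 1.0

# Two CCPs: each leg is margined where it clears, with no cross-CCP netting.
margin_split = MARGIN_RATE * (abs(long_leg) + abs(short_leg))  # 5.0 + 4.0 = 9.0

print(margin_single, margin_split)  # 1.0 vs 9.0: the same economic exposure
# now requires several times the collateral, and the leverage-ratio treatment
# of that collateral raises clearing members' capital costs on top of it.
```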

Fragmentation will also result in costly duplication of activities, both across CCPs and across CMs. For instance, it will entail duplicative oversight of CMs that clear both at LCH and the Eurozone CCP, and CMs that are members of both will have to staff separate interfaces with each. There will also be duplicative investments in IT (and the greater the number of potential IT points of failure, the greater the likelihood of at least one failure, which is almost certain to have deleterious consequences for CMs and the other CCP). Fragmentation will also interfere with information flows, and make it likely that each CCP has less information than an integrated CCP would have.

This article raises another real concern: a Eurozone clearer is more likely to be subject to political pressure than the LCH. It notes that the Continentals were upset about the LCH raising haircuts on Eurozone sovereigns during the PIIGS crisis. In some future crisis (and there is likely to be one) the political pressure to avoid such moves will be intense, even in the face of a real deterioration of the creditworthiness of one or more EU states. Further upon a point made above, political pressures in the EU and the UK could exacerbate the self-preserving actions that could lead to a failure to achieve efficient cooperation in a crisis, and indeed, could lead to a catastrophic coordination failure.

In sum, it’s hard to find an upside to the forced repatriation of € clearing from LCH to some Eurozone entity. Both in wartime (i.e., a crisis) and in peacetime, there are strong economies of scale and scope in clearing. A forced breakup will sacrifice these economies. Indeed, since breaking up CCPs is unlikely to reduce the probability of a clearing-related crisis, but will make the crisis worse when it does occur, it is particularly perverse to dress this up as a way of protecting the stability of the financial system.

I also consider it sickly ironic that the Euros say, well, if we are expected to provide a liquidity backstop to a big financial entity, we need to have regulatory control. Um, just who was supplying all that dollar liquidity via swap lines to desperate European banks during the 2008-2009 crisis? Without the Fed, European banks would have failed to obtain the dollar funding they needed to survive. By the logic of the EC in demanding control of € clearing, the Fed should require that the US have regulatory authority over all banks borrowing and lending USD.

Can you imagine the squealing in Brussels and every European capital in response to any such demand?

Speaking of European capitals, there is another irony. One thing that may derail the EC’s clearing grab is a disagreement over who should have primary regulatory responsibility over a Eurozone CCP. The ECB and ESMA think the job should be theirs: Germany, France, and Italy say nope, this should be the job of national central banks (e.g., the Bundesbank) or national financial regulators (e.g., BaFin).

So, hilariously, what may prevent (or at least delay) the fragmentation of clearing is a lack of political unity in the EU.  This is as good an illustration as any of the fundamental tensions within the EU. Everybody wants a superstate. As long as they are in control.

Ronald Reagan famously said that the nine scariest words in the English language are: “I’m from the government and I’m here to help.” I can top that: “I’m from the EC, and I’m here to help.” When it comes to demanding control of clearing, the EC’s “help” will be about as welcome as a hole in the head.

 


May 6, 2017

Son of Glass-Steagall: A Nostrum, Prescribed by Trump

Filed under: Economics,Financial crisis,History,Politics,Regulation — The Professor @ 7:30 pm

Apologies for the posting hiatus. I was cleaning out my mother’s house in preparation for her forthcoming move, a task that vies with the Labors of Hercules. I intended to post, but I was just too damn tired at the end of each day.

I’ll ease back into things by giving a heads up on my latest piece in The Hill, in which I argue that reviving Glass-Steagall’s separation of commercial and investment banking is a solution in search of a problem. One thing that I find telling is that the problem the original was intended to address in the 1930s was totally different from the one the revival is intended to address today. Further, the circumstances in the 1930s were wildly different from present conditions.

In the 1930s, the separation was intended to prevent banks from fobbing off bad commercial and sovereign loans to unwitting investors through securities underwriting. This problem in fact did not exist: extensive empirical evidence has shown that debt securities underwritten by universal banks (like J.P. Morgan) were of higher quality and performed better ex post than debt underwritten by standalone investment banks. Further, the most acute problem of the US banking system was not too big to fail, but too small to succeed. The banking crisis of the 1930s was directly attributable to the fragmented nature of the US banking system, and the proliferation of thousands of small, poorly diversified, thinly capitalized banks. The bigger national banks, and in particular the universal ones, were not the problem in 1932-33. Further, as Friedman and Schwartz showed long ago, a blundering Fed implemented policies that were fatal to such a rickety system.

In contrast, today’s issue is TBTF. But, as I note in The Hill piece, and have written here on occasion, Glass-Steagall separation would not have prevented the financial crisis. The institutions that failed were either standalone investment banks, GSEs, insurance companies involved in non-traditional insurance activities, or S&Ls. Universal banks that were shaky (Citi, Wachovia) were undermined by traditional lending activities. Wachovia, for instance, was heavily exposed to mortgage lending through its acquisition of a big S&L (Golden West Financial). There was no vector of contagion between the investment banking activities and the stability of any large universal bank.

As I say in The Hill, whenever the same prescription is given for wildly different diseases, it’s almost certainly a nostrum, rather than a cure.

Which puts me at odds with Donald Trump, for he is prescribing this nostrum. Perhaps in an effort to bring more clicks to my oped, the Monday after it appeared Trump endorsed a Glass-Steagall revival. This was vintage Trump. You can see his classic MO. He has a vague idea about a problem–TBTF. Not having thought deeply about it, he seizes upon a policy served up by one of his advisors (in this case, Gary Cohn, ex-Goldman–which would benefit from a GS revival), and throws it out there without much consideration.

The main bright spot in the Trump presidency has been his regulatory rollback, in part because this is one area in which he has some unilateral authority. Although I agree generally with this policy, I am under no illusions that it rests on deep intellectual foundations. His support of Son of Glass-Steagall shows this, and illustrates that no one (including Putin!) should expect an intellectually consistent (or even coherent) policy approach. His is, and will be, an instinctual presidency. Sometimes his instincts will be good. Sometimes they will be bad. Sometimes his instincts will be completely contradictory–and the call for a return to a very old school regulation in the midst of a largely deregulatory presidency shows that quite clearly.

 


April 15, 2017

Is the Order Handling Rule Necessary to Ensure Intense Competition in Securities Markets?

Filed under: Commodities,Derivatives,Economics,Exchanges,Regulation — The Professor @ 2:01 pm

A couple of weeks back Acting SEC Chairman Mike Piwowar announced a new Special Study of the Securities Markets, a reprise of the 1963 Special Study. This is an excellent idea, given that RegNMS (adopted in 2005) has (as was inevitable) spawned many unintended and unexpected consequences. Revision of this regulation in light of experience is almost certainly warranted, and any such revision should be predicated on sound scholarship, lest it be merely a Trojan Horse for vested interests arguing their books.

I wrote about RegNMS in Regulation at the time of its adoption in a piece titled “The Thirty Years War” (an allusion to the fact that the establishment of the National Market System in 1975 had sparked a continuing clash over securities market structure). Overall, I think that piece stands up well, particularly my concluding paragraph:

Therefore, the proposed rules are not the final battle in a Thirty Years War. I fully expect that in 2075, some professor will write an article about the latest clash in an ongoing Hundred Years War over securities market structure regulation.

It is certainly the case that the controversies and conflicts over market structure have continued unabated since 2005, and show no signs of letting up. (Cf. Flash Boys.) Chairman Piwowar’s call for a new Special Study is testament to that.

More specifically, the major prediction of my article has been fully borne out. I predicted that the Order Protection Rule in particular would break the network effect that resulted in the dominance of the NYSE in the securities it listed. Since RegNMS was passed, the highly concentrated listed stock market (where virtually all price discovering transactions in NYSE stocks occurred on the NYSE) has been utterly transformed, with four exchanges now splitting most of the business, with no exchange doing more than a quarter of the volume.

I further predicted that this would result in the disintermediation of traditional intermediaries–like specialists–and the substantial erosion of economic rents. This too has happened. This is best illustrated by the trajectory of Goldman’s investment in specialist firm Spear, Leeds & Kellogg. Goldman paid $5.4 billion for it in 2000 (before RegNMS) and sold it for a pittance–$30 million–in 2014. I didn’t foresee exactly the nature or identity of the new intermediaries–HFT–but I was broadly aware that there would be entry into market making, and that this would reduce trading costs and undermine incumbents with market power. Further, as I’ve written about recently, the new intermediaries don’t appear to be making rents in the new equilibrium.

The years since RegNMS have seen a dramatic decline in trading costs for investors, and it is likely the case that this decline is largely attributable to the increase in competition. Much of the controversy that has raged since 2005 relates to disputes over trading practices that were an inevitable consequence of the breaking of the NYSE near-monopoly–a process pejoratively referred to as “fragmentation.” In particular, multiple markets necessitate arbitrageurs, who effectively enforce the law of one price. The strategies and tactics arbitrageurs use often appear unsavory, and strike many as unfair: arbitrageurs get something even though they appear to do nothing substantive. Moreover, arbitrage uses up real resources. That’s costly, and it would be nice if this could be avoided, but that’s unlikely ever to be so. The trade-off between much greater competition (and reduced welfare losses due to the exercise of market power) and the expenditure of real resources to enforce the law of one price seems to be a great bargain.

Much of the criticism of RegNMS relates to the Order Protection Rule, which requires that no order can be executed on market X if a better price is displayed at market Y. The critics (e.g., the Principal Traders Association, which ironically represents some of the biggest beneficiaries of RegNMS) argue that this rule (a) has led to a proliferation of order types intended to ensure compliance with the rule, which make the market far more complex, and (b) requires traders to maintain connections with and monitor all trading venues displaying quotes, no matter how small.

These complaints have some merit. The crucial question is whether the equity trading marketplace will be as competitive without the Order Handling Rule as it is with it. This is an open question, and one which should be the focus of the SEC’s inquiry. For if the Order Handling Rule is a necessary condition for robust competition, the costs that the PTA and others identify are likely well worth paying in order to realize the benefits of competition.

My prediction that competition would intensify post-RegNMS was based on my analysis of the effects of the Order Handling Rule, which was in turn based on my work on liquidity network effects done in the late-90s and early-00s. Specifically, in the formal models I derived (e.g., here), the self-reinforcing liquidity effect obtains when investors decide which trading venue to submit an order to on the basis of expected execution cost (i.e., bid-ask spread, price impact). The market with the bigger fraction of trading activity typically offers the lowest execution cost. Therefore, traders submit their orders to the bigger market. This creates a self-reinforcing feedback loop (and a self-fulfilling prophecy) in which trading activity “tips” to a single exchange. (There are some complexities here, relating to cream skimming of uninformed order flow. See the linked paper for a discussion of that issue.)
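
For intuition, here is a minimal simulation of the tipping dynamic (not the formal model in the linked paper; all parameters are invented):

```python
import random

shares = [0.5, 0.5]  # two venues, initially splitting the order flow

def expected_cost(share):
    # stylized: expected execution cost falls as a venue's activity share rises
    return 1.0 / (0.1 + share)

for _ in range(10_000):
    costs = [expected_cost(s) for s in shares]
    # each order is routed to the venue with the lower expected cost,
    # with a little noise in traders' beliefs
    venue = 0 if costs[0] + random.gauss(0, 0.05) < costs[1] else 1
    shares[venue] += 0.001  # routing feeds back into market share
    total = sum(shares)
    shares = [s / total for s in shares]

print([round(s, 3) for s in shares])
# Typically ends near [1.0, 0.0] or [0.0, 1.0]: trading activity "tips"
# to a single venue, exactly the self-fulfilling dynamic described above.
```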

Mandating something akin to the Order Handling Rule forces order flow to the market offering the best price at a particular moment, not the one that offers the best price in expectation. As I phrased it in my Regulation paper, such a rule “socializes order flow”: even if an order is directed to a particular exchange, that exchange does not control that order flow and must direct it to any other exchange offering a better price.

I think that both theory and the post-RegNMS experience show that the Order Handling Rule is sufficient to break the liquidity network effect because it socializes order flow. But is it necessary? Maybe not, but it is important to try to find out before jettisoning it.

Here’s a story which suggests that the rule is not necessary in the modern electronic trading environment. One reason why traders may choose to submit orders to where they expect to get the best execution is search costs. In a floor-based environment in particular, it is costly to verify which market is offering the best price at any time. Moreover, since it takes time to get quotes from two floor-based markets, by the time you actually submit your order to the one giving the best quote, the market will have moved and you won’t get the price you thought you were going to get. So traders economize on search costs and the risks associated with delay by submitting the order to the market that usually offers the best price. Ironically, the inevitable result of this process is that there is only one market left standing.

Search is cheaper and faster–and arguably far cheaper and far faster–in the modern electronic environment. Based on feeds from multiple markets, an electronic trader (and in particular an automated trader) can rapidly compare quotes and send an order to the market offering the best quote, or by viewing depth (something pretty much impossible in the floor days, where much of the liquidity was in the hands of floor brokers) split an order among multiple venues to tap the liquidity in all of them.

In other words, the natural monopoly problem was far more likely in a floor-based environment where pre-trade transparency was so limited that search costs were very high: it was nigh on impossible to know precisely what the trading opportunities were, or to move fast enough to exploit the one that appeared best at any point in time, so traders submitted their orders to where they expected the opportunities to be the best. In contrast, electronification and automation have created such great pre-trade transparency, and the ability to act on it, that it is plausibly true that in this environment traders can and will submit their orders to whatever venue is offering the best trading opportunity at a point in time, regardless of whether it usually does so. In this story, technology eliminates the uncertainty and guesswork that created the liquidity network effect.

Maybe. Perhaps even likely. But I can’t be certain. Note that one complaint about the existing market structure is that even though everything has vastly speeded up, some traders are still faster than others. As a result, those who submit a market order in response to seeing a particular displayed price are often dismayed to learn that the market has moved before their order actually reaches the trading venue, and that their order is executed at a worse price than they had anticipated. Freed of the obligations of the Order Handling Rule, these traders may choose to submit their order to where they usually get the best price: if enough do this, the liquidity network effect will reemerge.

Further, the PTA and others have complained that it is costly to monitor and maintain connections with all trading venues as is necessary under the Order Handling Rule. If the Rule is relaxed or eliminated, one would expect that they will disconnect from some venues. If enough do this, the smaller venues will become unviable. After this happens, there will be fewer venues–and some traders may choose to disconnect from the smallest remaining one. This dynamic could result in another feedback loop that results in the survival of a single dominant exchange that exercises market power.

It is therefore not clear to me that elimination of the Order Handling Rule will result in traders having their cake (intense inter-exchange competition) and eating it too (less complexity, lower connection cost). Given the substantial benefits of greater competition that have been realized in the past dozen years, changes to the cornerstone of RegNMS should not be taken lightly. The Special Study, and the SEC, should pay close attention to how competition will evolve if the Order Handling Rule is eliminated. This analysis should take into account the existing technology, but also try to think of how technology will change in the aftermath of an elimination and how this technological change will affect competition.

Most importantly, any analysis must be predicated on an understanding that there are strong centripetal forces in securities trading. Any time traders have an incentive to direct order flow to the venue that is expected to offer the best price, the likely outcome is that only one venue will survive. The incentives of traders in a high speed, largely automated, and electronic market in the absence of an Order Handling Rule need to be considered carefully. It should not be assumed that technology alone will eliminate the incentive to direct orders to the market that is usually best, not the one that is best at any particular instant. This hypothesis should be probed vigorously and skeptically.

Experience in futures markets suggests that liquidity network effects can persist even in high speed, automated, electronic markets: futures contracts in a particular instrument exhibit a strong natural monopoly tendency, and strong tendencies towards tipping. It is arguable that the vertical integration of clearing, and the resulting non-fungibility of otherwise identical contracts traded on different venues, could contribute to this (though I am skeptical about that). But it could also mean that something like the Order Handling Rule (which is not present in futures markets) is necessary to create strong competition between multiple venues even in a highly computerized and automated trading environment.

This is the big issue in any revamping of RegNMS. It should be front and center in any analysis, including in the impending Special Study. The intense competition in the post-RegNMS world is a remarkable achievement, particularly in comparison with the near monopolistic market structure that existed before 2005. It would be a great shame if this were thrown away due to an incomplete analysis of what competition in a modern computerized market would be like in the absence of something like the Order Handling Rule.


April 14, 2017

SWP Climbs The Hill

Filed under: Derivatives,Economics,Exchanges,Regulation — The Professor @ 10:40 am

I have become a regular contributor to The Hill. My inaugural column on the regulation of spoofing is here. The argument in a nutshell is that: (a) spoofing involves large numbers of cancellations, but so do legitimate market making strategies, so there is a risk that aggressive policing of spoofing will wrongly penalize market makers, thereby raising the costs of supplying liquidity; (b) the price impacts of spoofing are very, very small, and transitory; (c) enforcement authorities sometimes fail to pursue manipulations that have far larger price impacts; therefore (d) a focus on spoofing is a misdirection of scarce enforcement resources.

My contributions will focus on finance and regulatory issues. So those looking for my trenchant political commentary will have to keep coming here 😉

Click early! Click often!


April 4, 2017

The Unintended Consequences of Blockchain Are Not Unpredictable: Respond Now Rather Than Repent Later*

Filed under: Clearing,Commodities,Derivatives,Economics,Regulation — The Professor @ 3:39 pm

In the past week the WSJ and the FT have run articles about a new bank-led initiative to move commodity trading onto a blockchain. In many ways, this makes great sense. By its nature, the process of recording commodity trades and shipments (a) collectively involves large numbers of spatially dispersed counterparties, (b) encompasses myriad terms, and (c) can give rise to costly disputes. As a result of these factors, the process is currently very labor intensive, fraught with operational risk (e.g., inadvertent errors), and vulnerable to fraud (cf. the Qingdao metals warehouse scandal of 2014). In theory, blockchain has the ability to reduce costs, errors, and fraud. Thus, it is understandable that traders and banks are quite keen on the potential of blockchain to reduce costs and perhaps even revolutionize the trading business.

But before you get too excited, a remark by my friend Christophe Salmon at Trafigura is laden with deep implications that should give you pause, and lead you to consider the likely consequences of widespread adoption of blockchain:

Christophe Salmon, Trafigura’s chief financial officer, said there would need to be widespread adoption by major oil traders and refiners to make blockchain in commodity trading viable in the long term.

This seemingly commonsense and innocuous remark is actually laden with implications of unintended consequences that should be recognized and considered now, before the blockchain train gets too far down the track.

In essence, Christophe’s remark means that to be viable blockchain has to scale. If it doesn’t scale, it won’t reduce cost. But if it does scale, a blockchain for a particular application is likely to be a natural monopoly, or at most a natural duopoly. (Issues of scope economies are also potentially relevant, but I’ll defer discussion of that for now.)

Indeed, if there are no technical impediments to scaling (which in itself is an open question–note the block size debate in Bitcoin), the “widespread adoption” feature that Christophe identifies as essential means that network effects create scale economies that are likely to result in the dominance of a single platform. Traders will want to record their business on the blockchain that their counterparties use. Since many trade with many, this creates a centripetal force that will tend to draw everyone to a single blockchain.

I can hear you say: “Well, if there is a public blockchain, that happens automatically because everyone has access to it.” But the nature of public blockchain means that it faces extreme obstacles that make it wildly impractical for commercial adoption on the scale being considered, not just in commodity markets, but in virtually every aspect of the financial markets. Commercial blockchains will be centrally governed, limited access, private systems rather than a radically decentralized, open-access commons.

The “forking problem” alone is a difficulty. As demonstrated by Bitcoin in 2013 and Ethereum in 2016, public blockchains based on open source are vulnerable to “forking,” whereby uncoordinated changes in the software (inevitable in an open source system that lacks central governance and coordination) result in the simultaneous existence of multiple, parallel blockchains. Such forking would destroy the network economy/scale effects that make the idea of a single database attractive to commercial participants.

Prevention of forking requires central governance to coordinate changes in the code–something that offends the anarcho-libertarian spirits who view blockchain as a totally decentralized mechanism.

Other aspects of the pure version of an open, public blockchain make it inappropriate for most financial and commercial applications. For instance, public blockchain is touted because it does not require trust in the reputation of large entities such as clearing networks or exchanges. But the ability to operate without trust does not come for free.

Trust and reputation are indeed costly: as Becker and Stigler first noted decades ago, and others have formalized since, reputation is a bonding mechanism that requires the trusted entity to incur sunk costs that would be lost if it violates trust. (Alternatively, the trusted entity has to have market power–which is costly–that generates a stream of rents that is lost when trust is violated. That is, to secure trust prices have to be higher and output lower than would be necessary in a zero transactions cost world.)
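
The logic fits in one line. As a stylized sketch of the Becker-Stigler condition (my notation): trust is self-enforcing only if the one-shot gain $G$ from violating it does not exceed the present value of the rents at stake,

$$G \le \frac{R}{r},$$

where $R$ is the per-period rent the trusted entity earns and $r$ is the discount rate.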

But public blockchains have not been able to eliminate trust without cost. In Bitcoin, trust is replaced with “proof of work.” Well, work means cost. The blockchain mining industry consumes vast amounts of electricity and computing power in order to prove work. It is highly likely that the cost of creating trusted entities is lower than the cost of proof of work or alternative ways of eliminating the need for trust. Thus, a (natural monopoly) commercial blockchain is likely to have to be a trusted centralized institution, rather than a decentralized anarchist’s wet-dream.

Blockchain is also touted as permitting “smart contracts,” which automatically execute certain actions when certain pre-defined (and coded) contingencies are met. But “smart contracts” is not a synonym for “complete contracts,” i.e., contracts in which every possible contingency is anticipated, and each party’s actions under each contingency are specified. Thus, even with smart (but incomplete) contracts, there will inevitably arise unanticipated contingencies.

Parties will have to negotiate what to do under these contingencies. Given that this will usually be a bilateral bargaining situation under asymmetric information, the bargaining will be costly and sometimes negotiations will break down. Moreover, under some contingencies the smart contracts will automatically execute actions that the parties do not expect and would like to change: here, self-execution prevents such contractual revisions, or at least makes them very difficult.
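
A toy example of the problem (a minimal sketch; the contract, trigger, and numbers are all invented):

```python
# A toy "smart contract" illustrating incompleteness: the coded contingency
# self-executes, but nothing anticipates a bad price feed, and self-execution
# forecloses renegotiation after the fact.

class ToySwap:
    def __init__(self, strike):
        self.strike = strike
        self.settled = False

    def on_price_update(self, oracle_price):
        # Coded contingency: a price below the strike triggers an automatic
        # margin transfer from one party to the other.
        if not self.settled and oracle_price < self.strike:
            self.settled = True
            return self.strike - oracle_price
        return 0.0

swap = ToySwap(strike=50.0)
payment = swap.on_price_update(0.01)  # unanticipated contingency: a stale or
                                      # erroneous feed prints a near-zero price
print(payment)  # 49.99 transferred automatically; parties to a paper contract
# would renegotiate an obvious mistake, but the self-executing contract moved first.
```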

Indeed, it may be the execution of the contractual feature that first makes the parties aware that something has gone horribly wrong. Here another touted feature of pure blockchain–immutability–can become a problem. The revelation of information ex post may lead market participants to desire to change the terms of their contract. Can’t do that if the contracts are immutable.

Paper and ink contracts are inherently incomplete too, and this is why there are centralized mechanisms to address incompleteness. These include courts, but also, historically, bodies like stock or commodity exchanges, or merchants’ associations (in diamonds, for instance) have helped adjudicate disputes and to re-do deals that turn out to be inefficient ex post. The existence of institutions to facilitate the efficient adaption of parties to contractual incompleteness demonstrates that in the real world, man does not live (or transact) by contract alone.

Thus, the benefits of a mechanism for adjudicating and responding to contractual incompleteness create another reason for a centralized authority for blockchain, even–or especially–blockchains with smart contracts.

Further, the blockchain (especially with smart contracts) will be a complex interconnected system, in the technical sense of the term. There will be myriad possible interactions between individual transactions recorded on the system, and these interactions can lead to highly undesirable, and entirely unpredictable, outcomes. A centralized authority can greatly facilitate the response to such crises. (Indeed, years ago I posited this as one of the reasons for integration of exchanges and clearinghouses.)

And the connections are not only within a particular blockchain. There will be connections between blockchains, and between a blockchain and other parts of the financial system. Consider for example smart contracts that in a particular contingency dictate large cash flows (e.g., margin calls) from one group of participants to another. This will lead to a liquidity shock that will affect banks, funding markets, and liquidity supply mechanisms more broadly. Since the shock can be destabilizing and lead to actions that are individually rational but systemically destructive if uncoordinated, central coordination can improve efficiency and reduce the likelihood of a systemic crisis. That’s not possible with a radically decentralized blockchain.

I could go on, but you get the point: there are several compelling reasons for centralized governance of a commercial blockchain like that envisioned for commodity trading. Indeed, many of the features that attract blockchain devotees are bugs–and extremely nasty ones–in commercial applications, especially if adopted at large scale as is being contemplated. As one individual who works on commercializing blockchain told me: “Commercial applications of blockchain will strip out all of the features that the anarchists love about it.”

So step back for a minute. First, Christophe’s point about “widespread adoption,” combined with an understanding of the network economies inherent in the financial and commercial applications of blockchain, means that blockchain is likely to be a natural monopoly within a particular application (e.g., physical oil trading), and likely across applications due to economies of scope (which plausibly exist because major market participants will transact in multiple segments, and because common coding can be reused across different applications, to name just two factors). Second, a totally decentralized, open-access, public blockchain has numerous disadvantages in large-scale commercial applications: central governance creates value.

Therefore, commercial blockchains will be “permissioned” in the lingo of the business. That is, unlike public blockchain, entry will be limited to privileged members and their customers. Moreover, the privileged members will govern and control the centralized entity. It will be a private club, not a public commons. (And note that even the Bitcoin blockchain is not ungoverned. Everyone is equal, but the big miners–and there are now a relatively small number of big miners–are more equal than others. The Iron Law of Oligarchy applies in blockchain too.)

Now add another factor: the natural monopoly blockchain will likely not be contestable, for reasons very similar to the ones I have written about for years to demonstrate why futures and equity exchanges are typically natural monopolies that earn large rents because they are largely immune from competitive entry. Once a particular blockchain achieves critical mass, there will be the lock-in problem from hell: for an entrant to achieve the scale necessary to compete, a large set of users would have to move from the incumbent in a coordinated way. This is difficult, if not impossible, to arrange. Three Finger Brown could count on his bad hand the number of times that has happened in futures trading.
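The lock-in logic is easy to see in a toy adoption model (all parameters invented for illustration): when a user’s payoff rises with the share of other users on the same platform, no one defects from the incumbent alone, and the entrant needs a large coordinated coalition.

```python
# Toy network-effects model, purely illustrative: payoff on a platform
# is standalone quality plus a network benefit proportional to the
# share of other users already on it.

def payoff(quality, network_weight, adoption_share):
    return quality + network_weight * adoption_share

incumbent_q, entrant_q = 1.0, 1.3  # the entrant is "better" standalone
w = 2.0                            # strength of the network effect

# A lone defector compares the incumbent at full adoption with the
# entrant at zero adoption:
stay = payoff(incumbent_q, w, 1.0)   # 3.0
leave = payoff(entrant_q, w, 0.0)    # 1.3: defecting alone is irrational
print(f"stay: {stay}, defect alone: {leave}")

# The entrant wins only if a coalition of share s moves together, i.e.
# entrant_q + w * s > incumbent_q + w * (1 - s):
critical_share = (incumbent_q - entrant_q + w) / (2 * w)
print(f"critical coalition share: {critical_share:.2f}")  # 0.42
```

In this toy parameterization a "better" entrant still needs more than 40 percent of users to jump simultaneously, which is precisely the coordination problem that almost never gets solved in practice.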

Now do you understand why banks are so keen on the blockchain? Yes, they couch it in terms of improving transactional efficiency, and it does that. But it also presents the opportunity to create monopoly financial market infrastructures that are immune from competitive entry. The past 50 years have seen an erosion of bank dominance–“disintermediation”–that has also eroded their rents. Blockchain gives the empire a chance to strike back. A coalition of banks (and note that most blockchain initiatives are driven by a bank-led cooperative, sometimes in partnership with a technology provider or providers) can form a blockchain for a particular application or applications, exploit the centripetal force arising from network effects, and gain a natural monopoly largely immune from competitive entry. Great work if you can get it. And believe me, the banks are trying. Very hard.

Left to develop on its own, therefore, the blockchain ecosystem will evolve to look like the exchange ecosystem of the 19th and early 20th centuries: monopoly coalitions of intermediaries–“clubs” or “cartels”–offering transactional services, with member governance, and with the members reaping economic rents.

Right now regulators are focused on the technology, and (like many others) seem to be smitten with the potential of the technology to reduce certain costs and risks. They really need to look ahead and consider the market structure implications of that technology. Just as the natural monopoly nature of exchanges eventually led to intense disputes over the distribution of the benefits that they created, which in turn led to regulation (after bitter political battles), the fundamental economics of blockchain are likely to result in similar conflicts.

The law and regulation of blockchain is likely to be complicated and controversial precisely because natural monopoly regulation is inherently complicated and controversial. The yin and yang of financial infrastructure in particular is that the technology likely makes monopoly efficient, but also creates the potential for the exercise of market power (and, I might add, the exercise of political power to support and sustain market power, and to influence the distribution of rents that result from that market power). Better to think about those things now when things are still developing, than when the monopolies are developed, operating, and entrenched–and can influence the political and regulatory process, as monopolies are wont to do.

The digital economy is driven by network effects: think Google, Facebook, Amazon, and even Twitter. In addition to creating new efficiencies, these dominant platforms create serious challenges for competition, as scholars like Ariel Ezrachi and Maurice Stucke have shown:

Peter Thiel, the successful venture capitalist, famously noted that ‘Competition Is for Losers.’ That useful phrase captures the essence of many technology markets. Markets in which the winner of the competitive process is able to cement its position and protect it. Using data-driven network effects, it can undermine new entry attempts. Using deep pockets and the nowcasting radar, the dominant firm can purchase disruptive innovators.

Our new economy enables the winners to capture much more of the welfare. They are able to affect downstream competition as well as upstream providers. Often, they can do so with limited resistance from governmental agencies, as power in the online economy is not always easily captured using traditional competition analysis. Digital personal assistants, as we explore, have the potential to strengthen the winner’s gatekeeper power.

Blockchain will do the exact same thing.

You’ve been warned.

*My understanding of these issues has benefited greatly from many conversations over the past year with Izabella Kaminska, who saw through the hype well before pretty much anyone. Any errors herein are of course mine.


March 27, 2017

Seeing the OTC Derivatives Markets (and the Financial Markets) Like a State

Filed under: Clearing,Derivatives,Economics,Regulation — The Professor @ 12:07 pm

In the years since the financial crisis, and in particular the period preceding and immediately following the passage of Frankendodd, I can’t tell you how many times I saw diagrams that looked like this:

[Figure: YellenCCPDiagram_1, a tangle of bilateral counterparty connections]

[Figure: YellenCCPDiagram_2, a hub-and-spoke network with a CCP at the hub]

The top diagram is a schematic representation of an OTC derivatives market, with a tangle of bilateral connections between counterparties. The second is a picture of a hub-and-spoke trading network with a CCP serving as the hub. (These particular versions of this comparison are from a 2013 Janet Yellen speech.)
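The visual appeal of the second picture has a simple combinatorial basis: with N participants, a fully bilateral market has up to N(N-1)/2 counterparty links, while a hub-and-spoke market has only N. A few lines of arithmetic (illustrative only) show why the cleared diagram looks so much tidier:

```python
# Link counts behind the two diagrams: fully bilateral vs. hub-and-spoke.

def bilateral_links(n):
    return n * (n - 1) // 2  # every pair of participants potentially linked

def ccp_links(n):
    return n                 # each participant faces only the hub

for n in (10, 50, 100):
    print(f"N={n:3d}: bilateral={bilateral_links(n):5d}, CCP={ccp_links(n):4d}")
# N= 10: bilateral=   45, CCP=  10
# N= 50: bilateral= 1225, CCP=  50
# N=100: bilateral= 4950, CCP= 100
```

Tidier, though, is not the same as safer: the hub concentrates rather than eliminates counterparty exposure, and the picture says nothing about the connections between this market and the rest of the financial system, a point developed below.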

These diagrams came to mind when re-reading James Scott’s Seeing Like a State and his Two Cheers for Anarchism. Scott argues that states have an obsession with making the societies they rule over “legible” in order to make them easier to tax, regulate, and control. States are confounded by evolved complexity and emergent orders: such systems are difficult to comprehend, and what cannot be comprehended cannot be easily ruled. So states attempt to impose schemes to simplify such complex orders. Examples that Scott gives include standardization of language and suppression of dialects; standardization of land tenure, measurements, and property rights; cadastral censuses; population censuses; the imposition of familial names; and urban renewal (e.g., Haussmann’s/Napoleon III’s massive reconstruction of Paris). These things make a populace easier to tax, conscript, and control.

Complex realities of emergent orders are too difficult to map. So states conceive of a mental map that is legible to them, and then impose rules on society to force it to conform with this mental map.

Looking back at the debate over OTC markets generally, and clearing, centralized execution, and trade reporting in particular, it is clear that legislators and regulators (including central banks) found these markets to be illegible. Figures like the first one–themselves greatly simplified representations of OTC reality–were bewildering and disturbing to them. The second figure was much more comprehensible, and much more comforting: not just because they could comprehend it better, but because it gave them the sense that they could impose an order that would be easier to monitor and control. The emergent order was frightening in its wildness; the sense of imposing order and control was deeply comforting.

But as Scott notes, attempts to impose control on emergent orders (which in Scott’s books include both social and natural orders, e.g., forests) themselves carry great risks: although hard to comprehend, these orders evolved the way they did for a reason, and their parts interact in ways that are poorly understood, and sometimes not understood at all. Attempts to make reality fit a simple mental map can cause the system to react in unpredicted and unpredictable ways, many of which are perverse.

My criticism of the attempts to “reform” OTC markets was largely predicated on my view that the regulators’ simple mental maps did great violence to complex reality. Even though these “reform” efforts were framed as ways of reducing systemic risk, they were fatally flawed because they were profoundly unsystemic in their understanding of the financial system. My critique focused specifically on the confident assertions based on the diagrams presented above. By focusing only on the OTC derivatives market, and ignoring the myriad connections of this market to other parts of the financial system, regulators could not possibly have comprehended the systemic implications of what they were doing. Indeed, even the portrayal of the OTC market alone was comically simplistic. The fallacy of composition played a role here too: the regulators thought they could reform the system piece by piece, without thinking seriously about how these pieces interacted in non-linear ways.

The regulators were guilty of the hubris illustrated beautifully by the parable of Chesterton’s Fence:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

In other words, the regulators should have understood the system and why it evolved the way that it did before leaping in to “reform” it. As Chesterton says, such attempts at reformation quite frequently result in deformation.

Somewhat belatedly, there are efforts underway to map the financial system more accurately. The work of Richard Bookstaber and various colleagues under the auspices of the Office of Financial Research to create multilayer maps of the financial system is certainly a vast improvement on the childish stick figure depictions of Janet Yellen, Gary Gensler, Timmy Geithner, Chris Dodd, Barney Frank et al. But even these more sophisticated maps are extreme abstractions, not least because they cannot capture incentives, the distribution of information among myriad market participants, and the motivations and behaviors of these participants. Think of embedding these maps in the most complicated extensive-form, large-N-player game you can imagine, and you might have some inkling of how inadequate any schematic representation of the financial system is likely to be. When you combine this with the fact that in complex systems, even slight changes in initial conditions can result in completely different outcomes, the futility of “seeing like a state” in this context becomes apparent. The map of initial conditions is inevitably crude, making it an unreliable guide to understanding the system’s future behavior.
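The sensitivity point can be illustrated with the stock example from the complexity literature, the logistic map (offered strictly as an analogy, not as a model of any financial market): two trajectories that start a hair apart soon bear no resemblance to one another, so a slightly wrong map of initial conditions is a useless guide to where the system ends up.

```python
# Logistic map in its chaotic regime: x' = r * x * (1 - x), r = 3.9.
# Strictly an analogy for sensitivity to initial conditions.

def trajectory(x0, r=3.9, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.5000000)
b = trajectory(0.5000001)  # "measured" wrong in the seventh decimal
for t in (0, 10, 20, 30, 40):
    print(f"t={t:2d}  a={a[t]:.6f}  b={b[t]:.6f}  gap={abs(a[t] - b[t]):.6f}")
# By t of roughly 30 the two paths have completely decoupled: a crude
# snapshot of the initial state predicts essentially nothing.
```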

In my view, Scott goes too far. Some state-driven standardization has dramatically reduced transactions costs and opened up new possibilities for wealth-enhancing exchange (at some cost, yes, but these costs are almost certainly less than the benefits), yet Scott looks askance at virtually all such interventions. Thus, I do not exclude the possibility of true reform. But Scott’s warning about the dangers of forcing complex emergent orders to conform to simplified, “legible” mental constructs must be taken seriously, and should inform any attempt to intervene in something like the financial system. Alas, this did not happen when legislators and regulators embarked on their crusade to reorganize the world financial system wholesale. It is frightening indeed to contemplate that this crusade was guided by mental maps as crude as those supposedly illustrating the virtues of moving from an emergent bilateral OTC market to a tamed hub-and-spoke cleared one.

PS. I was very disappointed by this presentation by James Scott. He comes off as a doctrinaire leftist anthropologist (but I repeat myself), which is definitely not the case in his books. Indeed, the juxtaposition of Chesterton and Scott shows how deeply conservative Scott is (in the literal sense of the word).


March 24, 2017

Creative Destruction and Industry Life Cycles, HFT Edition

Filed under: Derivatives,Economics,Exchanges,Regulation — The Professor @ 11:56 am

No worries, folks: I’m not dead! Just a little hiatus while in Geneva for my annual teaching gig at Université de Genève, followed by a side trip for a seminar (to be released as a webinar) at ESSEC. The world didn’t collapse without my close attention, but at times it looked like a close-run thing. But then again, I was restricted to watching CNN, so my perception may be a little bit warped. Well, not a little bit: I have to say that I knew CNN was bad, but I didn’t know how bad until I watched a bit while on the road. Appalling doesn’t even come close to describing it. Strident, tendentious, unrelentingly biased, snide. I switched over to RT to get more reasonable coverage. Yes. It was that bad.

There are so many allegations regarding surveillance swirling about that only fools would rush in to comment on that now. I’ll be an angel for once in the hope that some actual verifiable facts come out.

So for my return, I’ll just comment on a set of HFT-related stories that came out during my trip. One is Alex Osipovich’s story on HFT traders falling on hard times. Another is that Virtu is bidding for KCG. A third one is that Quantlabs (a Houston outfit) is buying one-time HFT high flyer Teza. And finally, one that pre-dates my trip, but fits the theme: Thomas Peterffy’s Interactive Brokers Group is exiting options market making.

Alex’s story repeats Tabb Group data documenting a roughly 85 percent drop in HFT revenues in US equity trading. The proposed Virtu-KCG tie-up and the consummated Quantlabs-Teza deal are indications of the consolidation typical of maturing industries, and of a shift in these firms’ business models. The Quantlabs-Teza story is particularly interesting. It suggests that it is no longer possible (or at least remunerative) to get a competitive edge via speed alone. Instead, the focus is shifting to extracting information from the vast flow of data generated in modern markets. Speed will matter here–he who analyzes faster, all else equal, will have an edge. But the margin for innovation will shift from hardware to data analytics software (presumably paired with specialized hardware optimized to use it).

None of these developments is surprising. They are part of the natural life cycle of a new industry. Indeed, I discussed this over two years ago:

In fact, HFT has followed the trajectory of any technological innovation in a highly competitive environment. At its inception, it was a dramatically innovative way of performing longstanding functions undertaken by intermediaries in financial markets: market making and arbitrage. It did so much more efficiently than incumbents did, and so rapidly it displaced the old-style intermediaries. During this transitional period, the first-movers earned supernormal profits because of cost and speed advantages over the old school intermediaries. HFT market share expanded dramatically, and the profits attracted expansion in the capital and capacity of the first-movers, and the entry of new firms. And as day follows night, this entry of new HFT capacity and the intensification of competition dissipated these profits. This is basic economics in action.

. . . .

Whether it is by the entry of a new destructively creative technology, or the inexorable forces of entry and expansion in a technologically static setting, one expects profits earned by firms in one wave of creative destruction to decline.  That’s what we’re seeing in HFT.  It was definitely a disruptive technology that reaped substantial profits at the time of its introduction, but those profits are eroding.

That shouldn’t be a surprise.  But it no doubt is to many of those who have made apocalyptic predictions about the machines taking over the earth.  Or the markets, anyways.

Or, as Herb Stein famously said as a caution against extrapolating from current trends, “If something cannot go on forever, it will stop.” Those making dire predictions about HFT were largely extrapolating from the events of 2008-2010, and ignored the natural economic forces that constrain growth and dissipate profits. HFT is now a normal, competitive business earning normal, competitive profits.  And hopefully this reality will eventually sink in, and the hysteria surrounding HFT will fade away just as its profits did.

The rise and fall of Peterffy/Interactive illustrates Schumpeterian creative destruction in action. Interactive was part of a wave of innovation that displaced the floor. Now it can’t compete against HFT. And as the other articles show, HFT is in the maturation stage during which profits are competed away (ironically, a phenomenon that was central to Marx’s analysis, and which Schumpeter’s theory was specifically intended to address).

This reminds me of a set of conversations I had with a very prominent trader. In the 1990s he said he was glad to see that the markets were becoming computerized because he was “tired of being fucked by the floor.” About 10 years later, he lamented to me how he was being “fucked by HFT.” Now HFT is an industry earning “normal” profits (in the economics lexicon) due to intensifying competition and technological maturation: the fuckers are fucking each other now, I guess.

One interesting public policy issue in the Peterffy story is the role played by internalization of order flow in undermining the economics of Interactive: there is also an internalization angle to the Virtu-KCG story, because one reason for Virtu to buy KCG is to obtain the latter’s juicy retail order flow. I’ve been writing about this (and related) subjects for going on 20 years, and it’s complicated.

Internalization (and other trading in non-lit/exchange venues) reduces liquidity on exchanges, which raises trading costs there and reduces the informativeness of prices. Those factors are usually cited as criticisms of off-exchange execution, but there are other considerations. Retail order flow (likely uninformed) gets executed more cheaply, as it should, because it is less costly to serve (due to the fact that it poses less of an adverse selection risk). (Who benefits from this cheaper execution is a matter of controversy.) Furthermore, as I pointed out in a 2002 Journal of Law, Economics and Organization paper, off-exchange venues provide competition for exchanges that often have market power (though this is less likely to be the case post-RegNMS, which made inter-exchange competition much more intense). Finally, some (and arguably a lot of) informed trading is rent seeking: by reducing the ability of informed traders to extract rents from uninformed traders, internalization (and dark markets) reduces the incentives to invest excessively in information collection (an incentive Hirshleifer the Elder noted in the 1970s).
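The adverse-selection point is textbook microstructure; a minimal Glosten-Milgrom style calculation (stylized, with invented numbers) shows why segregated retail flow can be executed at a tighter spread than anonymous exchange flow that mixes in informed traders:

```python
# Stylized Glosten-Milgrom quotes: the market maker widens the spread
# as the probability of trading against an informed counterparty rises.
# All numbers are invented for illustration.

def quotes(v_high, v_low, p_informed):
    """Zero-expected-profit bid/ask when the asset is worth v_high or
    v_low with equal odds and a fraction p_informed of flow is informed."""
    ask = ((1 + p_informed) * v_high + (1 - p_informed) * v_low) / 2
    bid = ((1 - p_informed) * v_high + (1 + p_informed) * v_low) / 2
    return bid, ask

# Anonymous exchange flow, with some informed traders mixed in:
bid, ask = quotes(101, 99, p_informed=0.30)
print(f"lit venue:    {bid:.2f} / {ask:.2f}  spread {ask - bid:.2f}")

# Segregated retail flow, with (almost) no informed traders:
bid, ask = quotes(101, 99, p_informed=0.02)
print(f"internalized: {bid:.2f} / {ask:.2f}  spread {ask - bid:.2f}")
# Retail flow trades at a tighter spread precisely because it poses
# little adverse-selection risk, which is why internalizers compete
# for it, and why siphoning it off widens spreads on lit venues.
```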

Securities and derivatives market structure is fascinating, and it presents many interesting analytical challenges. But these markets, and the firms that operate in them, are not immune to the basic forces of innovation, imitation, and entry that economists have understood for a long time (but which too many have forgotten, alas). We are seeing those forces at work in real time, and the fates of firms like Interactive and Teza, and the HFT sector overall, are living illustrations.

 


