Streetwise Professor

June 9, 2021

GiGi’s Back!: plus ça change, plus c’est la même chose

Filed under: Clearing,Economics,Exchanges,HFT,Regulation — cpirrong @ 2:45 pm

One of the few compensations I get from a Biden administration is that I have an opportunity to kick around Gary Gensler–“GiGi” to those in the know–again. Apparently feeling his way in his first few months as Chairman of the SEC, Gensler has been relatively quiet, but today he unburdened himself with deep thoughts about stock market structure. If you didn’t notice, “deep” was sarcasm. His opinions are actually trite and shallow, and betray a failure to ask penetrating questions. Plus ça change, plus c’est la même chose.

Not that he doesn’t have questions. About payment for order flow (“PFOF”) for instance:

Payment for order flow raises a number of important questions. Do broker-dealers have inherent conflicts of interest? If so, are customers getting best execution in the context of that conflict? Are broker-dealers incentivized to encourage customers to trade more frequently than is in those customers’ best interest?

But he misses the big question: why is payment for order flow such a big deal in the first place?

Relatedly, Gensler expresses concern about what traders do in the dark:

First, as evidenced in January, nearly half of the trading interest in the equity market either is in dark pools or is internalized by wholesalers. Dark pools and wholesalers are not reflected in the NBBO. Moreover, the NBBO is also only as good as the market itself. Thus, under the segmentation of the current market, nearly half of trading along with a significant portion of retail market orders happens away from the lit markets. I believe this may affect the width of the bid-ask spread.

Which begs the question: why is “nearly half of the trading interest in the equity market either is in dark pools or is internalized by wholesalers”?

Until you answer these big questions, studying ancillary ones like his questions regarding PFOF and the NBBO is a waste of time.

The economics are actually very straightforward. In competitive markets, customers who impose different costs on suppliers will pay different prices. This is “price discrimination” of a sort, but not price discrimination based on an exploitation of market power and differences in customer demand elasticities: it is price differentiation based on differences in cost.

Retail order flow is cheaper to intermediate than institutional order flow. Some institutional order flow is cheaper to intermediate than other such flows. Competitive pressures will find ways to ensure flows that are cheaper to intermediate pay lower prices. PFOF, dark pools, etc., are all means of segmenting order flow based on cost.
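The cost logic here can be illustrated with a stylized Glosten-Milgrom-type spread calculation. All the numbers below (asset values, informed-trader shares, the retail/institutional mix) are hypothetical, chosen purely for illustration:

```python
# Stylized Glosten-Milgrom sketch: the half-spread a competitive market maker
# quotes equals the adverse-selection cost of trading against informed flow.
# All parameter values are illustrative, not drawn from any real market.

def half_spread(informed_share, v_high=101.0, v_low=99.0):
    """Competitive ask minus mid when a fraction of arriving buys is informed.

    Value is v_high or v_low with equal probability; informed traders buy
    only when value is high, uninformed traders buy half the time.
    """
    p_buy_high = informed_share + (1 - informed_share) / 2  # P(buy | V = high)
    p_buy_low = (1 - informed_share) / 2                    # P(buy | V = low)
    ask = (0.5 * p_buy_high * v_high + 0.5 * p_buy_low * v_low) / (
        0.5 * p_buy_high + 0.5 * p_buy_low
    )
    return ask - (v_high + v_low) / 2

# Pooled market: 60% retail flow (uninformed) mixed with 40% institutional
# flow that is informed 30% of the time -> everyone pays the blended spread.
pooled = half_spread(0.6 * 0.0 + 0.4 * 0.3)

# Segmented market: a wholesaler buys the retail flow and prices it separately.
retail_only = half_spread(0.0)
institutional_only = half_spread(0.3)

print(f"pooled half-spread:      {pooled:.3f}")
print(f"segmented retail:        {retail_only:.3f}")
print(f"segmented institutional: {institutional_only:.3f}")
```

In this toy model, segmented retail flow pays no adverse-selection spread at all, while pooling forces it to subsidize the informed flow: exactly the cross-subsidy that competitive segmentation devices like PFOF undo.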

Trying to restrict cost-based price differences by banning or restricting certain practices will lead clever intermediaries to find other ways to differentiate based on cost. This has always been so, since time immemorial.

In essence, Gensler and many other critics of US market structure want to impose uniform pricing that doesn’t reflect cost differences. This would be, in essence, a massive scheme of cross subsidies. Ironically, the retail traders for whom Gensler exhibits such touching concern would actually be the losers here.

Cross subsidy schemes are inherently unstable. There are tremendous competitive pressures to circumvent them. As the history of virtually every regulated sector (e.g., transportation, communications) has demonstrated for decades, and even centuries.

From a positive political economy perspective, the appeal of such cross subsidy schemes to regulators is great. As Sam Peltzman pointed out in his amazing 1976 JLE piece “Toward a More General Theory of Regulation,” regulators systematically attempt to suppress cost-based price differences in order to redistribute rents to gain political support. The main impetus for deregulation is innovation that exploits gains from trade from circumventing cross subsidy schemes–deregulation in banking (Regulation Q) and telecoms are great examples of this.

So who would the beneficiaries of this cross-subsidization scheme be? Two major SEC constituencies–exchanges, and large institutional traders.

In other words, all this chin pulling about PFOF and dark markets is politics as usual. Furthermore, it is politics as usual in the cynical sense that the supposed beneficiaries of regulatory concern (retail traders) are the ones who will be shtupped.

Gensler also expressed dismay at the concentration in the PFOF market: yeah, he’s looking at you, Kenneth. Getting the frequency?

Although Gensler’s systemic risk concern might have some justification, he still fails to ask the foundational question: why is it concentrated? He doesn’t ask, so he doesn’t answer, instead saying: “Market concentration can deter healthy competition and limit innovation.”

Well, concentration can also be the result of healthy competition and innovation (h/t the great Harold Demsetz). Until we understand the existing concentration we can’t understand whether it’s a bug or feature, and hence what the appropriate policy response is.

Gensler implicitly analogizes, say, Citadel to Facebook or Google, which harvest customer data and can exploit network effects that drive concentration. The analogy seems very strained here. Retail order flow is cheap to service because it is uninformed. Citadel (or any other purchaser of order flow) isn’t learning something about consumers that it can use to target ads at them or the like. The main thing it is learning is which sources of order flow are uninformed, and which are informed–so it can avoid paying to service the latter.

Again, before plunging ahead, it’s best to understand what the potential agglomeration economies of servicing order flow actually are.

Gensler returns to one of his favorite subjects–clearing–at the end of his talk. He advocates reducing settlement time from T+2: “I believe shortening the standard settlement cycle could reduce costs and risks in our markets.”

This is a conventional–and superficial–view that suggests that when it comes to clearing, Gensler is like the Bourbons: he’s learned nothing, and forgotten nothing.

As I wrote at the peak of the GameStop frenzy (which may repeat with AMC or some other meme stock), shortening the settlement cycle involves serious trade-offs. Moreover, it is by no means clear that it would reduce costs or reduce risks. The main impact would be to shift costs, and transform risks in ways that are not necessarily beneficial. Again, shortening the settlement cycle involves a substitution of liquidity risk for credit risk–just as central clearing does generally, a point which Gensler was clueless about in 2010 and is evidently equally clueless about a decade later.

So GiGi hasn’t really changed. He is still offering nostrums based on superficial diagnoses. He fails to ask the most fundamental questions–the Chesterton’s Fence questions. That is, understand why things are the way they are before proposing to change them.


December 27, 2018

The Market Is Down! Round Up the Usual Suspects!

Filed under: Economics,Exchanges,HFT — cpirrong @ 7:38 pm

Every time there is a major market selloff–like now–there is a Casablanca-like rush to round up the usual suspects. Treasury Secretary Steven Mnuchin blamed the Volcker Rule and HFT. This WSJ article blames algos (including HFT), but throws the kitchen sink in for good measure.

Truth be told, virtually every major market drop is unexplained at the time, and even well after, which only spurs the search for villains and scapegoats. There was no obvious spark for the Crash of ’87, and in the years since, many suspects have been named but none have been convicted. The same is true of the Crash of ’29. Perhaps the best effort–interesting, but not definitive–is George Bittlingmayer’s attribution of Black Tuesday to an unexpected shift in antitrust policy under the Hoover administration. But that came 65 years after the event!

The most recent selloff is no exception. The WSJ article lists a variety of bearish developments, but any such exercise smacks of post hoc, ergo propter hoc “reasoning.” Further, the article quotes various people who claim that the price decline is difficult to square with fundamental economic data–welcome to the club! The same is true for 1987, 1929, and other major declines. Recall Paul Samuelson’s aphorism: the stock market predicted nine out of the last five recessions.

Part of the difficulty is that stock prices depend on expected cash flows, and expected returns, both of which can vary due to factors that are difficult to observe in public data. Asset pricing economists have a lot of theories of time varying expected returns–hinging on theories of time varying risk premia–none of which have strong empirical support. Modest changes in risk premia/expected returns can cause big valuation changes. Recent conditions (political/geopolitical risk, monetary policy changes) plausibly have affected risk premia, but our ability to map these relationships is virtually nonexistent, so at best we can formulate largely untestable hypotheses.

And untestable hypotheses are effectively speculations and opinions, and like certain body parts, everybody has one.
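The magnitudes involved are worth a quick illustration: modest changes in expected returns really do produce big valuation changes, as the textbook Gordon growth (dividend discount) formula shows. The dividend, discount rate, and growth rate below are illustrative numbers, not estimates:

```python
# Gordon growth model: price of a claim paying dividend D growing at rate g,
# discounted at required return r, is P = D / (r - g). Illustrative numbers.

def gordon_price(dividend=1.0, r=0.07, g=0.03):
    assert r > g, "formula requires r > g"
    return dividend / (r - g)

p_before = gordon_price(r=0.07)  # required return 7%
p_after = gordon_price(r=0.08)   # risk premium rises by one point

drop = 1 - p_after / p_before
print(f"price falls from {p_before:.1f} to {p_after:.1f}: a {drop:.0%} decline")
```

A one-percentage-point rise in the required return, with no change whatsoever in expected cash flows, knocks 20 percent off the valuation–a crash-sized move from a shift that is essentially unobservable in public data.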

Given these realities, most major asset price movements are difficult to explain.

I vividly remember the aftermath of the 1987 Crash, when I was a PhD student at Chicago. Gene Fama distributed a Mandelbrot article to all PhD students. The article presented a simple model in which long periods of price increases are followed by crashes. As I recall, the essence of the model was that if good news was received today, it was likely that there would be good news tomorrow, but if good news was not received today, the likelihood of receiving good news tomorrow was lower. In essence, it is a regime switching model, and a switch from a good news regime to a bad news regime leads to a big valuation change, due to the transition probabilities.

Fama’s point in distributing the article was to emphasize that discontinuous changes in prices are not inconsistent with a “rational” market. Seemingly small fundamental shifts can lead to big price changes.
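The gist of that model can be worked out in a minimal two-state calculation. The discount factor, transition probabilities, and dividend levels below are my own illustrative choices, not Mandelbrot’s:

```python
# Two-state Markov regime-switching valuation. In the "good" regime, good news
# is persistent, and prices embed the expectation that good times continue.
# A single switch to the "bad" regime forces a discrete revaluation.
# All parameters are illustrative.

beta = 0.99               # one-period discount factor
p_gg, p_bb = 0.95, 0.90   # regime persistence: P(good->good), P(bad->bad)
d_good, d_bad = 1.0, 0.2  # per-period dividend in each regime

# Prices solve P_s = d_s + beta * sum_s' T[s, s'] * P_s', a 2x2 linear
# system, solved here by Cramer's rule.
a11, a12 = 1 - beta * p_gg, -beta * (1 - p_gg)
a21, a22 = -beta * (1 - p_bb), 1 - beta * p_bb
det = a11 * a22 - a12 * a21
p_good = (d_good * a22 - a12 * d_bad) / det
p_bad = (a11 * d_bad - a21 * d_good) / det

crash = 1 - p_bad / p_good
print(f"price in good regime: {p_good:.1f}, in bad regime: {p_bad:.1f}")
print(f"one day of bad news reprices the market down {crash:.1%}")
```

Even though the dividend falls only modestly in any single period, the regime switch changes the whole expected future path, so the price gaps down several percent in one step–a crash with no dramatic contemporaneous news.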

Again, a hypothesis–and a virtually untestable one.

What about blaming algos, a la Mnuchin and the WSJ? Well, blaming HFT–directly, anyways–makes no sense. Yes, HFT is programmed to respond to market signals, but it is negative feedback by nature. It tends to be stabilizing, not de-stabilizing.

There may be an indirect connection: HFT liquidity supply can dry up when order flow becomes toxic, and the decline in liquidity makes prices more sensitive to order flow, leading to larger price movements. The Flash Crash is a classic example of this. But that’s not unique to HFT. It is inherent to market making, and HFT basically puts what is in a market maker’s (e.g., old-time floor trader’s) synapses into code. Market makers pulling back–or shutting down altogether–occurred long before markets went electronic, and before anybody even dreamed of HFT.
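That liquidity channel can be sketched with a linear price-impact rule (a Kyle-lambda-style toy model; the depths and order flows are hypothetical):

```python
# Linear price-impact sketch: each unit of net order flow moves price by
# lambda = 1 / depth. When market makers cut quoted depth in half, the same
# sequence of sell orders produces twice the price move. Hypothetical numbers.

def price_path(start_price, net_flows, depth):
    """Price after a sequence of net order flows against a given depth."""
    price = start_price
    impact = 1.0 / depth  # Kyle's lambda: impact per unit of net flow
    for flow in net_flows:
        price += impact * flow
    return price

sell_wave = [-50, -80, -30]  # identical toxic sell flow in both scenarios

normal = price_path(100.0, sell_wave, depth=400.0)    # deep book
stressed = price_path(100.0, sell_wave, depth=200.0)  # depth cut in half

print(f"deep book:    {normal:.2f}")
print(f"thinned book: {stressed:.2f}")
```

The order flow is identical in both runs; only the depth differs, and the price decline doubles. That is the sense in which liquidity withdrawal amplifies moves without being their root cause.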

If liquidity has declined–and the WSJ points to some limited evidence on this point–it is likely a response to market conditions, rather than a cause thereof. It’s something that occurs in almost every period of elevated volatility. It’s more of an effect of some common cause than an independent exogenous cause.

Further, by virtually every measure, the increasing automation in markets has led to greater liquidity. Much of the bitching–including in some quotes in the WSJ article–emanates from traditional liquidity suppliers who have lost out to more efficient competitors. Believe me, if order flow had become more toxic, these guys would have pulled back too, and probably more severely than HFT has done.

What can exacerbate market movements is positive feedback trading strategies. Portfolio insurance during the 1987 Crash is a classic example. The WSJ article points at algorithmic momentum trading strategies, and indeed these are positive feedback in nature. But they are not unique to algos: meatware implemented momentum/trend following strategies long before they were embedded in software. Momentum trading is something else that long predates the rise of the machines.

Several quotes in the WSJ article made me laugh. One was: “’Human beings tend not to react this fast and violently.’” Really? Heard of Black Monday? Black Tuesday? Silver Thursday? Black Friday? I’m sure there’s a Color Wednesday to fill in the week, but none comes to mind. Regardless, the point remains: human beings reacted rapidly and violently long before trading machines were even dreamt of.

Here’s another: “Today, when the computers start buying, everyone buys; when they sell, everyone sells.”

This is called “not an equilibrium.”

The bottom line is that the stock market sometimes declines substantially, without any obvious cause. Indeed, the cause(s) of some of the biggest, fastest drops remain elusive decades after they occurred. This is true across virtually every institutional and technological trading environment, making it less likely that any particular selloff is uniquely attributable to a change in technology. Furthermore, large market moves in the absence of any decisive event or piece of news are not inconsistent with market “rationality”, nor necessarily due to some behavioral anomaly (which is inherently human, by the way).

But humans crave explanations for phenomena like big movements in the stock market, and this demand calls forth supply. That the explanations are for the most part untestable and hence not scientific only means that there is little check on this supply. Anybody can offer an explanation, which likely cannot be proven wrong. So why not? But if you understand that mechanism, you should also understand that you shouldn’t pay much attention.


June 15, 2016

Where’s the CFTC’s Head AT?: Fools Rush in Where Angels Fear to Tread

Filed under: Commodities,Derivatives,Economics,Exchanges,Financial crisis,HFT,Regulation — The Professor @ 1:07 pm
The CFTC is currently considering Regulation AT (for Automated Trading). It is the Commission’s attempt to get a handle on HFT and algorithmic trading.

By far the most controversial aspect of the proposed regulation is the CFTC’s demand that algo traders provide the Commission with their source code. Given the sensitivity of this information, algo/HFT firms are understandably freaking out over this demand.

Those concerns are certainly legitimate. But what I want to ask is: what’s the point? What can the Commission actually accomplish?

The Commission argues that by reviewing source code, it can identify possible coding errors that could lead to “disruptive events” like the 2012 Knight Capital fiasco. Color me skeptical, for at least two reasons.

First, I seriously doubt that the CFTC can attract people with the coding skill necessary to track down errors in trading algorithms, or can devote the time necessary. Reviewing the code of others is a difficult task, usually harder than writing the code in the first place; the code involved here is very complex and changes frequently; and the CFTC is unlikely to be able to devote the resources necessary for a truly effective review. Further, who has the stronger incentive? A firm that can be destroyed by a coding error, or some GS-something? (The prospect of numerous individuals perusing code creates the potential for a misappropriation of intellectual property, which is what really has the industry exercised.) Not to mention that if you really have the chops to code trading algos, you’ll work for a prop shop or Citadel or Goldman or whomever and make much more than a government salary.

Second, and more substantively, reviewing individual trading algorithms in isolation is of limited value in determining their potentially disruptive effects. These individual algorithms are part of a complex system, in the technical/scientific meaning of the term. These individual pieces interact with one another, and create feedback mechanisms. Algo A takes inputs from market data that is produced in part by Algos B, C, D, E, etc. Based on these inputs, Algo A takes actions (e.g., enters or cancels orders), and Algos B, C, D, E, etc., react. Algo A reacts to those reactions, and on and on.

These feedbacks can be non-linear. Furthermore, the dimensionality of this problem is immense. Basically, an algo says if the state of the market is X, do Y. Evaluating algos in toto, the state of the market can include the current order book of every product, as well as past order books (both explicitly, as a conditioning variable in some algorithms, and implicitly, through the empirical analysis that developers use to find profitable trading rules based on historical market information), as well as market news. This state changes continuously.

Given this dimensionality and feedback-driven complexity, evaluating trading algorithms in isolation is a fool’s errand. Stability depends on how the algorithms interact. You cannot determine the stability of an emergent order, or its vulnerability to disruption, by looking at the individual components.
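The point that component-by-component review misses interaction effects has a standard linear-systems illustration. Below, two hypothetical feedback rules each damp their own state (and so would pass an in-isolation stability check), yet the coupled system is explosive. The coefficients are chosen purely to make the arithmetic transparent:

```python
# Two linear "algos," each damping its own state (coefficient 0.9 < 1), so
# each looks stable when reviewed in isolation. But each also reacts to the
# other's output with coefficient 0.6, and the coupled system has spectral
# radius 0.9 + 0.6 = 1.5 > 1: small perturbations grow without bound.

SELF_DAMP = 0.9  # each algo's response to its own state
CROSS = 0.6      # each algo's response to the other's state

def step(x, y, coupling):
    return SELF_DAMP * x + coupling * y, SELF_DAMP * y + coupling * x

def run(steps, coupling, x=1.0, y=0.0):
    for _ in range(steps):
        x, y = step(x, y, coupling)
    return max(abs(x), abs(y))

isolated = run(20, coupling=0.0)   # each algo alone: decays toward zero
coupled = run(20, coupling=CROSS)  # interacting: blows up

print(f"isolated after 20 steps: {isolated:.3f}")
print(f"coupled after 20 steps:  {coupled:.1f}")
```

A regulator who audited each rule separately would certify both as stable; the instability lives entirely in the interaction, which is precisely what code review of individual algorithms cannot see.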

And since humans are still part of the trading ecosystem, how software interacts with meatware matters too. Fat finger problems are one example, but just normal human reactions to market developments can be destabilizing. This is true when all of the actors are human: it’s also true when some are human and some are algorithmic.

Look at the Flash Crash. Even in retrospect it has proven impossible to establish definitively the chain of events that precipitated it and caused it to unfold the way that it did. How is it possible to evaluate prospectively the stability of a system under a vastly larger set of possible states than those that existed on the day of the Flash Crash?

These considerations mean that the CFTC–or any regulator–has little ability to improve system stability even if given access to the complete details of important parts of that system. But it’s potentially worse than that. Ill-advised changes to pieces of the system can make it less stable.

This is because in complex systems, attempts to improve the safety of individual components of the system can actually increase the probability of system failure.

In sum, markets are complex systems/emergent orders. The effects of changes to parts of these systems are highly unpredictable. Furthermore, it is difficult, and arguably impossible, to predict how changes to individual pieces of the system will affect the behavior of the system as a whole under all possible contingencies, especially given the vastness of the set of contingencies.

Based on this reality, we should be very chary about letting any regulator attempt to micromanage pieces of this complex system. Indeed, any regulator should be reluctant to undertake this task. But regulators frequently overestimate their competence, and financial regulators have proven time and again that they really don’t understand that they are dealing with a complex system/emergent order that does not respond to their interventions in the way that they intend. But fools rush in where angels fear to tread, and if the Commission persists in its efforts to become the Commissar of Code, it will be playing the fool–and it will not just be algo traders that pay the price.


October 23, 2015

Massad’s Recent Speech: Flashy, But Misleading, and Beside the Point

Filed under: Derivatives,Economics,HFT,Regulation — The Professor @ 8:53 pm
The other day CFTC Chairman Timothy Massad gave a speech about “flash events” in futures markets that has attracted a lot of attention. Most of the attention was given to Massad’s claim that there had been 35 flash events in WTI futures this year, and between 9 and 25 events per year combined in corn, crude, e-minis, 30 year Treasuries, gold, and the Euro from 2010-2014. Flashy results indeed. But the method for identifying them is misleading, and makes big flash moves seem more likely than they really are.

These results, and specifically the WTI finding for 2015, are an artifact of the definition of a flash event (which Massad acknowledged is somewhat arbitrary):

[E]pisodes in which the price of a contract moved at least 200 basis points within a trading hour— but returned to within 75 basis points of the original or starting price within that same hour.

The problem is that the number of flash events will depend on volatility.  Two percent moves are more likely in high volatility environments, or for high volatility contracts.

This is clearly what’s going on in oil. As this chart of the oil volatility index (OVX) shows, oil volatility was extremely low through most of 2014, but increased sharply in late-2014 through mid-2015, and then has picked up again in recent months:

[Chart: CBOE Crude Oil Volatility Index (OVX), 2014-2015]

With volatility in the 60-70 percent annualized range, you will have a much greater likelihood of a 200 basis point move (and a subsequent 125 bp or so reversal) than with 15 percent vols. The flashy 2015 crude oil results are a reflection of this year’s high underlying volatility, which has been fundamentals-driven, rather than of the microstructure of modern electronic markets.

The 200/75 basis point standard was chosen because that’s what happened in the Treasury market on 15 October, 2014. But a 200 basis point move in something like Treasuries, which have a volatility of around 10 percent, is a move of far more standard deviations than a 200 basis point move in crude, especially at a volatility of 70 percent. So the more appropriate cutoff would have been standard deviations (sigmas) rather than percent. But if Massad had done that, he would have identified a lot fewer events, and his speech would have been met with yawns, rather than the attention it has received.
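The sigma point is easy to quantify. Assuming roughly 252 trading days a year and 20 trading hours a day (the same rough figures used in the perspective arithmetic below), and the standard square-root-of-time scaling of volatility:

```python
import math

# Convert an annualized volatility to an hourly standard deviation and ask
# how many sigmas a 200 basis point hourly move represents. The day/hour
# counts are rough assumptions; scaling uses the usual sqrt-of-time rule.

TRADING_DAYS = 252
HOURS_PER_DAY = 20

def hourly_sigma(annual_vol):
    return annual_vol / math.sqrt(TRADING_DAYS * HOURS_PER_DAY)

MOVE = 0.02  # the 200 bp threshold

z_treasuries = MOVE / hourly_sigma(0.10)  # ~10% vol market
z_crude = MOVE / hourly_sigma(0.70)       # ~70% vol market

print(f"200 bp in Treasuries: {z_treasuries:.1f} sigma")
print(f"200 bp in crude oil:  {z_crude:.1f} sigma")
```

A roughly two-sigma hourly move in crude is routine; a fourteen-sigma move in Treasuries is an entirely different animal. That is why a fixed basis-point cutoff mechanically inflates the count of “flash events” in high-volatility contracts.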

Let’s also put things in perspective. The contracts considered trade 17-23 hours per day. 252 days a year times (say) 20 hours per day times 6 contracts is over 30,000 contract-hours a year; at 20 events per year, that works out to odds of roughly .07 percent of an event in any given hour. Using a more realistic sigma standard would reduce the odds of an event comparable to the Treasury flash event to a much smaller number than that.

Put differently, the Treasury event was truly anomalous, and Massad’s way of analyzing the data makes it seem more common than it really is. To get a flashy, eye-catching result, Massad had to use a misleading standard to identify flash events. Objects in his mirror are smaller than they appear.

The taking off point for Massad’s speech was the report on the 2014 Treasury flash crash. Like the infamous May, 2010 equity flash crash, there was a sharp decline in liquidity leading up to the price break. Massad attributes this to the way algorithms are programmed:

We also know that as with humans, the modern algorithms have risk management capabilities embedded within them. So when there is a moment of sudden, unexpected volatility, it may not be surprising that some in the market pull back – potentially faster than a human can.

The report describes how on October 15, some algos pulled back by widening their spreads and others reduced the size of their trading interest. Whether such dynamics can further increase volatility in an already volatile period is a question worth asking, but a difficult one to answer. It is also very difficult for individual institutions of any type to remain in the book, opposing price headwinds, or worse, to try and catch the proverbial falling knife. For many, this decision can be the difference between risk mitigation and significant losses.

This makes perfect sense. Some algorithms–especially HFT algorithms–attempt to determine when order flow is becoming toxic (and hence adverse selection risks are elevated) and reduce exposures when it does. Holding depth constant, greater information flow makes prices more volatile, and the reduction in liquidity that the greater information flow causes makes prices more volatile still.

This means that looking at the depth reductions and associated increases in volatility focuses on a symptom, not the underlying cause. What deserves more attention is what causes the increase in the informativeness of order flow that makes the liquidity suppliers cut back. This hasn’t been done in any study, to my knowledge, nor is it likely to be possible to do so.

And as Massad notes, this phenomenon is not unique to electronic markets. Meat puppet market makers also take a powder when adverse selection risks rise:

Contrary to what some have suggested, I suspect it was difficult for market makers in the pre-electronic era to routinely maintain tight and deep spreads during volatile conditions. They likely took long coffee breaks.*

It’s beyond suspicion, actually. It happens. Look at the Crash of ’87 when locals fled the pits and OTC market makers stopped answering their phones.

These reductions in liquidity are inherent in any trading environment where private information is important, and the rate of information flow varies.  Regardless of trading technology or market microstructure, liquidity suppliers will cut the sizes of their quotes, or stop quoting altogether, when order flow turns very toxic.

Given all this, Massad’s policy prescriptions are oddly disconnected from the flash phenomenon that prompted his talk:

The focus of our forthcoming proposals will be on the automation of order origination, transmission and execution – and the risks that may arise from such activity. These risks can come about due to malfunctioning algorithms, inadequate testing of algos, errors and similar problems. We are concerned about the potential for disruptive events and whether there are adequate measures to ensure effective compliance with risk controls and other requirements.

Now of course, you could have errors before, in the days of pit traders and specialists. You could have failures of systems in less sophisticated times. But generally the consequences were of lesser magnitude than what we may face today. And that’s in large part because the errors were easier to identify, arrest or cure before they caused widespread damage.

I expect that our proposals will include requirements for pre-trade risk controls and other measures with respect to automated trading. These will apply regardless of whether the automated trading is high or low frequency. We will not attempt to define high-frequency trading specifically. I expect that we will propose controls at the exchange level, and also at the clearing member and trading firm level.

 

That’s all great, but really beside the point. If rogue or fat-fingered algos were the problems in any of the alleged flash events Massad identified (including the Treasury event of a year ago), he would have been able to say so. But he admits that the causes of the various events are all unknown. So it’s a bait-and-switch to pose the problem of flash crashes, and then advance remedies that have nothing to do with them. It’s the regulatory equivalent of applying leeches.

In sum, Massad overstates the flash event problem, and offers policies that have nothing to do with it. The fact remains that these things are probably beyond a policy fix anyways. They inhere in the nature of the trading of financial instruments when order flow can become toxic.

*Gillian Tett of the FT gets Massad’s point exactly backwards:

The crucial point is that these automated trading programs — like Hal — lack human judgment. When a crisis erupts and prices churn, computers do not simply “take a long coffee break”, as Mr Massad says, and wait for common sense to return; instead they tend to accelerate trading, fuelling those flash crash swings.

Sheesh. Please read, Gillian. Massad’s point is that the algos do take a metaphorical coffee break. They don’t speed up, they pull back.

 

 


October 10, 2015

Igor Gensler Helps the Wicked Witch of the West Wing Create Son of Frankendodd

Hillary Clinton has announced her program to reform Wall Street. Again.

The actual author of the plan is said to be my old buddy, GiGi: Gary Gensler.

Gensler, if you will recall, was the Igor to Dr. Frankendodd, the loyal assistant who did the hard work to bring the monster to life. Now he is teaming with the Wicked Witch of the West Wing to create Son of Frankendodd.

There are a few reasonable things in the proposal. A risk charge on bigger, more complex institutions makes sense, although the details are devilish.

But for the most part, it is ill-conceived, as one would expect from Gensler.

For instance, it proposes regulating haircuts on repo loans. As I said frequently in the 2009-2010 period, attempting to impose these sorts of requirements on heterogeneous transactions is a form of price control that will lead some risks to be underpriced and some risks to be overpriced. This will create distorted incentives that are likely to increase risks and misallocations, rather than reduce them.

A tax on HFT has received the most attention:

The growth of high-frequency trading (HFT) has unnecessarily burdened our markets and enabled unfair and abusive trading strategies that often capitalize on a “two-tiered” market structure with obsolete rules. That’s why Clinton would impose a tax targeted specifically at harmful HFT. In particular, the tax would hit HFT strategies involving excessive levels of order cancellations, which make our markets less stable and less fair.

This is completely wrongheaded. HFT has not “burdened” our markets. It has been a form of creative destruction that has made traditional intermediaries obsolete, and in so doing has dramatically reduced trading costs. Yes, a baroque market structure in equities has created opportunities for rent seeking by HFT firms, but that structure was created by regulations, RegNMS in particular. So why not fix the rules (which Hillary and Gensler acknowledge are problematic) rather than kneecap those who are responding to the incentives the rules create?

Furthermore, the particular remedy proposed here is completely idiotic. “Excessive levels of order cancellations.” Just who is capable of determining what is “excessive”? Furthermore, the ability to cancel orders rapidly is exactly what allows HFT to supply liquidity cheaply, because it limits their vulnerability to adverse selection. High rates of order cancellation are a feature, not a bug, in market making.

It is particularly ironic that Hillary pitches this as a matter of protecting “everyday investors.” FFS, “everyday investors” trading in small quantities are the ones who have gained most from the HFT-caused narrowing of bid-ask spreads.

Hillary also targets dark pools, another target of popular ignorance. Dark pools reduce trading costs for institutional investors, many of whom are investing the money of “everyday” people.

The proposal also gives Gensler an opportunity to ride one of his hobby horses, the Swaps Pushout Rule. This is another inane idea that is completely at odds with its purported purpose. It breaks netting sets and if anything makes the financial system more complex, and certainly makes financial institutions more complex. It also discriminates against commodities and increases the costs of managing commodity price risk.

The most bizarre part of the proposal would require financial institutions to demonstrate to regulators that they can be managed effectively.

Require firms that are too large and too risky to be managed effectively to reorganize, downsize, or break apart. The complexity and scope of many of the largest financial institutions can create risks for our economy by increasing both the likelihood that firms will fail and the economic damage that such failures can cause.[xiv] That’s why, as President, Clinton would pursue legislation that enhances regulators’ authorities under Dodd-Frank to ensure that no financial institution is too large and too risky to manage. Large financial firms would need to demonstrate to regulators that they can be managed effectively, with appropriate accountability across all of their activities. If firms can’t be managed effectively, regulators would have the explicit statutory authorization to require that they reorganize, downsize, or break apart. And Clinton would appoint regulators who would use both these new authorities and the substantial authorities they already have to hold firms accountable.

Just how would you demonstrate this? What would be the criteria? Why should we believe that regulators have the knowledge or expertise to make these judgments?

I have a Modest Proposal of my own. How about a rule that requires legislators and regulators to demonstrate that they have the competence to manage entire sectors of the economy, and in particular, have the competence to understand, let alone manage, an extraordinarily complex emergent order like the financial system? If some firms are too complex to manage, isn’t an ecosystem consisting of many such firms interacting in highly non-linear ways exponentially more complex to control, especially through the cumbersome process of legislation and regulation? Shouldn’t regulators demonstrate they are up to the task?

But of course Gensler and his ilk believe that they are somehow superior to those who manage financial firms. They are oblivious to the Knowledge Problem, and can see the speck in every banker’s eye, but don’t notice the log in their own.

People like Gensler and Hillary, who are so hubristic to presume that they can design and regulate the complex financial system, are by far the biggest systemic risk. Frankendodd was bad enough, but Son of Frankendodd looks to be an even worse horror show, and is almost guaranteed to be so if Gensler is the one in charge, as he clearly aims to be.


July 15, 2015

The Joint Report on the Treasury Spike: Unanswered Questions, and You Can’t Stand in the Same River Twice

Filed under: Derivatives,Economics,HFT,Regulation — The Professor @ 11:39 am
The Treasury, Fed (Board of Governors and NYFed), SEC, and CFTC released a joint report on the short-lived spike in Treasury prices on 15 October, 2014. The report does a credible job laying out what happened, based on a deep dive into the high frequency data. But it does not answer the most interesting questions.

One thing of note, which shouldn’t really need mentioning, but does, is the report’s documentation of the diversity of algorithmic/high frequency trading carried out by what the report refers to as PTFs, or proprietary trading firms. This diversity is illustrated by the fact that these firms were both the largest passive suppliers of liquidity and the largest aggressive takers of liquidity during the October “event.” Indeed, the report documents the diversity within individual PTFs: there was considerable “self-trading,” whereby a particular PTF was on both sides of a trade. Meaning, presumably, that these PTFs had both aggressive and passive algos working simultaneously. So talking about “HFT” as some single, homogeneous thing is a radical oversimplification, and misleading.

But let’s cut to the chase: Whodunnit? The report’s answer?: It’s complicated. The report says there was no single cause (e.g., a fat finger problem or whale trader).

This should not be surprising. In emergent orders, which financial markets are, large changes can occur in response to small (and indeed, very small) shocks: these systems can go non-linear. Complex feedbacks make attribution of cause impossible.  Although there is much chin-pulling (both in the report, and more generally) about the impact of technology and changes in market structure, the fundamental sources of feedback, and the types of participants in the ecosystem, are largely independent of technology.

Insofar as the events of 15 October are concerned, the report documents a substantial decline in market depth on both the futures market, and the main cash Treasury platforms (BrokerTec and eSpeed) in the hour following the release of the retail sales report. The decline in depth was due to PTFs reducing the size (but not the price) of their limit orders, and banks/dealers widening their quotes. Then, starting about 0930, there was a substantial order imbalance to the buy side on the futures: this initial order imbalance was driven primarily by banks/dealers. About 3 minutes later, aggressive PTFs kicked in on the buy side on both futures and the cash platforms.  Buying pressure peaked around 0939, and then both aggressive PTFs and the banks/dealers switched to the sell side. Prices rose when aggressors bought, and fell when they sold.

None of this is particularly surprising, but the report begs the most important questions. In particular, what caused the acute decline in depth in the hour leading up to the big price movement, and what triggered the surge in buy orders?

The first conjecture that comes to mind is related to informed trading and adverse selection. For some reason, PTFs (or more accurately, their algos) in particular apparently detected an increase in the toxicity of order flow, or observed some other information that implied that adverse selection risk was increasing, and they reduced their quote sizes to reduce the risk of being picked off.

Did order flow become more toxic in the roughly hour-long period following the release of the retail number? The report does not investigate that issue, which is unfortunate. Since liquidity declines were also marked in the minutes before the Flash Crash, it is imperative to have a better understanding of what drives these declines. There are metrics of toxicity (i.e., order flow informativeness). Liquidity suppliers (including HFT) monitor it in real time.  Understanding these events requires an analysis of whether variations in toxicity drive variations in liquidity, and in particular marked declines in depth.
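To make “toxicity” concrete: one family of metrics buckets trades by volume and measures the signed imbalance per bucket, in the spirit of (but much cruder than) VPIN. A deliberately simplified sketch, in which the function name, bucket size, and trade-classification convention are all illustrative assumptions, not anything from the report:

```python
# A deliberately simplified volume-bucket toxicity metric, in the
# spirit of (but much cruder than) VPIN. Names, bucket sizes, and the
# trade-classification convention are illustrative assumptions.

def flow_toxicity(trades, bucket_size=1000, window=5):
    """trades: iterable of (volume, side), side = +1 buy / -1 sell.
    Returns, per completed volume bucket, the rolling mean of
    |buy volume - sell volume| / bucket_size over `window` buckets."""
    imbalances, buy, sell, filled = [], 0, 0, 0
    for volume, side in trades:
        while volume > 0:
            take = min(volume, bucket_size - filled)
            if side > 0:
                buy += take
            else:
                sell += take
            filled += take
            volume -= take
            if filled == bucket_size:   # bucket complete: record imbalance
                imbalances.append(abs(buy - sell) / bucket_size)
                buy = sell = filled = 0
    return [sum(imbalances[max(0, i - window + 1):i + 1]) /
            (i - max(0, i - window + 1) + 1)
            for i in range(len(imbalances))]

# Balanced two-sided flow reads as benign; one-sided flow as maximally toxic.
print(flow_toxicity([(500, 1), (500, -1)] * 4))   # [0.0, 0.0, 0.0, 0.0]
print(flow_toxicity([(1000, 1)] * 3))             # [1.0, 1.0, 1.0]
```

A liquidity supplier watching a measure like this in real time would shrink quoted size as the imbalance rises, which is exactly the depth withdrawal the report documents.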

Private information could also explain a surge in order imbalances. Those with private information would be the aggressors on the side of the net imbalance. In this case, the first indication of an imbalance is in the futures, and comes from the banks and asset managers. PTF net buying kicks in a few minutes later, suggesting they were extracting information from the banks’ and asset managers’ trading.

This raises the question: what was the private information, and what was the source of that information?

One problem with the asymmetric information story is the rapid reversal of the price movement. Informed trades have persistent effects. I’ve even seen data from episodes in which arguably manipulative (and hence uninformed) trades that could not be identified as such had persistent price impacts. So did new information arrive that led the buyers to start selling?

A potentially more problematic explanation of events (and I am just throwing out a hypothesis here) is that increased order flow toxicity due to informed trading eroded liquidity, and this created the conditions in which pernicious algorithms could thrive. For instance, momentum triggering (and momentum following) algorithms could have a bigger impact when the market lacks depth, as then smallish imbalances can move prices substantially, which then triggers trend following. When prices get sufficiently out of line, these algos might turn off or switch directions, or other contrarian algorithms might kick in.

These questions cannot be answered without knowing the algorithms, on both the passive and aggressive sides. What information did they have, and how did they react to it? Right now, we are just seeing their shadows. To understand the full chronology here–the decline in depth/liquidity, the surge in order imbalances from banks/dealers around 0930, the following surge in aggressive PTF buying, and the reversal in signed net order flow–it is necessary to understand in detail the entire algo ecosystem. We obviously don’t understand it, and likely never will.

Even if it was possible to go back and get a granular understanding of the algorithms and their interactions, this would be of limited utility going forward because the emergent ecosystem evolves continuously and rapidly. Indeed, no doubt the PTFs and banks carried out their own forensic analyses of the events of 15 October, and changed their algorithms accordingly. This means that even if we knew the  causal connections and feedbacks that produced the abrupt movement and reversal in Treasury prices, that knowledge will not really permit anticipation of future episodes, as the event itself will have changed the system, its connections, and its feedbacks. Further, independent of the effect of 15 October, the system will have evolved in the past 9 months. Given the dependence of the behavior of such systems on their very fine details, the system will behave differently today than it did then.

In sum, the joint report provides some useful information on what happened on 15 October, 2014, but it leaves the most important questions unanswered. What’s more, the answers regarding this one event would likely be only modestly informative going forward because that very event likely caused the system to change. Pace Heraclitus, when it comes to financial markets, “You cannot step twice into the same river; for other waters are continually flowing in.”



April 24, 2015

A Matter of Magnitudes: Making Matterhorn Out of a Molehill

Filed under: Derivatives,Economics,HFT,Politics,Regulation — The Professor @ 10:47 am
The CFTC released its civil complaint in the Sarao case yesterday, along with the affidavit of Cal-Berkeley’s Terrence Hendershott. Hendershott’s report makes for startling reading. Rather than supporting the lurid claims that Sarao’s actions had a large impact on E Mini prices, and indeed contributed to the Flash Crash, the very small price impacts that Hendershott quantifies undermine these claims.

In one analysis, Hendershott calculates the average return in a five second interval following the observation of an order book imbalance. (I have problems with this analysis because it aggregates all orders up to 10 price levels on each side of the book, rather than focusing on away-from-the market orders, but leave that aside for a moment.) For the biggest order imbalances-over 3000 contracts on the sell side, over 5000 on the buy side-the return impact is on the order of .06 basis points. Point zero six basis points. A basis point is one-one-hundredth of a percent, so we are talking about 6 ten-thousandths of one percent. On the day of the Flash Crash, the E Mini was trading around 1165. A .06 basis point return impact therefore translates into a price impact of .007, which is one-thirty-fifth of a tick. And that’s the biggest impact, mind you.

To put the comparison another way, during the Flash Crash, prices plunged about 9 percent, that is, 900 basis points. Hendershott’s biggest measured impact is therefore 4 orders of magnitude smaller than the size of the Crash.
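The arithmetic is easy to check. A minimal sketch, using the E-mini level and the 0.25-point tick size cited in the post:

```python
import math

# Reproducing the post's back-of-the-envelope arithmetic: translate a
# return impact quoted in basis points into E-mini price and tick terms.
# (E-mini S&P 500 tick size is 0.25 index points.)

PRICE = 1165.0   # approximate E-mini level on 6 May 2010
TICK = 0.25      # minimum price increment

def bp_to_ticks(bp):
    """Return (index points, ticks) for a return impact of `bp` basis points."""
    points = PRICE * bp * 1e-4
    return points, points / TICK

points, ticks = bp_to_ticks(0.06)   # Hendershott's largest imbalance impact
print(round(points, 3), round(ticks, 3))   # 0.007 0.028 -> about 1/35 of a tick

# The ~900 bp Flash Crash decline versus the 0.06 bp impact:
print(round(math.log10(900 / 0.06)))       # 4 orders of magnitude
```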

This analysis does not take into account the overall cumulative impact of the entry of an away-from-the market order, nor does it account for the fact that orders can affect prices, prices can affect orders, and orders can affect orders. To address these issues, Hendershott carried out a vector autoregression (VAR) analysis. He estimates the cumulative impact of an order at levels 4-7 of the book, accounting for direct and indirect impacts, through an examination of the impulse response function (IRF) generated by the estimated VAR.* He estimates that the entry of a limit order to sell 1000 contracts at levels 4-7 “has a price impact of roughly .3 basis points.”

Point 3 basis points. Three one-thousandths of one percent. Given a price of 1165, this is a price impact of .035, or about one-seventh of a tick.
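For readers unfamiliar with the method: the cumulative impact Hendershott reports is read off an impulse response function. A toy bivariate VAR(1) shows the mechanics; the coefficients below are invented for illustration, not his estimates:

```python
# Toy illustration (not Hendershott's actual model): in a bivariate VAR(1),
#   y_t = A y_{t-1} + e_t,  with  y = [return, order_flow],
# the impulse response to a one-unit order-flow shock at horizon h is
# A^h applied to the shock, and the cumulative price impact is the sum
# of the return responses. Coefficients below are hypothetical.

A = [[0.2, 0.05],   # return responds to lagged order flow (hypothetical)
     [0.0, 0.5]]    # order flow is persistent (hypothetical)

def matvec(A, y):
    return [A[0][0] * y[0] + A[0][1] * y[1],
            A[1][0] * y[0] + A[1][1] * y[1]]

def cumulative_irf(A, shock, horizons=20):
    """Cumulative response of the first variable (the return) to `shock`."""
    total, y = shock[0], list(shock)
    for _ in range(horizons):
        y = matvec(A, y)
        total += y[0]
    return total

# A one-unit order-flow innovation moves the return by 0.125 in the long run.
print(round(cumulative_irf(A, [0.0, 1.0]), 4))   # 0.125
```

The point of the exercise: the IRF nets out the direct and indirect feedbacks (orders moving prices, prices moving orders) that a naive event study would miss, which is why Hendershott uses it.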

Note further that the DOJ, the CFTC, and Hendershott all state that Sarao see-sawed back and forth, turning the algorithm on and off, and that turning off the algorithm caused prices to rebound by approximately the same amount as turning it on caused prices to fall. So, as I conjectured originally, his activity-even based on the government’s theory and evidence-did not bias prices upwards or downwards systematically.

This is directly contrary to the consistent insinuation throughout the criminal and civil complaints that Sarao was driving down prices. For example, the criminal complaint states that during the period of time that Sarao was using the algorithm “the E-Mini price fell by 361 [price] basis points” (which corresponds to a negative return of about 31 basis points). This is two orders of magnitude bigger than the impact calculated from Hendershott’s .3 basis point return estimate, even assuming that the algorithm was working only one way during this interval.

Further, Sarao was buying and selling in about equal quantities. So based on the theory and evidence advanced by the government, Sarao was causing oscillations in the price of a magnitude of a fraction of a tick, even though the complaints repeatedly suggest his algorithm depressed prices. To the extent he made money, he was making it by trading large volumes and earning a small profit on each trade that he might have enhanced slightly by layering, not by having a big unidirectional impact on prices as the government alleges.

The small magnitudes are a big deal, given the way the complaints are written, in particular the insinuations that Sarao helped cause the Flash Crash. The magnitudes of market price movements dwarf the impacts that the CFTC’s own outside expert calculates. And the small magnitudes raise serious questions about the propriety of bringing such serious charges.

Hendershott repeatedly says his results are “statistically significant.” Maybe he should read Deirdre McCloskey’s evisceration of the Cult of Statistical Significance. It’s economic significance that matters, and his results are economically minuscule compared to the impact alleged. Hendershott has a huge sample size, which can make even trivial economic impacts statistically significant. But it is the economic significance that is relevant. On this, Hendershott is completely silent.

The CFTC complaint has a section labeled “Example of the Layering Algorithm Causing an Artificial Price.” I read it with interest, looking for, you know, actual evidence and stuff. There was none. Zero. Zip. There is no analysis of the market price at all. None! This is of a piece with the other assertions of price artificiality, including most notably the effect of the activity on the Flash Crash: a series of conclusory statements either backed by no evidence, or evidence (in the form of the Hendershott affidavit) that demonstrates how laughable the assertions are.

CFTC enforcement routinely whines at the burdens it faces proving artificiality, causation and intent in a manipulation case. Here they have taken on a huge burden and are running a serious risk of getting hammered in court. I’ve already addressed the artificiality issue, so consider causation for a moment. If CFTC dares to try to prove that Sarao caused-or even contributed to-the Crash, it will face huge obstacles. Yes, as Chris Clearfield and James Weatherall rightly point out, financial markets are emergent, highly interconnected and tightly coupled. This creates non-linearities: small changes in initial conditions can lead to huge changes in the state of the system. A butterfly flapping its wings in the Amazon can cause a hurricane in the Gulf of Mexico: but tell me, exactly, which of the billions of butterflies in the Amazon caused a particular storm? And note that it is the nature of these systems that changing the butterfly’s position slightly (or changing the position of other butterflies) can result in a completely different outcome (because such systems are highly sensitive to initial conditions). There were many actors in the markets on 6 May, 2010. Attributing the huge change in the system to the behavior of any one individual is clearly impossible. As a matter of theory, yes, it is possible that, given the state of the system on 6 May, activity that Sarao undertook with no adverse consequences on myriad other days caused the market to crash on that particular day when it didn’t on others: but it is metaphysically impossible to prove it. The very nature of emergent orders makes it impossible to reverse engineer the cause out of the effect.

A few additional points.

I continue to be deeply disturbed by the “sample days” concept employed in the complaints and in Hendershott’s analysis. This smacks of cherry picking. Even if one uses a sample, it should be a random one. And yeah, right, it just so happened that the Flash Crash day and the two preceding days turned up in a random sample. Pure chance! This further feeds suspicions of cherry picking, and opportunistic and sensationalist cherry picking at that.

Further, Hendershott (in paragraph 22 of his affidavit) asserts that there was a statistically significant price decline after Sarao turned on the algorithm, and a statistically significant price increase when he turned it off. But he presents no numbers, whereas he does report impacts of non-Sarao-specific activity elsewhere in the affidavit. This is highly suspicious. Is he too embarrassed to report the magnitude? This is a major omission, because it is the impact of Sarao’s activity, not offering away from the market generally, that is at issue here.

Relatedly, why not run a VAR (and the associated IRF) using Sarao’s orders as one of the variables? After all, this is the variable of interest: what we want to know is how Sarao’s orders affected prices. Hendershott is implicitly imposing a restriction, namely, that Sarao’s orders have the same impact as other orders at the same level of the book. But that is testable.

Moreover, Hendershott’s concluding paragraph (paragraph 23) is incredibly weak, and smacks of post hoc, ergo propter hoc reasoning. He insinuates that Sarao contributed to the Crash, but oddly distances himself from responsibility for the claim, throwing it on regulators instead: “The layering algorithm contributed to the overall Order Book imbalances and market conditions that the regulators say led to the liquidity deterioration prior to the Flash Crash.” Uhm, Terrence, you are the expert here: it is incumbent on you to demonstrate that connection, using rigorous empirical methods.

In sum, the criminal and civil complaints make a Matterhorn out of a molehill, and a small molehill at that. And don’t take my word for it: take the “[declaration] under penalty of perjury” of the CFTC’s expert. This is a matter of magnitudes, and magnitudes matter. The CFTC’s own expert estimates very small impacts, and impacts that oscillate up and down with the activation and de-activation of the algorithm.

Yes, Sarao’s conduct was dodgy, clearly, and there is a colorable case that he did engage in spoofing and layering. But the disparity between the impact of his conduct as estimated by the government’s own expert and the legal consequences that could arise from his prosecution is so huge as to be outrageous.

Particularly so since over the years CFTC has responded to acts that have caused huge price distortions, and inflicted losses in nine and ten figures, with all of the situational awareness of Helen Keller. It is as if the enforcers see the world through a fun house mirror that grotesquely magnifies some things, and microscopically shrinks others.

In proceeding as they have, DOJ and the CFTC have set off a feeding frenzy that could have huge regulatory and political impacts that affect the exchanges, the markets, and all market participants. CFTC’s new anti-manipulation authority permits it to sanction reckless conduct. If it was held to that standard, the Sarao prosecution would earn it a long stretch of hard time.

*Hendershott’s affidavit says that Exhibit 4 reports the IRF analysis, but it does not.

 


April 22, 2015

Spoofing: Scalping Steroids?

Filed under: Derivatives,Economics,Exchanges,HFT,Regulation — The Professor @ 5:35 pm
The complaint against Sarao contains some interesting details. In particular, it reports his profits and quantities traded for nine days.

First, quantities bought and sold are almost always equal. That is characteristic of a scalper.

Second, for six of the days, he earned an average of .63 ticks per round turn. That is about the profit you’d expect a scalper to realize: due to adverse selection, a market maker typically doesn’t earn the full quoted spread. On only one of these days is the average profit per round turn more than a tick, and then just barely.

Third, there is one day (4 August, 2011) where he earned a whopping 19.6 ticks per round trip ($4 million profit on 16695 buy/sells). I find that hard to believe.
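These figures are easy to sanity-check. With the E-mini’s standard $12.50-per-contract tick value, profit is just round turns × ticks per turn × tick value (a sketch; the round-turn count for 4 August is the one reported in the complaint, and the 10,000-round-turn example is hypothetical):

```python
# Sanity-checking the complaint's profit figures. The E-mini S&P 500
# tick value is $12.50 per contract; the 4 August round-turn count is
# the complaint's, and the 10,000-round-turn day is hypothetical.

TICK_VALUE = 12.50   # dollars per tick per contract

def scalper_pnl(round_turns, ticks_per_turn):
    """Dollar profit from `round_turns` round trips at the given edge."""
    return round_turns * ticks_per_turn * TICK_VALUE

# A hypothetical 10,000 round turns at the .63-tick average edge:
print(scalper_pnl(10_000, 0.63))    # 78750.0 -- about $79k for the day

# The 4 August 2011 figures are at least internally consistent:
print(scalper_pnl(16_695, 19.6))    # 4090275.0 -- roughly the $4m claimed
```

Note that the $4 million day is internally consistent with 19.6 ticks per round turn; the implausibility lies in a scalper earning a 19.6-tick edge at all, not in the arithmetic.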

Fourth, there are two days that the government reports the profit but not the volume. One of these days is 6 May, 2010, the Flash Crash day. I find that omission highly suspicious, given that this is the most important day.

Fifth, I again find it odd, and potentially problematic for the government, that it charges him with fraud, manipulation, and spoofing on only 9 days when he allegedly used the layering strategy on about 250 days. How did the government establish that trading on some days was illegal, and on other days it wasn’t?

The most logical explanation of all this is that Sarao was basically scalping-market making-and if he spoofed, he did so to enhance the profitability of this activity, either by scaring off competition at the inside market, or inducing a greater flow of market orders, or both.

One implication of this is that scalping does not tend to cause prices to move one direction or the other. It is passive, and balances buys and sells. This will present great difficulties in pursuing the manipulation charges, though not the spoofing charges and perhaps not the fraud charges.

 


Did Spoofing Cause the Flash Crash? Not So Fast!

Filed under: Derivatives,Economics,HFT,Regulation — The Professor @ 12:41 pm
The United States has filed criminal charges against Navinder Sarao, of London, for manipulation via “spoofing” (in the form of “layering”) and “flashing.” The most attention-grabbing aspect of the complaint is that Sarao engaged in this activity on 6 May, 2010-the day of the Flash Crash. Journalists have run wild with this allegation, concluding that he caused the Crash.

Sarao’s layering strategy involved placement of sell orders at various levels more than two ticks away from the best offer. At his request, “Trading Software Company #1” (I am dying to know who that would be) created an algorithm implemented in a spreadsheet that would cancel these orders if the inside market got close to these resting offers, and replace them with new orders multiple levels away from the new inside market. The algorithm would also cancel orders if the depth in the book at better prices fell below a certain level. Similarly, if the market moved away from his resting orders, those orders would be cancelled and re-entered at the designated distances from the new inside market level.
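The cancel/replace logic described in the complaint can be sketched as follows. This is illustrative only: the sizes, thresholds, and differential cancel/replace below are my assumptions, not the actual spreadsheet algorithm.

```python
# Schematic of the cancel/replace logic described in the complaint.
# Illustrative only: sizes, thresholds, and the differential
# cancel/replace are assumptions, not the actual spreadsheet algorithm.

TICK = 0.25  # E-mini minimum price increment

def refresh_layers(best_offer, resting, levels=(4, 5, 6, 7), size=600,
                   min_depth=None, depth_at_better=0):
    """Return (orders_to_cancel, orders_to_enter) for a new inside offer.

    resting: list of (price, size) sell orders currently working.
    min_depth: if set, pull everything when the depth at better prices
    falls below it (the depth-contingent cancellation in the complaint).
    """
    if min_depth is not None and depth_at_better < min_depth:
        return list(resting), []          # cancel all, replace nothing
    targets = [(best_offer + lvl * TICK, size) for lvl in levels]
    cancels = [o for o in resting if o not in targets]
    enters = [t for t in targets if t not in resting]
    return cancels, enters

# Market ticks down from 1165.00 to 1164.75: the top layer is pulled and
# a new one is entered a tick lower, keeping all orders 4-7 levels away.
resting = [(1166.00, 600), (1166.25, 600), (1166.50, 600), (1166.75, 600)]
cancels, enters = refresh_layers(1164.75, resting)
print(cancels, enters)   # [(1166.75, 600)] [(1165.75, 600)]
```

The key property: because the orders chase the market, staying several levels away, they are essentially never executed, which is precisely what makes the cancellation rate so high and the orders arguably non-bona fide.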

The complaint is mystifying on the issue of how Sarao made money (allegedly $40 million between 2010 and 2014). To make money, you need to buy low, sell high (you read it here first!), which requires actual transactions. And although the complaint details how many contracts Sarao traded and how many trades (e.g., 10682 buys totaling 74380 lots and 8959 sells totaling 74380 lots on 5 May, 2010-big numbers), it doesn’t say how the trades were executed and what Sarao’s execution strategy was.

While the complaint goes into great detail regarding the allegedly fraudulent orders that were never executed, it is maddeningly vague about the trades that were. It says only:

[W]hile the dynamic layering technique exerted downward pressure on the market SARAO typically executed a series of trades to exploit his own manipulative activity by repeatedly selling futures only to buy them back at a slightly lower price. Conversely, when the market moved back upward as a result of SARAO’s ceasing the dynamic layering technique, SARAO typically did the opposite, that is he repeatedly bought contracts only to sell them at a slightly higher price.

But how were these buys and sells executed? Market orders? Limit orders? Since crossing the spread is expensive, I seriously doubt he used market orders: even if the strategy drove down both bids and offers, using aggressive orders would have forced Sarao to pay the spread, making it impossible to profit. What was the sequence? The complaint suggests that he sold (bought) after driving the price down (up). This seems weird: it would make more sense to do the reverse.

In previous cases, Moncada and Coscia (well-summarized here), the scheme allegedly worked by placing limit orders on both sides of the market in unbalanced quantities, and see-sawing back and forth. For instance, the schemers would allegedly place a small buy order at the prevailing bid, and then put big away from the market orders on the offer side. Once the schemer’s bid was hit, the contra side orders would be cancelled, and he would then switch sides: entering a sell order at the inside market and large away-from-market buys. This strategy is best seen as a way of earning the spread. Presumably its intent is to increase the likelihood of execution of the at-the-market order by using the big contra orders to induce others with orders at the inside market to cancel or reprice. This allowed the alleged manipulators to earn the spread more often than they would have without using this “artifice.”
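Schematically, the alleged Moncada/Coscia pattern looks like this (a stylized sketch; the prices, sizes, and function names are invented):

```python
# Stylized version of the alleged Moncada/Coscia see-saw (prices and
# sizes invented): a small genuine order at the inside market, a large
# away-from-market order on the opposite side, and a flip once filled.

def half_cycle(side, bid, ask, small=10, big=500):
    """Orders for one half-cycle; side=+1 means working the bid."""
    if side > 0:
        return [("buy", bid, small),            # order meant to trade
                ("sell", ask + 3 * 0.25, big)]  # pressure order, away
    return [("sell", ask, small),
            ("buy", bid - 3 * 0.25, big)]

def on_fill(side):
    """Genuine order filled: cancel the pressure order, flip sides."""
    return -side

side = 1
orders = half_cycle(side, 1164.75, 1165.00)
side = on_fill(side)                  # bid filled; now work the offer
orders = half_cycle(side, 1164.75, 1165.00)
print(orders)   # [('sell', 1165.0, 10), ('buy', 1164.0, 500)]
```

Each completed cycle buys at the bid and sells at the offer, so the scheme is a way of earning the spread more reliably, not of pushing price in one direction.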

But we don’t have that detail in Sarao. The complaint does describe the “flashing” strategy in similar terms as in Moncada and Coscia, (i.e., entering limit orders on both sides of the market) but it does not describe the execution strategy in the layering scheme, which the complaint calls “the most prominent manipulative technique he used.”

If, as I conjecture, he was using something like Moncada and Coscia were alleged to have employed, it is difficult to see how his activities would have caused prices to move systematically one direction or the other as the government alleges. Aggressive orders tend to move the market, and if my conjecture is correct, Sarao was using passive orders. Further, he was buying and selling in almost (and sometimes exactly) equal quantities. Trading involving lots of cancellations plus trades in equal quantities at the bid and offer shares similarities with classic market making strategies. This should not move price systematically one way or the other.

But both with regards to the Flash Crash, and 4 May, 2010, the complaint insinuates that Sarao moved the price down:

As the graph displays, SARAO successfully modified nearly all of his orders to stay between levels 4 and 7 of the sell side of the order book. What is more, Exhibit A shows the overall decline in the market price of the E-Minis during this period.

But on 4 May, Sarao bought and sold the exact same number of contracts (65,015). How did that cause price to decline?

Attributing the Flash Crash to his activity is also highly problematic. It smacks of post hoc, ergo propter hoc reasoning. Or look at it this way. The complaint alleges that Sarao employed the layering strategy about 250 days, meaning that he caused 250 out of the last one flash crashes. I can see the defense strategy. When the government expert is on the stand, the defense will go through every day. “You claim Sarao used layering on this day, correct?” “Yes.” “There was no Flash Crash on that day, was there?” “No.” Repeating this 250 times will make the causal connection between his trading and the Flash Crash seem very problematic, at best. Yes, perhaps the market was unduly vulnerable to dislocation in response to layering on 6 May, 2010, and hence his strategy might have been the straw that broke the camel’s back, but that is a very, very, very hard case to make given the very complex conditions on that day.

There is also the issue of who this conduct harmed. Presumably HFTs were the target. But how did it harm them? If my conjecture about the strategy is correct, it increased the odds that Sarao earned the spread, and reduced the odds that HFTs earned the spread. Alternatively, it might have induced some people (HFTs, or others) to submit market orders that they wouldn’t have submitted otherwise. Further, HFT strategies are dynamic, and HFTs learn. One puzzle is why away from the market orders would be considered informative, particularly if they are used frequently in a fraudulent way (i.e., they do not communicate any information). HFTs mine huge amounts of data to detect patterns. The complaint alleges Sarao engaged in a pronounced pattern of trading that certainly HFTs would have picked up, especially since allegations of layering have been around ever since the markets went electronic. This makes it likely that there was a natural self-correcting mechanism that would tend to undermine the profitability of any manipulative strategy.

There are also some interesting legal issues. The government charges Sarao under the pre-Dodd-Frank Section 7 (anti-manipulation) of the Commodity Exchange Act. Proving this manipulation claim requires proof of price artificiality, causation, and intent. The customized software might make the intent easy to prove in this case. But price artificiality and causation will be real challenges, particularly if Sarao’s strategy was similar to Moncada’s and Coscia’s. Proving causation in the Flash Crash will be particularly challenging, given the complex circumstances of that day, and the fact that the government has already laid the blame elsewhere, namely on the Waddell & Reed trades. Causation and artificiality arguments will also be difficult to make given that the government is charging him only for a handful of days that he used the strategy. One suspects some cherry-picking. Then, of course, there is the issue of whether the statute is Constitutionally vague. Coscia recently lost on that issue, but Radley won on it in Houston. It’s an open question.

I am less familiar with Section 18 fraud claims, or the burden of proof regarding them. Even under my conjecture, it is plausible that HFTs were defrauded from earning the spread, or that some traders paid the spread on trades they wouldn’t have made. But if causation is an element here, there will be challenges. It will require showing how HFTs (or other limit order traders) responded to the spoofing. That won’t be easy, especially since HFTs are unlikely to want to reveal their algorithms.

The spoofing charge is based on the post-Frankendodd CEA, with its lower burden of proof (recklessness not intent, and no necessity of proving an artificial price). That will be easier for the government to make stick. That gives the government considerable leverage. But it is largely unexplored territory: this is almost a case of first impression, or at least it is proceeding in parallel with other cases based on this claim, and so there are no precedents.

There are other issues here, including most notably the role of CME and the CFTC. I will cover those in a future post. Suffice it to say that this will be a complex and challenging case going forward, and the government is going to have to do a lot more explaining before it is possible to understand exactly what Sarao did and the impact he had.

 


April 21, 2015

Gary Gensler Resurfaces as Hillary!’s CFO: Is He Our Next Treasury Secretary?

Filed under: HFT,Politics — The Professor @ 7:27 pm
At a couple of conferences recently, people asked me what Gary Gensler is up to. I said “I don’t know. It’s not like GiGi and I are buddies.” (True fact: he had me banned from the CFTC building.) Well, now we all know what he’s up to: Gensler has landed as the CFO of Hillary’s presidential campaign.

When Gensler was CFTC chair, I surmised he had ambitions to replace Timmy! as Secretary of the Treasury. But that went to a Rubinoid, Jack Lew. There was also talk of Gensler running for the Senate from Maryland (Mikulski has announced her retirement), but better-known Dem pols in the state are poised to run, so that’s not an option.

Taking the campaign CFO job probably does give Gensler an inside track on the coveted SecTreas job. If Hillary wins. If.

Yes, I know she is the odds-on favorite. But she was shopping for Oval Office curtains in 2008, and we know how that turned out.

Hillary’s problem is, well, Hillary. A lot of people like the idea of Hillary. It’s the real person that is the problem.

This has been illustrated by her slow-motion-train-wreck of a campaign kickoff. There’s an old expression: if you can fake sincerity, you have it made. Hillary hasn’t quite mastered that yet. The launch and the comically contrived “spontaneous” road trip to Iowa were about as authentic as Velveeta. It was a remarkable act of will, because you can just tell how much Hillary hates to be with actual people. Further, she has operated in a bubble, protected by some Harry Potteresque charm that repels all serious questions from serious people.

Eventually, though, her personality will shine through. And that’s the problem. Playing word association, if you say “Hillary”, I say: shrill, angry, bitter, entitled, strident, rigid, ideological, dishonest, hyper-partisan, vengeful, arrogant, paranoid, and . . . I could go on. And on. And on. And she’s not that bright: whoever calls her “the smartest woman in the world” is a virulent misogynist, with an obviously low opinion of women. I, on the other hand, think so highly of women that I would rather select the next president by lot from America’s 150 million or so adult females than by an election in which Hillary is the Democratic Party standard bearer. 150 million-to-one: I’ll take those odds over better than even any day.

She is also an awful politician. She has no political instincts whatsoever. You can see the gears grinding behind her phony grin, trying to figure out what would be the politically advantageous thing to say. Today’s persona is Class Warrior. She recently said the one percenters must be “toppled.” Actually, I could kinda go for that, because despite her past protestations of being as poor as a church mouse, she is definitely in that class now.

In other words, she’s no Bill, who was, if nothing else, a natural politician with a magnetism and suppleness that could overcome his other deficiencies.

Which brings up another issue: the psychodrama between Hillary and Bill. You would think that Bill is a major asset, but I wonder. She wants to win on her own, and has put up with decades of humiliation from him to advance her ambitions: will she put herself in a position where she has to accept his help to win? Nor are Bill’s incentives unmixed. Will he want to play second fiddle as the first First Husband? Hillary’s campaign in 2008 was a soap opera: will 2016 be any different?

Then there’s the old baggage, of which Hillary has more than the lost and found at JFK. (I contributed, in a modest way, to that collection, many years ago, as detailed in the Senate Whitewater Report and the Congressional Record.) It is quite a remarkable record, stretching from the distant past, when she was fired from the Watergate Committee staff, to Arkansas skullduggery, to various White House scandals, to her service as Secretary of State (Benghazi, preventing the naming of Boko Haram as a terrorist organization, the Reset), to the very present (the stench of cronyism and influence peddling at the Clinton Foundation, and the Immaculate Abortion of her private email server).

Further, she’s not getting any younger, and it shows.

So she has many liabilities. What about the assets? They are formidable, particularly a national media that may not like her, but hates Republicans more. They can be counted on to avoid criticizing her, to form a defensive phalanx around her, and to attack her Republican adversary relentlessly. That didn’t help her in the primaries in 2008, when the fickle press found someone even more attractive. But there is no Barack Obama on offer in 2015-2016.

She also has a relentless fundraising machine, a reliable and experienced party and campaign apparatus, union support, and a solid base who would vote for Godzilla over a Republican.

Thus, she has great institutional advantages that will go far in overcoming her severe personal deficiencies.

But her biggest asset is that you can’t beat somebody with nobody, and right now the Republicans are offering up national nobodies. Maybe a somebody will emerge, but I wouldn’t count on it.

All meaning that although Hillary is a flawed person, and a flawed candidate, she has many advantages. So, as much as it pains me to say so, GiGi’s wish may come true. And as bad as a Gensler Treasury would be, it pains me even more to say that it likely would be one of the best parts of a Hillary Clinton Administration.

