Streetwise Professor

June 30, 2016

Financial Network Topology and Women of System: A Dangerous Combination

Filed under: Clearing,Derivatives,Economics,Financial crisis,Politics,Regulation — The Professor @ 7:43 pm

Here’s a nice article by Robert Henderson in the science magazine Nautilus which poses the question: “Can topology prevent the next financial crisis?” My short answer: No.  A longer answer–which I sketch out below–is that a belief that it can is positively dangerous.

The idea behind applying topology to the financial system is that financial firms are interconnected in a network, and these connections can be represented in a network graph that can be studied. At least theoretically, if you model the network formally, you can learn its properties–e.g., how stable is it? will it survive certain shocks?–and perhaps figure out how to make the network better.

Practically, however, this is an illustration of the maxim that a little bit of knowledge is a dangerous thing.

Most network modeling has focused on counterparty credit connections between financial market participants. This research has attempted to quantify these connections and graph the network, and ascertain how the network responds to certain shocks (e.g., the bankruptcy of a particular node), and how a reconfigured network would respond to these shocks.
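The kind of stress test these studies run can be sketched in a few lines. Everything below is invented for illustration (the graph, the exposure amounts, the capital buffers); real studies use reported bilateral exposures, but the mechanics of a simulated default cascade look roughly like this:

```python
def cascade(exposures, capital, initial_default):
    """Propagate defaults through a counterparty network: when a node fails,
    each of its creditors writes off what the failed node owed it, and any
    creditor whose cumulative losses exceed its capital fails in turn."""
    failed = {initial_default}
    frontier = [initial_default]
    losses = {n: 0.0 for n in capital}
    while frontier:
        nxt = []
        for debtor in frontier:
            for creditor, amount in exposures.get(debtor, {}).items():
                if creditor in failed:
                    continue
                losses[creditor] += amount
                if losses[creditor] > capital[creditor]:
                    failed.add(creditor)
                    nxt.append(creditor)
        frontier = nxt
    return failed

# exposures[debtor][creditor] = amount the debtor owes the creditor
exposures = {
    "A": {"B": 8, "C": 3},
    "B": {"C": 6},
    "C": {"D": 5},
    "D": {},
}
capital = {"A": 5, "B": 5, "C": 5, "D": 10}

# A's failure costs B 8 (> 5 capital), B's failure pushes C to 9 (> 5);
# D absorbs 5 against 10 of capital and survives.
print(sorted(cascade(exposures, capital, "A")))  # → ['A', 'B', 'C']
```

Note what the sketch leaves out, which is exactly the post's complaint: losses arrive only through credit write-offs, with no margin calls, funding runs, or fire sales — the liquidity channel is invisible by construction.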

There are many problems with this. One major problem–which I’ve been on about for years, and which I am quoted about in the Nautilus piece–is that counterparty credit exposure is only one of many types of connection in the financial network: liquidity is another source of interconnection. Furthermore, these network models typically ignore the nature of the connections between nodes. In the real world, nodes can be tightly coupled or loosely coupled. The stability features of tightly and loosely coupled networks can be very different even if their topologies are identical.

As a practical example, not only does mandatory clearing change the topology of a network, it also changes the tightness of the coupling through the imposition of rigid variation margining. Tighter coupling can change the probability of the failure of connections, and the circumstances under which these failures occur.

Another problem is that models frequently leave out some participants. As another practical example, network models of derivatives markets include the major derivatives counterparties, and find that netting reduces the likelihood of a cascade of defaults within that network. But netting achieves this by redistributing the losses to other parties who are not explicitly modeled. As a result, the model is incomplete, and gives an incomplete understanding of the full effects of netting.
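A toy insolvency calculation makes the redistribution point concrete. All figures here are invented: a failed firm F owes dealer D 100 and another creditor O 100, D owes F 80, and F holds 60 in other assets. With close-out netting, D keeps its 80 offset in full, which shrinks the estate available to everyone else:

```python
def pro_rata(estate, claims):
    """Distribute a bankruptcy estate pro rata across unsecured claims."""
    rate = min(1.0, estate / sum(claims.values()))
    return {c: amt * rate for c, amt in claims.items()}

other_assets = 60  # F's assets excluding its 80 claim on D

# Without netting: D pays its 80 into F's estate and claims its full 100.
no_net = pro_rata(other_assets + 80, {"D": 100, "O": 100})   # 140 / 200 = 70% recovery

# With close-out netting: D offsets 80, leaving a net claim of 20 on an estate of 60.
net = pro_rata(other_assets, {"D": 20, "O": 100})            # 60 / 120 = 50% recovery

loss_D_no_net = 100 - no_net["D"]   # 30
loss_D_net = 20 - net["D"]          # 10: the 80 offset was kept in full
loss_O_no_net = 100 - no_net["O"]   # 30
loss_O_net = 100 - net["O"]         # 50: O absorbs what D no longer does
```

Within the dealer network the cascade risk falls (D's loss drops from 30 to 10), but O's loss rises from 30 to 50. A model that draws only the D-nodes records the first effect and never sees the second.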

Thus, any network model is inherently a very partial one, and is therefore likely to be a very poor guide to understanding the network in all its complexity.

The limitations of network models of financial markets remind me of the satirical novel Flatland, where the inhabitants of Pointland, Lineland, and Flatland are flummoxed by higher-dimensional objects. A square finds it impossible to conceptualize a sphere, because he only observes the circular section as it passes through his plane. But in financial markets the problem is much greater because the dimensionality is immense, the objects are not regular and unchanging (like spheres) but irregular and constantly changing on many dimensions and time scales (e.g., nodes enter and exit or combine, nodes can expand or contract, and the connections between them change minute to minute).

This means that although network graphs may help us better understand certain aspects of financial markets, they are laughably limited as a guide to policy aimed at reengineering the network.

But frighteningly, the Nautilus article starts out with a story of Janet Yellen comparing a network graph of the uncleared CDS market (analogized to a tangle of yarn) with a much simpler graph of a hypothetical cleared market. Yellen thought it was self-evident that the simple cleared market was superior:

Yellen took issue with her ball of yarn’s tangles. If the CDS network were reconfigured to a hub-and-spoke shape, Yellen said, it would be safer—and this has been, in fact, one thrust of post-crisis financial regulation. The efficiency and simplicity of Kevin Bacon and Lowe’s Hardware is being imposed on global derivative trading.


God help us.

Rather than rushing to judgment, a la Janet, I would ask: “why did the network form in this way?” I understand perfectly that there is unlikely to be an invisible hand theorem for networks, whereby the independent and self-interested actions of actors result in a Pareto optimal configuration. There are feedbacks and spillovers and non-linearities. As a result, the concavity that drives the welfare theorems is notably absent. An Olympian economist is sure to identify “market failure,” and be mightily displeased.

But still, there is optimizing behavior going on, and connections are formed and nodes enter and exit and grow and shrink in response to profit signals that are likely to reflect costs and benefits, albeit imperfectly. Before rushing in to change the network, I’d like to understand much better why it came to be the way it is.

We have only rudimentary understanding of how network configurations develop. Yes, models that specify simple rules of interaction between nodes can be simulated to produce networks that differ substantially from random networks. These models can generate features like the small world property. But it is a giant leap to go from that, to understanding something as huge, complex, and dynamic as a financial system. This is especially true given that there are adjustment costs that give rise to hysteresis and path-dependence, as well as shocks that give rise to changes.
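As an example of such a simple generative rule (a sketch, not a claim about how real financial networks form), preferential attachment–each entrant links to an incumbent with probability proportional to the incumbent’s degree–produces hub-dominated graphs quite unlike a uniform random network:

```python
import random

def preferential_attachment(n, seed=0):
    """Grow a graph one node at a time; each new node links to an existing
    node chosen in proportion to its degree ('rich get richer'). This simple
    rule yields a heavy-tailed degree distribution with a few large hubs."""
    rng = random.Random(seed)
    targets = [0, 1]          # degree-weighted list: node i appears deg(i) times
    degree = {0: 1, 1: 1}
    for new in range(2, n):
        old = rng.choice(targets)   # uniform draw from a degree-weighted list
        degree[new] = 1
        degree[old] += 1
        targets += [new, old]
    return degree

deg = preferential_attachment(2000)
print(max(deg.values()))                     # the biggest hub's degree
print(sorted(deg.values())[len(deg) // 2])   # versus the median node's
```

The hub's degree dwarfs the median's, echoing the concentration seen in real dealer networks. But that is where the analogy ends: the rule is static and mechanical, while real nodes merge, exit, re-price, and rewire in response to incentives, which is precisely the giant leap the text describes.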

Further, let’s say that the Olympian economist Yanet Jellen establishes that the existing network is inefficient according to some criterion (not that I would even be able to specify that criterion, but work with me here). What policy could she adopt that would improve the performance of the network, let alone make it optimal?

The very features–feedbacks, spillovers, non-linearities–that can create suboptimality also make it virtually impossible to know how any intervention will affect that network, for better or worse, under the myriad possible states in which that network must operate. Networks are complex and emergent and non-linear. Changes to one part of the network (or changes to the way that agents who interact to create the network must behave and interact) can have impossible-to-predict effects throughout the entire network. Small interventions can lead to big changes, but which ones? Who knows? No one can say “if I change X, the network configuration will change to Y.” I would submit that it is impossible even to determine the probability distribution of configurations that arise in response to policy X.

In the language of the Nautilus article, it is delusional to think that simplicity can be “imposed on” a complex system like the financial market. The network has its own emergent logic, which passeth all understanding. The network will respond in a complex way to the command to simplify, and the outcome is unlikely to be the simple one desired by the policymaker.

In natural systems, there are examples where eliminating or adding a single species may have little effect on the network of interactions in the food web. Eliminating one species may just open a niche that is quickly filled by another species that does pretty much the same thing as the species that has disappeared. But eliminating a single species can also lead to a radical change in the food web, and perhaps its complete collapse, due to the very complex interactions between species.

There are similar effects in a financial system. Let’s say that Yanet decides that in the existing network there is too much credit extended between nodes by uncollateralized derivatives contracts: the credit connections could result in cascading failures if one big node goes bankrupt. So she bans such credit. But the credit was performing some function that was individually beneficial for the nodes in the network. Eliminating this one kind of credit creates a niche that other kinds of credit could fill, and profit-motivated agents have the incentive to try to create it, so a substitute fills the vacated niche. The end result: the network doesn’t change much, the amount of credit and its basic features don’t change much, and the performance of the network doesn’t change much.

But it could be that the substitute forms of credit, or the means used to eliminate the disfavored form of credit (e.g., requiring clearing of derivatives), fundamentally change the network in ways that affect its performance, or at least can do so in some states of the world. For example, it may make the network more tightly coupled, and therefore more vulnerable to precipitous failure.

The simple fact is that anybody who thinks they know what is going to happen is dangerous, because they are messing with something that is very powerful that they don’t even remotely understand, or understand how it will change in response to meddling.

Hayek famously said “the curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” Tragically, too many (and arguably a large majority of) economists are the very antithesis of what Hayek says that they should be. They imagine themselves to be designers, and believe they know much more than they really do.

Janet Yellen is just one example, a particularly frightening one given that she has considerable power to implement the designs she imagines. Rather than being the Hayekian economist putting the brake on ham-fisted interventions into poorly understood systems, she is far closer to Adam Smith’s “Man of System”:

The man of system, on the contrary, is apt to be very wise in his own conceit; and is often so enamoured with the supposed beauty of his own ideal plan of government, that he cannot suffer the smallest deviation from any part of it. He goes on to establish it completely and in all its parts, without any regard either to the great interests, or to the strong prejudices which may oppose it. He seems to imagine that he can arrange the different members of a great society with as much ease as the hand arranges the different pieces upon a chess-board. He does not consider that the pieces upon the chess-board have no other principle of motion besides that which the hand impresses upon them; but that, in the great chess-board of human society, every single piece has a principle of motion of its own, altogether different from that which the legislature might chuse to impress upon it. If those two principles coincide and act in the same direction, the game of human society will go on easily and harmoniously, and is very likely to be happy and successful. If they are opposite or different, the game will go on miserably, and the society must be at all times in the highest degree of disorder.

When there are Men (or Women!) of System about, and the political system gives them free rein, analytical tools like topology can be positively dangerous. They make some (unjustifiably) wise in their own conceit, and give rise to dreams of Systems that they attempt to implement, when in fact their knowledge is shockingly superficial, and implementing their Systems is likely to create the highest degree of disorder.


17 Comments »

  1. Adding to the product line first Climate In A Box and now Global Economy In A Box-Gosplan 2.0.

    If x then y-simple as that.

    Comment by pahoben — July 1, 2016 @ 7:37 am

  2. Rumsfeld had comments applicable to market topology. In simulation, the system description is limited by (1) things you know you don’t know and (2) things you don’t know that you don’t know, so the results do not fully reflect the system being simulated. The things you know you don’t know provide some indication of the limits of the results, but the things you don’t know that you don’t know can be catastrophic.

    I know how difficult it is to model systems simpler than this when you cannot fully describe the system. The sad part is that many people don’t like to think and so grab model output results like a drowning man grabbing a life preserver and make bad decisions based on the output. This cascades simply because people are desperate for a forecast of the behavior of complex systems that cannot be reliably modeled and to justify their actions at a later date when things go south by pointing to model results.

    It looks like Yellen runs the Fed by Powerpoint, and so the slides that look best have the most impact on policy. Some organizations seem to consist primarily of people sitting around trying to poison each other with Powerpoint slides, but I doubt the Fed has yet advanced to that stage of Powerpoint development.

    Comment by pahoben — July 1, 2016 @ 9:01 am

  3. My point in the last paragraph was that since these topology slides look very good there is higher probability they will be used to create policy.

    Comment by pahoben — July 1, 2016 @ 9:32 am

  4. I’ve been thinking a lot about this lately as I try to get my hands around what the global banking system is. (I thought I had a general idea of what the shadow banking system is but now realize that I am 3-4 years out of date, so I know very little.) Given that complexity and constant change are hallmarks of the current financial system, shouldn’t we consider mandating more simplicity in the banking system? I’m a big free markets guy, but the financial crisis led to a LOT of lost GDP and a whole lot of money going to people who shouldn’t have gotten it.

    I guess that I’m asking whether financial innovation is “worth it”. Now, I quiver as I write this. I hate rent-seeking and know where government regulations lead, but we didn’t have a major banking crisis from the end of the Great Depression to Continental Illinois. What did we do right? Can we do it again?

    Comment by Highgamma — July 1, 2016 @ 11:13 am

  5. If something can be studied, it implies the events of the past are being measured and analyzed (otherwise they would be predicted, not studied).

    Anything dealing with past events can only be projected forward with the caveat that human behavior is irrational and unpredictable.

    Good luck trying to predict irrational behavior based on past events.

    This is what we get when computer geeks are left too long among themselves. Let them actually go on a few dates and they will understand the power of irrationality. When computer geeks accurately model human dating interaction, they will have the beginnings of modeling decision making on financial matters. Human dating interactions are FAR more predictable than irrational behavior concerning money, even though they are far too often one and the same thing.

    Financial crises are nothing other than manifestations of irrational human behavior. Call me when computer geeks begin to predictably model irrational human behavior. Model human behavior in a strip club. Once you have done that, I will listen to your theories about modeling financial crises.

    Comment by Charles — July 1, 2016 @ 4:49 pm

  6. Imagine that you are a bank supervisor minding your own business, confident that the banks you oversee are financially secure because they meet all the regulatory minimums for liquidity and risk-based capital (measured at end of day, because you find intraday exposures too complex to bother with). Then the financial crisis hits and you are telling Treasury that, because of connectivity in uncleared markets (and cleared markets, which you had no idea existed), bail outs are needed.

    All of a sudden simplifying that connectivity is paramount. Not because it will yield efficiency dividends, or be bullet proof, but to restore that feeling of control of measurable risk that you used to have. As an afterthought you tack on trade reporting to capture end users and pretend that trade warehouses are systemically important.

    Just a cynical musing.

    Comment by noir — July 2, 2016 @ 5:59 pm

  7. @noir. If you are being cynical, you’ve come to the right place! As Lily Tomlin said, we try to be cynical, but it’s hard to keep up!

    The ProfessorComment by The Professor — July 3, 2016 @ 7:18 pm

  8. Networks like this evolve to be stable through exactly one mechanism: surviving occasional shocks. For any given shock, links and nodes which are too weak are cleared out. Over time, strong links and strong nodes develop — with the occasional shock clearing out the undergrowth or decayed oldies to make room for new nodes and links.

    In other words, it takes actual defaults and minor crashes to create a system resilient to major crashes.

    Networks like Yellen’s star are optimised for good conditions at the cost of fragility: effectively, it’s a glass cannon. What she wants is an ecosystem. Those cannot be engineered, but only managed with restraint. Think gardening: it’s more pruning, weeding and insecticide than it is fertiliser and bonsai trees.

    Comment by no — July 4, 2016 @ 7:09 am

  9. @Professor
    On the anti-cynicism front (imagine that). I live outside of the US in a place that has terrible problems with terrorism and other severe social problems. Today the people I work with gave me a beautiful big cake that looked like Uncle Sam’s hat and had a US flag on top and best birthday wishes for the US written under the flag. It was very touching.

    Comment by pahoben — July 4, 2016 @ 7:26 am

  10. @ no

    A parallel – or even parable – that occurs to me (because I’m just such a geek) is Japanese WW2 aircraft carriers (work with me here).

    At Midway in 1942, the Japanese Navy lost 47% of its entire aircraft carrier tonnage in one day when four fleet carriers were destroyed by just nine bombs on target. By way of comparison, that would be proportionately like the USN losing four CVNs in one day today.

    The bombs came through the wooden flight decks, blew up inside the hangars and set the avgas on fire. The hangar design had big sliding screens to let avgas fumes out and fresh air in. When the munitions went off too, more air came in through the holes. So the fires burned and burned, and when they did manage briefly to extinguish them in one area or another, the latent heat was so intense that the fuel vapor spontaneously reignited. You then got fuel-air explosions *inside* the hangar.

    The lesson the IJN took from this was that its carriers were vulnerable to dive bombers. The replacement generation of carriers was therefore designed to be proof against dive bombers. The first ship, ‘Taihō’, had a steel deck two inches thick, a hurricane bow and a completely enclosed hangar deck. Nothing would pierce the deck armor, and if it did, it would be contained. The avgas fuel tanks were incorporated into the ship’s structure and shielded by concrete (concrete. As a shipbuilding material. Who knew?) She was so heavily subdivided that normal ventilation didn’t work and elaborate air conditioning was fitted.

    Needless to say, the ‘Taihō’, a ship invulnerable to bombs dropped from an airplane, was torpedoed by a submarine. You have to laugh.

    At first nothing much happened, but the shock was transmitted through the concrete and fractured the avgas tanks. Fumes started to spread through the ship, so someone switched on the air conditioning to get rid of the smell. This distributed fuel vapor everywhere, and boooooooooom.

    The USN solution to the explodiness of aircraft carriers was to have proper safety procedures around the handling of fuel and munitions, and to make damage control everybody’s job. As a result, USN carrier design did not need to change except that it was found to be handy if they could be a bit bigger. Nonetheless the Big E was a viable warship in 1945 (having survived lots of bombs). The ships weren’t especially tough, they just had excellent crews and excellent emergency responses.

    When I think of dumbass regulatory initiatives I think of this. The IJN drew the wrong lesson from the 1942 disaster, and even if the lesson it drew had by luck been the right one, its measures failed on several levels. The carriers weren’t any less prone to fuel-air explosions inside the ship; they just came about in a new way. The carriers were actually made less effective by the measures because the weight of the armoured flight deck that high up meant they could only have single decker hangars, hence only about half the airplanes. Worse, adopting the wrong course made it impossible to adopt the right one. Damage control in the IJN was a department and continued to be, so it was common for the entire damage control team to get wiped out by the secondary explosions as they went to deal with the primary.

    The reasons for this are many but essentially you had a cargo-cult mentality (it looks like a carrier so it will function as one), an organizational propensity to do something–anything, a non-expert’s arrogance in presuming to know everything and to need no lessons from the enemy, and an institutional inability to admit to having fucked up.

    2008 was Midway. Another is no less likely, perhaps more so, and quite likely worse than 2008.

    Comment by Green As Grass — July 4, 2016 @ 11:43 am

  11. @Green-excellent analogy, and speaking as a fellow geek, one that (a) I am very familiar with, and (b) like a lot.

    I am currently digging into the theory of “normal accidents.” The Taiho is a classic example of that. In normal accidents, one damn thing leads to another, and frequently the very things that are intended to be safety measures turn a bad but routine situation into a catastrophe.

    They made the ship more complex, which (a) meant there were more things that could go wrong, and (b) there were more ways a crew reacting to a novel situation could well and truly fuck it up.

    I am not aware of ANY world financial regulator taking a normal accidents/complex systems approach to the analysis of how the post-2008 reconfiguration of the financial system could go horribly wrong. They are exactly like the Japanese naval engineers, taking comfort in their concrete deck and compartmentalization. And as you say, as a result the next crisis could well be worse.

    I think a couple of years ago we had a discussion about the role the volatile Tarakan crude the Taiho used as fuel played in the disaster.

    The ProfessorComment by The Professor — July 4, 2016 @ 1:44 pm

  12. Well, picking up on the damage control analogy, one of the planks of financial regulation is to make everyone (shareholders, direct participants and prospectively indirect participants) bear the burden. You don’t want your skin in the game singed.

    Comment by noir — July 4, 2016 @ 7:58 pm

  13. […] – Financial network topology and Women of System – a dangerous combination. […]

    Pingback by Further reading | Culture Across — July 4, 2016 @ 11:09 pm

  14. @ noir

    US damage control assigned every crew member a damage control station. You didn’t have one team tasked with putting out fires and shoring up bulkheads; everyone did it. It was simply not possible for a US ship to lose its damage control capability.

    The parable here is that the IJN assumption was that another disaster would start to play out exactly like the last one. So if — went their thinking — you forestalled that in its early stages, by having flight decks that bombs would only dent, buckle or bounce off of, you would avert the next disaster.

    In reality, there was no reason at all why the next disaster must start like the last. An approach that assumed it would risked making the system more vulnerable — in ways the IJN hadn’t thought of, because they weren’t thinking.

    Your ship, as well as being less useful qua aircraft carrier (your armored flight deck reduced the size of your air group, which reduced the number of fighters in the air, which made you more likely to get hit by air attack in the first place) thus has new vulnerabilities you don’t know about. Ventilation systems designed to remove flammable vapour did so by connecting the entire ship topologically to that vapour, so a flame in the kitchen could trigger an explosion 700 feet and 5 decks away.

    A thinking person who spoke his mind (few in the WW2 IJN did either) could have pointed out that last one. The explosion risk had been moved away from the hangars, but now existed *everywhere* else. But you weren’t allowed to say that, and the wrong answer they groupthought their way to meant there was no need to overhaul damage control procedures, because there’d be no further damage.

    The parallels to regulation post-GFC are excruciating. They even had a position limit on the deck spot IIRC.

    Comment by Green As Grass — July 5, 2016 @ 3:31 am

  15. @Green-When I was in the Naval Academy, damage control was drilled into us, literally and figuratively. It has long been a religion in the USN. On ships, usually the sharpest junior officer is DCO.

    Your remark about the Japanese reminds me that in evaluating damaged bombers that returned to the UK, the original focus was on strengthening the parts of the aircraft where the most bullet or flak holes were usually found. Then some clever guy figured out that er, those are the parts of the planes that are strong enough, as evidenced by the fact that the planes routinely made it back with damage in those places. It’s where damage was seldom found that needed fixing, because (under the assumption that hits were likely evenly distributed over an aircraft) the planes hit in those places seldom made it back.

    The ProfessorComment by The Professor — July 6, 2016 @ 10:44 am

  16. @ prof

    Non-linear thinking of that variety is a rare find. The oil company I worked for years ago used to develop or redevelop 50 or 100 filling stations a year and used to monitor “SPTE”, for “sales performance versus thruput estimate”. The thinking was that we need a throughput of X million gallons a year to justify spending Y million on the (re)development. So we should monitor how many gallons we’re actually getting, the sales performance against the thruput estimate, and consider remedial action if there’s a shortfall.

    So far so good, but the comedy was that if Fred’s Filling Station was only doing 80% of target, i.e. was failing, the remedial actions – sales promotions, whatever – were all focused on improving the volumes *at Fred’s Filling Station*. I made myself very unpopular – to the point where I had to switch to a completely different division eventually – by pointing out that the obvious places to promote were the ones that were succeeding, not failing, because what mattered was the total sales, not that they occur where we said they would. If you have 2 sites whose target was 100 apiece, and one actually does 50 and the other 140, which is the easier win – a 20% uplift at the crap one, or a 7% uplift at the one that’s booming, either of which delivers the missing 10? If you’ve got a kickass promotion that can boost sales by 20%, why would you not deploy it at the one doing 140?

    Did World War One generals get good results from reinforcing failure?

    I also questioned whether something in the way we incentivized sales reps might be causing our problem. If they deliberately overstated the volume, was their project more likely to be approved? Were they named Rep of the Year and given a better car for being a hot shot, so that it’s now embarrassing to fire them? Would they have been promoted out of blame’s way by the time the chickens came home to roost?

    What was notable about that organization – and you see this in companies, in regulators, in 1940s Asian navies and even in the Church – is that while you may well have imaginative/observant/lateral thinkers adjacent to the problem, they are of zero value unless they are listened to further up, where the executive choices are made. When an institution fucks up, it is pretty rare for *nobody* in it to have seen it coming. Usually, those who did were ignored, and/or told to STFU, or knew they would be so said nothing (the IJN).

    Delayering an organisation and weakening its hierarchies are the only solution to this, but with all the above, and especially with regulators, there is a structural bias in the exact opposite direction.

    Comment by Green As Grass — July 7, 2016 @ 9:44 am

  17. One problem of the simplified topology idea, in my view, is the idea that “complexity” is synonymous with “looks confusing.”

    Gregg Berman, the SEC’s sort of “physicist in residence” from many years ago, was very fond of talking about how the iPhone worked really well but was complex (and therefore complexity in the markets wasn’t the problem). Sure, the iPhone has a lot of parts, but on many scales it is, in fact, very well behaved.

    Interactive complexity is the more important parameter – that is, the likelihood of unexpected and confusing interactions. Your point about the hidden complexity of simplified topologies is a great one. Sure, by some criteria a certain topology may be simpler, but there’s so much that these models can’t capture that, for another set of parameters we care about, it may be deeply troubling.

    And when we start to muck with topology, as you pointed out, tight coupling is often the result.

    Comment by Chris Clearfield — July 7, 2016 @ 11:06 am
