Wednesday, 20 March 2013

Third (and final) excerpt...

The third (and, you'll all be pleased to hear, final!) excerpt of my book was published in Bloomberg today. It's titled "Toward a National Weather Forecaster for Finance" and explores (briefly) what might be possible in economics and finance if we created national (and international) centers devoted to data-intensive risk analysis and forecasting of socioeconomic "weather."

Before anyone thinks I'm crazy, let me make very clear that I'm using the term "forecasting" in its general sense, i.e. making useful predictions of potential risks as they emerge in specific areas, rather than predictions such as "the stock market will collapse at noon on Thursday." I think we can all agree that the latter kind of prediction is probably impossible (although Didier Sornette wouldn't agree), and certainly would be self-defeating were it made widely known. Weather forecasters make much less specific predictions all the time, for example of places and times where conditions will be ripe for powerful thunderstorms and tornadoes. These forecasts of potential risks are still valuable, and I see no reason why similar kinds of predictions shouldn't be possible in finance and economics. Of course, people already make such predictions about financial events all the time. I'm merely suggesting that, with effort and the devotion of considerable resources to collecting and sharing data and to building computational models, we could develop centers acting for the public good to make much better predictions on a more scientific basis.

As a couple of early examples, I'll point to the recent work on complex networks in finance, which I've touched on here and here. These are computationally intensive studies, demanding excellent data, which make it possible to identify systemically important financial institutions (and the links between them) more accurately than has been possible in the past. Much work remains to make this practically useful.

Another example is this recent and really impressive agent-based model of the US housing market, which has been used as a "post mortem" experimental tool to ask all kinds of "what if?" questions about the housing bubble and its causes, helping to tease out a better understanding of controversial questions. As the authors note, macroeconomists really didn't see the housing market as a likely source of large-scale macroeconomic trouble. This model has made it possible to ask and explore questions that cannot be addressed with conventional economic models:
 Not only were the Macroeconomists looking at the wrong markets, they might have been looking at the wrong variables. John Geanakoplos (2003, 2010a, 2010b) has argued that leverage and collateral, not interest rates, drove the economy in the crisis of 2007-2009, pushing housing prices and mortgage securities prices up in the bubble of 2000-2006, then precipitating the crash of 2007. Geanakoplos has also argued that the best way out of the crisis is to write down principal on housing loans that are underwater (see Geanakoplos-Koniak (2008, 2009) and Geanakoplos (2010b)), on the grounds that the loans will not be repaid anyway, and that taking into account foreclosure costs, lenders could get as much or almost as much money back by forgiving part of the loans, especially if stopping foreclosures were to lead to a rebound in housing prices.

There is, however, no shortage of alternative hypotheses and views. Was the bubble caused by low interest rates, irrational exuberance, low lending standards, too much refinancing, people not imagining something, or too much leverage? Leverage is the main variable that went up and down along with housing prices. But how can one rule out the other explanations, or quantify which is more important? What effect would principal forgiveness have on housing prices? How much would that increase (or decrease) losses for investors? How does one quantify the answer to that question?

Conventional economic analysis attempts to answer these kinds of questions by building equilibrium models with a representative agent, or a very small number of representative agents. Regressions are run on aggregate data, like average interest rates or average leverage. The results so far seem mixed. Edward Glaeser, Joshua Gottlieb, and Joseph Gyourko (2010) argue that leverage did not play an important role in the run-up of housing prices from 2000-2006. John Duca, John Muellbauer, and Anthony Murphy (2011), on the other hand, argue that it did. Andrew Haughwout et al (2011) argue that leverage played a pivotal role.

In our view a definitive answer can only be given by an agent-based model, that is, a model in which we try to simulate the behavior of literally every household in the economy. The household sector consists of hundreds of millions of individuals, with tremendous heterogeneity, and a small number of transactions per month. Conventional models cannot accurately calibrate heterogeneity and the role played by the tail of the distribution. ... only after we know what the wealth and income is of each household, and how they make their housing decisions, can we be confident in answering questions like: How many people could afford one house who previously could afford none? Just how many people bought extra houses because they could leverage more easily? How many people spent more because interest rates became lower? Given transactions costs, what expectations could fuel such a demand? Once we answer questions like these, we can resolve the true cause of the housing boom and bust, and what would happen to housing prices if principal were forgiven.

... the agent-based approach brings a new kind of discipline because it uses so much more data. Aside from passing a basic plausibility test (which is crucial in any model), the agent-based approach allows for many more variables to be fit, like vacancy rates, time on market, number of renters versus owners, ownership rates by age, race, wealth, and income, as well as the average housing prices used in standard models. Most importantly, perhaps, one must be able to check that basically the same behavioral parameters work across dozens of different cities. And then at the end, one can do counterfactual reasoning: what would have happened had the Fed kept interest rates high, what would happen with this behavioral rule instead of that.

The real proof is in the doing. Agent-based models have succeeded before in simulating traffic and herding in the flight patterns of geese. But the most convincing evidence is that Wall Street has used agent-based models for over two decades to forecast prepayment rates for tens of millions of individual mortgages.
This is precisely the kind of work I think can be geared up and extended far beyond the housing market, augmented with real-time data, and used to make valuable forecasting analyses. Indeed, it seems to me the obvious approach.
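To make the flavor of this concrete, here is a deliberately tiny sketch of an agent-based housing market in code. It is emphatically not the Geanakoplos model; every behavioral rule and number below is invented for illustration. The point is only to show how a leverage counterfactual ("what if buyers had needed 20 percent down?") gets asked in this style of model.

```python
# A deliberately tiny agent-based housing market. This is NOT the Geanakoplos
# model: every behavioral rule and number below is invented for illustration.
# Each household checks whether it can afford to buy at the current price,
# given its savings, its income and a maximum loan-to-value (leverage) cap;
# the price then adjusts to excess demand.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                                              # number of households
income = rng.lognormal(mean=10.8, sigma=0.6, size=N)     # annual income, $
wealth = rng.lognormal(mean=10.0, sigma=1.2, size=N)     # liquid savings, $

def simulate(max_ltv, years=10, price=200_000.0, rate=0.05):
    """Return the yearly price path for a given leverage cap."""
    path = []
    for _ in range(years):
        down_payment = price * (1.0 - max_ltv)
        annual_interest = price * max_ltv * rate
        can_buy = (wealth >= down_payment) & (annual_interest <= 0.3 * income)
        demand = can_buy.mean()                  # fraction of would-be buyers
        supply = 0.05                            # fraction of stock for sale
        price *= 1.0 + 0.5 * (demand - supply)   # sluggish price adjustment
        path.append(price)
    return path

loose = simulate(max_ltv=0.97)    # bubble-era style leverage
tight = simulate(max_ltv=0.80)    # counterfactual: 20 percent down required
print("final price with loose leverage:", round(loose[-1]))
print("final price with tight leverage:", round(tight[-1]))
```

Even a toy like this makes the point of the excerpt concrete: because every household is represented, you can change one rule (here the leverage cap) and re-run history, which is exactly the kind of counterfactual the authors describe.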
 

Tuesday, 19 March 2013

Second excerpt...

A second excerpt of my forthcoming book Forecast is now online at Bloomberg. It's a greatly condensed text assembled from various parts of the book. One interesting exchange in the comments from yesterday's excerpt:
Food For Thought commented... Before concluding that economic theory does not include analysis of unstable equilibria check out the vast published findings on unstable equilibria in the field of International Economics. Once again we have someone touching on one tiny part of economic theory and drawing overreaching conclusions.

I would expect a scientist would seek out more evidence before jumping to conclusions.
to which one Jack Harllee replied...
Sure, economists have studied unstable equilibria. But that's not where the profession's heart is. Krugman summarized rather nicely in 1996, and the situation hasn't changed much since then:
"Personally, I consider myself a proud neoclassicist. By this I clearly don't mean that I believe in perfect competition all the way. What I mean is that I prefer, when I can, to make sense of the world using models in which individuals maximize and the interaction of these individuals can be summarized by some concept of equilibrium. The reason I like that kind of model is not that I believe it to be literally true, but that I am intensely aware of the power of maximization-and-equilibrium to organize one's thinking - and I have seen the propensity of those who try to do economics without those organizing devices to produce sheer nonsense when they imagine they are freeing themselves from some confining orthodoxy. ...That said, there are indeed economists who regard maximization and equilibrium as more than useful fictions. They regard them either as literal truths - which I find a bit hard to understand given the reality of daily experience - or as principles so central to economics that one dare not bend them even a little, no matter how useful it might seem to do so."
This response fairly well captures my own position. I argue in the book that the economics profession has been fixated far too strongly on equilibrium models, and much of the time simply assumes the stability of such equilibria without any justification. I certainly don't claim that economists have never considered unstable equilibria (or examined models with multiple equilibria). But any examination of the stability of an equilibrium demands some analysis of the dynamics of the system away from equilibrium, and this has not (to say the least) been a strong focus of economic theory.

Monday, 18 March 2013

New territory for game theory...

This new paper in PLoS looks fascinating. I haven't yet had time to study it in detail, but it appears to offer an important demonstration that, when thinking about human behavior in strategic games, fixed-point or mixed-strategy Nash equilibria can be far too restrictive and misleading, ruling out much more complex dynamics which in reality can occur even for rational people playing simple games:

Abstract

Recent theories from complexity science argue that complex dynamics are ubiquitous in social and economic systems. These claims emerge from the analysis of individually simple agents whose collective behavior is surprisingly complicated. However, economists have argued that iterated reasoning–what you think I think you think–will suppress complex dynamics by stabilizing or accelerating convergence to Nash equilibrium. We report stable and efficient periodic behavior in human groups playing the Mod Game, a multi-player game similar to Rock-Paper-Scissors. The game rewards subjects for thinking exactly one step ahead of others in their group. Groups that play this game exhibit cycles that are inconsistent with any fixed-point solution concept. These cycles are driven by a “hopping” behavior that is consistent with other accounts of iterated reasoning: agents are constrained to about two steps of iterated reasoning and learn an additional one-half step with each session. If higher-order reasoning can be complicit in complex emergent dynamics, then cyclic and chaotic patterns may be endogenous features of real-world social and economic systems.

...and from the conclusions, ...

Cycles in the belief space of learning agents have been predicted for many years, particularly in games with intransitive dominance relations, like Matching Pennies and Rock-Paper-Scissors, but experimentalists have only recently started looking to these dynamics for experimental predictions. This work should function to caution experimentalists of the dangers of treating dynamics as ephemeral deviations from a static solution concept. Periodic behavior in the Mod Game, which is stable and efficient, challenges the preconception that coordination mechanisms must converge on equilibria or other fixed-point solution concepts to be promising for social applications. This behavior also reveals that iterated reasoning and stable high-dimensional dynamics can coexist, challenging recent models whose implementation of sophisticated reasoning implies convergence to a fixed point [13]. Applied to real complex social systems, this work gives credence to recent predictions of chaos in financial market game dynamics [8]. Applied to game learning, our support for cyclic regimes vindicates the general presence of complex attractors, and should help motivate their adoption into the game theorist’s canon of solution concepts
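The cycling mechanism is simple enough to see in a few lines of code. Here's a minimal sketch of a Mod-Game-style simulation (my own toy, not the authors' code; the learning rule and all parameters are invented): each player tries to stay a few steps ahead of where the crowd sat last round, and the group's choices rotate endlessly around the circle instead of settling onto any fixed point.

```python
# Toy Mod-Game dynamics: n players each pick a number in 0..m-1 and score a
# point for every player whose choice is exactly one below theirs (mod m).
# Here each player guesses the crowd will stay near last round's (circular)
# mean and jumps a personal number of steps ahead of it. The rule and the
# parameters are invented purely to illustrate the cycling.
import numpy as np

rng = np.random.default_rng(1)
n, m, rounds = 20, 24, 60
depth = rng.uniform(1.0, 3.0, size=n)     # steps of iterated reasoning per player
choices = rng.integers(0, m, size=n)      # round-1 choices are random

def circular_mean(x, m):
    """Mean position of the choices on the circle of m options."""
    ang = 2 * np.pi * x / m
    return (np.angle(np.exp(1j * ang).mean()) / (2 * np.pi) * m) % m

for t in range(rounds):
    crowd = circular_mean(choices, m)
    if t % 10 == 0:
        print(f"round {t:2d}: choices centred near option {crowd:4.1f}")
    choices = np.round(crowd + depth).astype(int) % m   # jump ahead of the crowd
```

Run it and the centre of the group's choices advances by roughly the average reasoning depth each round, wrapping around the circle again and again: a crude version of the stable cycles the experiment reports.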

Book excerpt...

Bloomberg is publishing a series of excerpts from my forthcoming book, Forecast, which is now due out in only a few days. The first one was published today.

Secrets of Cyprus...

Just something to think about when scratching your head over the astonishing developments in Cyprus, which seem to be more or less intentionally designed to touch off bank runs in several European nations. Why? Courtesy of Zero Hedge:
...news is now coming out that the Cyprus parliament has postponed the decision and may in fact not be able to reach agreement. They may tinker with the percentages, to penalize smaller savers less (and larger savers more). However, the damage is already done. They have hit their savers with a grievous blow, and this will do irreparable harm to trust and confidence.

As well it should! In more civilized times, there was a long established precedent regarding the capital structure of a bank. Equity holders incur the first losses as they own the upside profits and capital gains. Next come unsecured creditors who are paid a higher interest rate, followed by secured bondholders who are paid a lower interest rate. Depositors are paid the lowest interest rate of all, but are assured to be made whole, even if it means every other class in the capital structure is utterly wiped out.

As caveat to the following paragraph, I acknowledge that I have not read anything definitive yet regarding bondholders. I present my assumptions (which I think are likely correct).

As with the bankruptcy of General Motors in the US, it looks like the rule of law and common sense has been recklessly set aside. The fruit from planting these bitter seeds will be harvested for many years hence. As with GM, political expediency drives pragmatic and ill-considered actions. In Cyprus, bondholders include politically connected banks and sovereign governments.  Bureaucrats decided it would be acceptable to use depositors like sacrificial lambs. The only debate at the moment seems to be how to apportion the damage amongst “rich” and “non-rich” depositors.

Also, much more on the matter here, mostly expressing similar sentiments. And do read The War On Common Sense by Tim Duy:
This weekend, European policymakers opened up a new front in their ongoing war on common sense.  The details of the Cyprus bailout included a bail-in of bank depositors, small and large alike.  As should have been expected, chaos ensued as Cypriots rushed to ATMs in a desperate attempt to withdraw their savings, the initial stages of what is likely to become a run on the nation's banks.  Shocking, I know.  Who could have predicted that the populous would react poorly to an assault on depositors?

Everyone.  Everyone would have predicted this.  Everyone except, apparently, European policymakers....
 

Friday, 15 March 2013

Beginning of the end for big banks?

If the biggest banks are too big to fail, too connected to fail, too important to prosecute, and also too complex to manage, it would seem sensible to scale them down in size, and to reduce their centrality and the complexity of their positions. Simon Johnson has an encouraging article suggesting that at least some of this may actually be about to happen: 
The largest banks in the United States face a serious political problem. There has been an outbreak of clear thinking among officials and politicians who increasingly agree that too-big-to-fail is not a good arrangement for the financial sector.

Six banks face the prospect of meaningful constraints on their size: JPMorgan Chase, Bank of America, Citigroup, Wells Fargo, Goldman Sachs and Morgan Stanley. They are fighting back with lobbying dollars in the usual fashion – but in the last electoral cycle they went heavily for Mitt Romney (not elected) and against Elizabeth Warren and Sherrod Brown for the Senate (both elected), so this element of their strategy is hardly prospering.

What the megabanks really need are some arguments that make sense. There are three positions that attract them: the Old Wall Street View, the New View and the New New View. But none of these holds water; the intellectual case for global megabanks at their current scale is crumbling.
Most encouraging is the emergence of a real discussion over the implicit taxpayer subsidy given to the largest banks. See also this editorial in Bloomberg from a few weeks ago:
On television, in interviews and in meetings with investors, executives of the biggest U.S. banks -- notably JPMorgan Chase & Co. Chief Executive Jamie Dimon -- make the case that size is a competitive advantage. It helps them lower costs and vie for customers on an international scale. Limiting it, they warn, would impair profitability and weaken the country’s position in global finance.

So what if we told you that, by our calculations, the largest U.S. banks aren’t really profitable at all? What if the billions of dollars they allegedly earn for their shareholders were almost entirely a gift from U.S. taxpayers?

... The top five banks -- JPMorgan, Bank of America Corp., Citigroup Inc., Wells Fargo & Co. and Goldman Sachs Group Inc. -- account for $64 billion of the total subsidy, an amount roughly equal to their typical annual profits (see tables for data on individual banks). In other words, the banks occupying the commanding heights of the U.S. financial industry -- with almost $9 trillion in assets, more than half the size of the U.S. economy -- would just about break even in the absence of corporate welfare. In large part, the profits they report are essentially transfers from taxpayers to their shareholders.
So much for the theory that the big banks need to pay big bonuses so they can attract that top financial talent on which their success depends. Their success seems to depend on a much simpler recipe.

This paper also offers some interesting analysis of different practical steps that might be taken to end this ridiculous situation.

Tuesday, 12 March 2013

Megabanks: too complex to manage

Having come across Chris Arnade, I'm currently reading everything I can find by him. On this blog I've touched on the matter of financial complexity many times, but mostly in the context of the network of linked institutions. I've never considered the possibility that the biggest financial institutions are themselves now too complex to be managed in any effective way. In this great article at Scientific American, Arnade (who has 20 years' experience working on Wall Street) makes a convincing case that the largest banks are now invested in so many diverse products of such immense complexity that they cannot possibly manage their risks:
This is far more common on Wall Street than most realize. Just last year JP Morgan revealed a $6 billion loss from a convoluted investment in credit derivatives. The post mortem revealed that few, including the actual trader, understood the assets or the trade. It was even found that an error in a spreadsheet was partly responsible.

Since the peso crisis, banks have become massive, bloated with new complex financial products unleashed by deregulation. The assets at US commercial banks have increased five times to $13 trillion, with the bulk clustered at a few major institutions. JP Morgan, the largest, has $2.5 trillion in assets.

Much has been written about banks being “too big to fail.” The equally important question is are they “too big to succeed?” Can anyone honestly risk manage $2 trillion in complex investments?

To answer that question it’s helpful to remember how banks traditionally make money: They take deposits from the public, which they lend out longer term to companies and individuals, capturing the spread between the two.

Managing this type of bank is straightforward and can be done on spreadsheets. The assets are assigned a possible loss, with the total kept well beneath the capital of the bank. This form of banking dominated for most of the last century, until the recent move towards deregulation.

Regulations of banks have ebbed and flowed over the years, played out as a fight between the banks’ desire to buy a larger array of assets and the government’s desire to ensure banks’ solvency.

Starting in the early 1980s the banks started to win these battles resulting in an explosion of financial products. It also resulted in mergers. My old firm, Salomon Brothers, was bought by Smith Barney, which was bought by Citibank.

Now banks no longer just borrow to lend to small businesses and home owners, they borrow to trade credit swaps with other banks and hedge funds, to buy real estate in Argentina, super senior synthetic CDOs, mezzanine tranches of bonds backed by the revenues of pop singers, and yes, investments in Mexico pesos. Everything and anything you can imagine.

Managing these banks is no longer simple. Most assets now owned have risks that can no longer be defined by one or two simple numbers. They often require whole spreadsheets. Mathematically they are vectors or matrices rather than scalars.

Before the advent of these financial products, the banks’ profits were proportional to the total size of their assets. The business model scaled up linearly. There were even cost savings associated with a larger business.

This is no longer true. The challenge of risk managing these new assets has broken that old model.

Not only are the assets themselves far harder to understand, but the interplay between the different assets creates another layer of complexity.

In addition, markets are prone to feedback loops. A bank owning enough of an asset can itself change the nature of the asset. JP Morgan’s $6 billion loss was partly due to this effect. Once they had began to dismantle the trade the markets moved against them. Put another way, other traders knew JP Morgan were in pain and proceeded to ‘shove it in their faces’.

Bureaucracy creates another layer, as does the much faster pace of trading brought about by computer programs. Many risk managers will privately tell you that knowing what they own is as much a problem as knowing the risk of what is owned.

Put mathematically, the complexity now grows non-linearly. This means, as banks get larger, the ability to risk-manage the assets grows much smaller and more uncertain, ultimately endangering the viability of the business.

Strategic recklessness

Some poignant (and infuriating) insight from Chris Arnade on Why it's smart to be reckless on Wall St.:
... asymmetry in pay (money for profits, flat for losses) is the engine behind many of Wall Street’s mistakes. It rewards short-term gains without regard to long-term consequences. The results? The over-reliance on excessive leverage, banks that are loaded with opaque financial products, and trading models that are flawed. ... Regulation is largely toothless if banks and their employees have the financial incentive to be reckless.

Sunday, 10 March 2013

Networks in finance

Just over a week ago, the journal Nature Physics published an unusual issue. In addition to the standard papers on technical physics topics, this issue contained a section with a special focus on finance, especially on complex networks in finance. I'm sure most readers of this blog won't have access to the papers in this issue, so I thought I'd give a brief summary of them here.

It's notable that these aren't papers written just by physicists, but represent the outcome of collaborations between physicists and a number of prominent economists (Nobel Prize winner Joseph Stiglitz among them) and several regulators from important central banks. The value of insight coming out of physics-inspired research into the collective dynamics of financial markets is really starting to be recognized by people who matter (even if most academic economists probably won't wake up to it for several decades).

I've written about this work in my most recent column for Bloomberg, which will be published on Sunday night EST. I was also planning to give here some further technical detail on one very important paper to which I referred in the Bloomberg article, but due to various other demands in the past few days I haven't quite managed that yet. The paper in question, I suspect, is unknown to almost all financial economists, but will, I hope, gain wide attention soon. It essentially demonstrates that the theorists' ideal of complete, arbitrage free markets in equilibrium isn't a nirvana of market efficiency, as is generally assumed. Examination of the dynamics of such a market, even within the neo-classical framework, shows that any approach to this efficient ideal also brings growing instability and likely market collapse. The ideal of complete markets, in other words, isn't something we should be aiming for. Here's some detail on that work from something I wrote in the past (see the paragraphs referring to the work of Matteo Marsili and colleagues).

Now, the Nature Physics special issue.

The first key paper is "Complex derivatives," by Stefano Battiston, Guido Caldarelli, Co-Pierre Georg, Robert May and Joseph Stiglitz. It begins by noting that the volume of derivatives outstanding fell briefly following the crisis of 2008, but is now increasing again. According to the usual thinking in economics and finance, this growth of the market should be a good thing. If people are entering into these contracts, it must be for a reason, i.e. to hedge their risks or to exploit opportunities, and these deals should lead to beneficial economic exchange. But, as Battiston and colleagues note, this may not actually be true:
By engaging in a speculative derivatives market, players can potentially amplify their gains, which is arguably the most plausible explanation for the proliferation of derivatives in recent years. Needless to say, losses are also amplified. Unlike bets on, say, dice — where the chances of the outcome are not affected by the bet itself — the more market players bet on the default of a country, the more likely the default becomes. Eventually the game becomes a self-fulfilling prophecy, as in a bank run, where if each party believes that others will withdraw their money from the bank, it pays each to do so. More perversely, in some cases parties have incentives (and opportunities) to precipitate these events, by spreading rumours or by manipulating the prices on which the derivatives are contingent — a situation seen most recently in the London Interbank Offered Rate (LIBOR) affair.

Proponents of derivatives have long argued that these instruments help to stabilize markets by distributing risk, but it has been shown recently that in many situations risk sharing can also lead to instabilities.

The bulk of this paper is devoted to supporting this idea, examining several recent independent lines of research which indicate that more derivatives can make markets less stable. This work shares some ideas with theoretical ecology, where it was once thought (40 years ago) that more complexity in an ecology should generally confer stability. Later work suggested instead that complexity (at least too much of it) tends to breed instability. According to a number of recent studies, the same seems to be true in finance:
It now seems that the proliferation of financial instruments induces strong fluctuations and instabilities for similar reasons. The basis for pricing complex derivatives makes several conventional assumptions that amount to the notion that trading activity does not feed back on the dynamical behaviour of markets. This idealized (and unrealistic) model can have the effect of masking potential instabilities in markets. A more detailed picture, taking into account the effects of individual trades on prices, reveals the onset of singularities as the number of financial instruments increases.
The remainder of the paper explores various measures that might be taken, through regulation, to manage the complexity of the financial network and encourage its stability. Stability isn't something we should expect to occur on its own; it demands real attention to detail. Blind adherence to the idea that "more derivatives is good" is a recipe for trouble.

The second paper in the Nature Physics special issue is "Reconstructing a credit network," by Guido Caldarelli, Alessandro Chessa, Andrea Gabrielli, Fabio Pammolli and Michelangelo Puliga. This work addresses an issue that isn't quite as provocative as the value of the derivatives industry, but the topic may be of extreme importance in future efforts to devise effective financial regulations. The key insight coming from network science is that the architecture of a network -- its topology -- has a huge impact on how influences (such as financial distress) spread through the network. Hence, global network topology is intimately linked up with system stability; knowledge of global structure is absolutely essential to managing systemic risk. Unfortunately, the history of law and finance is such that much of the information that would be required to understand the real web of links between financial institutions remains private, hidden, unknown to the public or to regulators.

The best way to overcome this is certainly to make the information public. When financial institutions undertake transactions among themselves, the rest of us are also affected and our economic well-being potentially put at risk. This information should be public knowledge, because it bears on financial stability, which is a public good. However, in the absence of new legislation to make this happen, regulators can right now turn to more sophisticated methods to help reconstruct a more complete picture of global financial networks, filling in the missing details. This paper, written by several key experts in this technical area, reviews what is now possible and how these methods might best be put to use by regulators in the near future.
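To give a feel for what "reconstruction" means here, the sketch below shows one of the simplest ideas in this literature (not necessarily the method the paper itself favors): take each bank's known totals of interbank lending and borrowing and fill in the most non-committal, maximum-entropy matrix of bilateral exposures consistent with those totals, using iterative proportional fitting. The numbers are invented.

```python
# One of the simplest reconstruction ideas in this literature (not necessarily
# the method the paper itself advocates): given only each bank's total
# interbank lending and borrowing, fill in the maximum-entropy matrix of
# bilateral exposures consistent with those totals, using iterative
# proportional fitting (the RAS algorithm). The totals below are invented.
import numpy as np

total_lent     = np.array([100.0, 40.0, 25.0, 10.0])   # known row sums
total_borrowed = np.array([ 60.0, 70.0, 30.0, 15.0])   # known column sums

W = np.outer(total_lent, total_borrowed)    # initial guess
np.fill_diagonal(W, 0.0)                    # banks don't lend to themselves

for _ in range(200):                        # alternately rescale rows and columns
    W *= (total_lent / W.sum(axis=1))[:, None]
    W *= (total_borrowed / W.sum(axis=0))[None, :]

print(np.round(W, 1))                                           # estimated exposures
print(np.round(W.sum(axis=1), 1), np.round(W.sum(axis=0), 1))   # marginals now match
```

A known weakness of this simplest approach is that it spreads exposures too evenly across the network and so can understate contagion risk, which is part of why the more sophisticated reconstruction methods matter.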

Finally, the third paper in the Nature Physics special issue is "The power to control," by Marco Galbiati, Danilo Delpini and Stefano Battiston. "Control" is a word you rarely hear in the context of financial markets, I suppose because the near-religion of the "free market" has made "control" seem like an idea for "communists" or at least "socialists" (whatever that means). But regulation of any sort -- laws, institutions, even social norms and accepted practices -- represents some kind of "control" placed on individuals and firms with the aim of better outcomes for society at large. We need sensible control. How to achieve it?

Of course, "control" has a long history in engineering science where it is the focus of an extensive and quite successful "control theory." This paper reviews some recent work which has extended control theory to complex networks. One of the key questions is if the dynamics of large complex networks might be controlled, or at least strongly steered, by influencing only a small subset of the elements making up the network, and perhaps not even those that seem to be the most significant. This is, I think, clearly a promising area for further work. Let's take the insight of a century and more of control theory and ask if we can't use that to help prevent, or give early warnings of, the kinds of disasters that have hit finance in the past decade.

Much of the work in this special issue has originated out of a European research project with the code name FOC, which stands for, well, I'm not exactly sure what it stands for (the project describes itself as "Forecasting Financial Crises" which seems more like FFC to me). In any event, I know some of these people and apart from the serious science they have a nice sense of humor. Perhaps the acronym FOC was even chosen for another reason. As I recall, one of their early meetings a few years ago was announced as "Meet the FOCers." Humor in no way gets in the way of good science.

Friday, 8 March 2013

The intellectual equivalent of crack cocaine

That's what the British historian Geoffrey Elton once called Post-Modernist Philosophy, i.e. that branch of modern philosophy/literary criticism typically characterized by a, shall we say, less than wholehearted commitment to clarity and simplicity of expression. The genre is represented in the libraries by reams of apparently meaningless prose, the authors of which claim to get at truths that would otherwise be out of reach of ordinary language. Here's a nice example, the product of the subtle mind of one Felix Guattari:
“We can clearly see that there is no bi-univocal correspondence between linear signifying links or archi-writing, depending on  the author, and this multireferential, multi-dimensional machinic catalysis. The symmetry of scale, the transversality, the pathic non-discursive character of their expansion: all these dimensions remove us from the logic of the excluded middle and reinforce us in our dismissal of the ontological binarism we criticised previously.”
I'm with Elton. This writer, it seems to me, is up to no good, trying to pull the wool over the reader's eyes, using confusion as a weapon to persuade the reader of his superior insight. You read it, you don't quite get it (or even come close to getting it), and it is then tempting to conclude that whatever he is saying, as it is beyond your vision, must be exceptionally deep or subtle or complex, too much for you to grasp.

You need some self-confidence to come instead to the other logically possible conclusion -- that the text is actually purposeful nonsense, all glitter and no content, an affront against the normal, productive use of language for communication; "crack cocaine" because the writer gets the high of appearing deep and earning accolades without putting in the hard work of actually writing something insightful.

Having said that, let me also say that I am not in any way an expert in postmodernist philosophy and there may be more to the thinking of some of its representatives than this Guattari quote would suggest.

In any event, I think there's something deeply similar here to John Kay's point in this essay about Warren Buffett. As he notes, Buffett has been spectacularly successful and hence the subject of vast media attention, yet, paradoxically, he doesn't seem to have inspired an army of investors who copy his strategy:
... the most remarkable thing about Mr Buffett’s achievement is not that no one has rivalled his record. It is that almost no one has seriously tried to emulate his investment style. The herd instinct is powerful, even dominant, among asset managers. But the herd is not to be found at Mr Buffett’s annual jamborees in Omaha: that occasion is attended only by happy shareholders and admiring journalists.
Buffett's strategy, as Kay describes it, is a decidedly old-fashioned one based on close examination of the fundamentals of the companies in which he invests:
If he is a genius, it is the genius of simplicity. No special or original insight is needed to reach his appreciation of the nature of business success. Nor is it difficult to recognise that companies such as American Express, Coca-Cola, IBM, Wells Fargo, and most recently Heinz – Berkshire’s largest holdings – meet his criteria. ... Which leads back to the question of why Berkshire has so few imitators. After all, another crucial insight of business economics is that profitable strategies that can be replicated are imitated until returns from them are driven down to normal levels. Why do the majority of investment managers hold many more stocks, roll them over far more often, engage in far more complex transactions – and derive less consistent and profitable results?
The explanation, Kay suggests, is that Buffett's strategy also demands an awful lot of hard work, and it's easier for many investment experts to follow the rather different strategy of Felix Guattari: not actually working to achieve superior insight, but working to make it seem as if they do, mostly by obscuring their actual strategies in a bewildering cloud of complexity. Sometimes, as in the case of Bernie Madoff, the obscuring complexity can even take the very simple form of essentially no information whatsoever. People who are willing to believe need very little help:
... the deeper issue is that complexity is intrinsic to the product many money managers sell. How can you justify high fees except by reference to frequent activity, unique insights and arcana? But Mr Buffett understands the limitations of his knowledge. That appreciation distinguishes people who are very clever from those who only think they are.
One final comment. I think finance is rife with this kind of psychological problem. But I do not at all believe that science is somehow immune from these effects. I've encountered plenty of works in physics and applied mathematics that couch their results in beautiful mathematics, demonstrate formidable skill in building a framework of theory, and yet seem utterly useless in actually solving or giving insight into any real problem. Science also has a weak spot for style over content.

Obscurity and simplicity

The British economist John Kay is one of my favorite sources of balanced and deeply insightful commentary on an extraordinary number of topics. I wish I could write as easily and productively as he does. He has a great post that is, in particular, about Warren Buffett, but more generally about an intellectual affliction in finance whereby purposeful obscurity often wins out at the expense of honesty and simplicity. Well worth a read. BUT... I think this actually goes way beyond finance. It's part of the human condition... more on that tomorrow....

Wednesday, 20 February 2013

The housing market in pictures

A blogger named Irvine Renter (aka Larry Roberts) knows an awful lot about the housing market and writes about it here. He also produces (and links to) some great cartoons, like this one...



and this one...



and this one...
 

and this one...


Monday, 18 February 2013

A real model of Minsky

Noah Smith has a wonderfully informative post on the business cycle in economics. He's looking at the question of whether standard macroeconomic theories view the episodic ups and downs of the economy as the consequence of a real cycle, something arising from positive feedbacks that drive persisting oscillations all on their own, or whether they instead view these fluctuations as the consequence of external shocks to the system. As he notes, the tendency in macroeconomics has very much been the latter:
When things like this [cycles] happen in nature - like the Earth going around the Sun, or a ball bouncing on a spring, or water undulating up and down - it comes from some sort of restorative force. With a restorative force, being up high is what makes you more likely to come back down, and being low is what makes you more likely to go back up. Just imagine a ball on a spring; when the spring is really stretched out, all the force is pulling the ball in the direction opposite to the stretch. This causes cycles.

It's natural to think of business cycles this way. We see a recession come on the heels of a boom - like the 2008 crash after the 2006-7 boom, or the 2001 crash after the late-90s boom - and we can easily conclude that booms cause busts.

So you might be surprised to learn that very, very few macroeconomists think this! And very, very few macroeconomic models actually have this property.

In modern macro models, business "cycles" are nothing like waves. A boom does not make a bust more likely, nor vice versa. Modern macro models assume that what looks like a "cycle" is actually something called a "trend-stationary stochastic process" (like an AR(1)). This is a system where random disturbances ("shocks") are temporary, because they decay over time. After a shock, the system reverts to the mean (i.e., to the "trend"). This is very different from harmonic motion - a boom need not be followed by a bust - but it can end up looking like waves when you graph it...
I think this is interesting and deserves some further discussion. Take an ordinary pendulum. Give such a system a kick and it will swing for a time, but eventually the motion will damp away. For a while, high now does portend low in the near future, and vice versa. But this pendulum won't start swinging this way on its own, nor will it persist in swinging over long periods of time unless repeatedly kicked by some external force.

This is in fact a system of just the kind Noah is describing. Such a pendulum (taken in the linear regime) is akin to the AR(1) autoregressive process that enters into macroeconomic models: it acts essentially as a filter on the source of shocks. The response of the system to a stream of random shocks can have a harmonic component, which can make the output look roughly like cycles, as Noah mentioned. For an analogy, think of a big brass bell. This is a pendulum in the abstract, as it has internal vibratory modes that, once excited, damp away over time. Hang this bell in a storm and, as it receives a barrage of shocks, you'll hear a ringing that tells you more about the bell than about the storm.
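Here's a quick numerical illustration of that "bell in a storm" point, with made-up parameters: a damped oscillator driven by pure white noise produces irregular waves whose dominant period reflects the oscillator itself, not anything in the shocks.

```python
# A quick numerical illustration of the 'bell in a storm' point, with made-up
# parameters: a damped oscillator kicked by pure white noise rings at (roughly)
# its own natural frequency, so the output tells you about the bell, not the storm.
import numpy as np

rng = np.random.default_rng(2)
dt, steps = 0.1, 5000
omega, damping = 1.0, 0.1        # natural frequency and damping of the 'bell'
x, v = 0.0, 0.0
trace = np.empty(steps)

for i in range(steps):
    shock = rng.normal()                              # the 'storm': white noise
    v += (-2 * damping * v - omega**2 * x + shock) * dt
    x += v * dt
    trace[i] = x

freqs = np.fft.rfftfreq(steps, d=dt) * 2 * np.pi      # angular frequencies
power = np.abs(np.fft.rfft(trace))**2
peak = freqs[power[1:].argmax() + 1]                  # skip the zero-frequency bin
print("input spectrum is flat; response peaks near angular frequency", round(float(peak), 2))
```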

Still, to get really interesting cycles you need to go beyond the ordinary pendulum. You need a system capable of creating oscillatory behavior all on its own. In dynamical systems theory, this means a system with a limit cycle in its dynamics, which settles down, in the absence of persisting perturbation, to cyclic behavior rather than to a fixed point. The existence of such a limit cycle generally implies that the system will have an unstable fixed point -- a state that seems superficially like an equilibrium, but which in fact will always dissolve away into cyclic behavior over time. Mathematically, this is the kind of situation one ought to think about when considering the possibility that natural instabilities drive oscillations in economics. Perhaps the equilibrium of the market is simply unstable, and the highs and lows of the business cycle reflect some natural limit cycle?
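For contrast, here is the same kind of quick sketch for a system with an unstable fixed point and a limit cycle: the textbook van der Pol oscillator, chosen purely for illustration. With no shocks at all, a tiny displacement from the "equilibrium" grows into a self-sustaining oscillation of fixed size.

```python
# A system with an unstable fixed point and a limit cycle needs no shocks at
# all. Here is the textbook van der Pol oscillator (chosen purely for
# illustration): a tiny displacement from the fixed point at x = 0 grows into
# a self-sustaining oscillation of fixed amplitude.
import numpy as np

dt, steps, mu = 0.01, 20000, 1.0
x, v = 0.01, 0.0                          # start almost exactly at the fixed point
abs_x = np.empty(steps)

for i in range(steps):
    v += (mu * (1 - x**2) * v - x) * dt   # 'anti-damping' near x = 0
    x += v * dt
    abs_x[i] = abs(x)

print("largest |x| early in the run:", round(float(abs_x[:500].max()), 3))
print("largest |x| late in the run :", round(float(abs_x[-5000:].max()), 3))
```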

Noah mentions the work of Steve Keen, who has developed models along such lines. As far as I understand, these are generally low-dimensional models with limit-cycle behavior, and I expect they may be very instructive. But Noah also makes a good point that the data on the business cycle really doesn't show a clear harmonic signal at any one specific frequency. The real world is messier. An alternative to low-dimensional models written in terms of aggregate economic variables is to build agent-based models (of much higher dimension) to explore how natural human behavior such as trend following might lead to instabilities at least qualitatively like those we see.

For some recent work along these lines, take a look at this paper by Blake LeBaron, which attempts to flesh out Hyman Minsky's well-known story of inherent market instability in an agent-based model. Here's the basic idea, as LeBaron describes it:
Minksy conjectures that financial markets begin to build up bubbles as investors become increasingly overconfident about markets. They begin to take more aggressive positions, and can often start to increase their leverage as financial prices rise. Prices eventually reach levels which cannot be sustained either by correct, or any reasonable forecast of future income streams on assets. Markets reach a point of instability, and the over extended investors must now begin to sell, and are forced to quickly deleverage in a fire sale like situation. As prices fall market volatility increases, and investors further reduce risky positions. The story that Minsky tells seems compelling, but we have no agreed on approach for how to model this, or whether all the pieces of the story will actually fit together. The model presented in this paper tries to bridge this gap. 
The model is, in crude terms, like many I've described earlier on this blog. The agents are adaptive and try to learn the most profitable ways to behave. They are also heterogeneous in their behavior -- some rely more on perceived fundamentals to make their investment decisions, while others follow trends. The agents respond to what has recently happened in the market, and the market reality then emerges out of their collective behavior. That reality, in some of the runs LeBaron explores, shows natural, irregular cycles of bubbles and subsequent crashes of the sort Minsky envisioned. The figure below, for example, shows data for the stock price, weekly returns and trading volume as they fluctuate over a 10-year period of the model:


Now, it is not at all surprising that one can make a computational model to generate dynamics of this kind. But if you read the paper, LeBaron has tried hard to choose the various parameters to fit realistically with what is known about human learning dynamics and the behavior of different kinds of market players. The model also does a good job of reproducing many of the key statistical features of financial time series, including long-range deviations from fundamentals, volatility persistence, and fat-tailed return distributions. So it generates Minsky-like fluctuations in what is arguably a plausible setting (although I'm sure experts will quibble with some details).
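For readers who like to see mechanisms in code, here is a bare-bones fundamentalist/chartist toy market in the same general spirit. To be clear, this is not LeBaron's model; every rule and parameter below is invented. It only illustrates how a tug-of-war between trend followers and fundamentalists, with a strategy weight drifting toward whichever group recently profited, produces irregular swings away from fundamental value.

```python
# A bare-bones fundamentalist/chartist toy market (NOT LeBaron's model; all
# rules and parameters are invented). Chartists extrapolate the last price
# change, fundamentalists bet on reversion to a fixed fundamental value, and
# the weight on the chartist strategy drifts toward whichever group profited
# last period. When that weight is high, swings away from the fundamental
# value become larger and longer-lived.
import numpy as np

rng = np.random.default_rng(3)
T, fundamental = 5000, 100.0
price = np.full(T, fundamental)
w = np.full(T, 0.4)                      # weight on the chartist strategy

for t in range(2, T):
    chart = 1.2 * (price[t-1] - price[t-2])        # chase the trend
    fund = 0.1 * (fundamental - price[t-1])        # bet on mean reversion
    excess_demand = w[t-1] * chart + (1 - w[t-1]) * fund
    price[t] = price[t-1] + excess_demand + rng.normal(scale=0.5)

    move = price[t] - price[t-1]
    w[t] = w[t-1] + 0.01 * np.tanh(chart * move - fund * move)   # reward winners
    w[t] = min(max(w[t], 0.1), 0.7)                # keep the weight bounded

returns = np.diff(price) / price[:-1]
print("largest deviation from fundamental:",
      round(float(np.abs(price - fundamental).max()), 1))
print("excess kurtosis of returns:",
      round(float(((returns - returns.mean())**4).mean() / returns.var()**2 - 3), 2))
```

Nothing here should be taken as evidence about real markets; the point is just how easily irregular booms and busts arise once heterogeneous, adaptive strategies interact through an endogenous price.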

To my mind, one particularly interesting point to emerge from this model is the limited ability of fundamentalist investors to control the unstable behavior of speculators. One nice feature of agent-based models is that it's possible to look inside and examine all manner of details. For example, during these bubble phases, which kind of investor controls most of the wealth? As LeBaron notes,
The large amount of wealth in the adaptive strategy relative to the fundamental is important. The fundamental traders will be a stabilizing force in a falling market. If there is not enough wealth in that strategy, then it will be unable to hold back sharp market declines. This is similar to a limits to arbitrage argument. In this market without borrowing the fundamental strategy will not have sufficient wealth to hold back a wave of self-reinforcing selling coming from the adaptive strategies.   
Another important point, which LeBaron mentions in the paragraph above, is that there's no leverage in this model. People can't borrow to amplify investments they feel especially confident of. Leverage of course plays a central role in the instability mechanism described by Minsky, but it doesn't seem to be absolutely necessary to get this kind of instability. It can come solely from the interaction of different agents following distinct strategies.

I certainly don't mean to imply that these kinds of agent-based models are superior to the low-dimensional modelling of Steve Keen and others. I think these are both useful approaches, and they ought to be complementary. Here's LeBaron's summing up at the end of the paper:
The dynamics are dominated by somewhat irregular swings around fundamentals, that show up as long persistent changes in the price/dividend ratio. Prices tend to rise slowly, and then crash fast and dramatically with high volatility and high trading volume. During the slow steady price rise, agents using similar volatility forecast models begin to lower their assessment of market risk. This drives them to be more aggressive in the market, and sets up a crash. All of this is reminiscent of the Minksy market instability dynamic, and other more modern approaches to financial instability.

Instability in this market is driven by agents steadily moving to more extreme portfolio positions. Much, but not all, of this movement is driven by risk assessments made by the traders. Many of them continue to use models with relatively short horizons for judging market volatility. These beliefs appear to be evolutionarily stable in the market. When short term volatility falls they extend their positions into the risky asset, and this eventually destabilizes the market. Portfolio composition varying from all cash to all equity yields very different dynamics in terms of forced sales in a falling market. As one moves more into cash, a market fall generates natural rebalancing and stabilizing purchases of the risky asset in a falling market. This disappears as agents move more of their wealth into the risky asset. It would reverse if they began to leverage this position with borrowed money. Here, a market fall will generate the typical destabilizing fire sale behavior shown in many models, and part of the classic Minsky story. Leverage can be added to this market in the future, but for now it is important that leverage per se is not necessary for market instability, and it is part of a continuum of destabilizing dynamics.

Tuesday, 12 February 2013

Edmund Phelps trashes rational expectations

I'm not generally one to enjoy reading interviews with macroeconomists, but this one is an exception. Published yesterday in Bloomberg, it features an interview by Caroline Baum of Edmund Phelps, Nobel Prize winner for his work on the relationship between inflation and unemployment. The focus of the interview is on Phelps's views of the rational expectations revolution. He is not a big fan:
Q (Baum): So how did adaptive expectations morph into rational expectations?

A (Phelps): The "scientists" from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work. Before, we used a rule of thumb to explain or predict expectations: Such a rule is picked out of the air. They said, let's be scientific. In their mind, the scientific way is to suppose price and wage setters form their expectations with every bit as much understanding of markets as the expert economist seeking to model, or predict, their behavior. The rational expectations approach is to suppose that the people in the market form their expectations in the very same way that the economist studying their behavior forms her expectations: on the basis of her theoretical model.

Q: And what's the consequence of this putsch?

A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong. Another thing: It’s an important feature of capitalist economies that they permit speculation by people who have idiosyncratic views and an important feature of a modern capitalist economy that innovators conceive their new products and methods with little knowledge of whether the new things will be adopted -- thus innovations. Speculators and innovators have to roll their own expectations. They can’t ring up the local professor to learn how. The professors should be ringing up the speculators and aspiring innovators. In short, expectations are causal variables in the sense that they are the drivers. They are not effects to be explained in terms of some trumped-up causes.

Q: So rather than live with variability, write a formula in stone!

A: What led to rational expectations was a fear of the uncertainty and, worse, the lack of understanding of how modern economies work. The rational expectationists wanted to bottle all that up and replace it with deterministic models of prices, wages, even share prices, so that the math looked like the math in rocket science. The rocket’s course can be modeled while a living modern economy’s course cannot be modeled to such an extreme. It yields up a formula for expectations that looks scientific because it has all our incomplete and not altogether correct understanding of how economies work inside of it, but it cannot have the incorrect and incomplete understanding of economies that the speculators and would-be innovators have.
I think this is exactly the issue: "fear of uncertainty". No science can be effective if it aims to banish uncertainty by theoretical fiat. And this is what really makes rational expectations economics stand out as crazy when compared to other areas of science and engineering. It's a short interview, well worth a quick read.

Highly ironic also that, nearly half a century after Lucas and others began pushing this stuff, the trend is now back toward "adaptive expectations." Is rational expectations anything other than an expensive 50-year diversion into useless nonsense?

Sunday, 10 February 2013

Let there be light

*** UPDATE BELOW***

My Bloomberg column this month looks at an idea for improving the function of the interbank lending market. The idea is radical, yet also conceptually very simple. It's radical because it proposes a complete transformation of banking transparency. It shows how transparency may be the best route to achieving overall banking stability and efficiency. It will be interesting to see what free-market ideologues think of the idea, as it doesn't fit into any standard ideological narrative such as "get government out of the way" or "unregulated markets work best." It offers a means to improve market function -- something one would expect free-market cheerleaders to favor -- but does so in a way that threatens banking secrecy and also involves some central coordination.

My column was necessarily vague on detail given its length, so let me give some more discussion here. The paper presenting the ideas is this one by Stefan Thurner and Sebastian Poledna. It is first important to recognize that they consider not all markets, but specifically the interbank market, in which banks lend funds to one another to manage demands for liquidity. This is the market that famously froze up following the collapse of Lehman Brothers, as banks suddenly realized they had no understanding at all of the risks facing potential counterparties. The solution proposed by Thurner and Poledna strikes directly at such uncertainty, by offering a mechanism to calculate those risks and make them available to everyone.

Here's the basic logic of their argument. They start with the point that standard theories of finance operate on the basis of wholly unrealistic assumptions about the ability of financial institutions to assess their risks rationally. Even if the individuals at these institutions were rational, the overwhelming complexity of today's market makes it impossible to judge systemic risks because of a lack of information:
Since the beginning of banking the possibility of a lender to assess the riskiness of a potential borrower has been essential. In a rational world, the result of this assessment determines the terms of a lender-borrower relationship (risk-premium), including the possibility that no deal would be established in case the borrower appears to be too risky. When a potential borrower is a node in a lending-borrowing network, the node’s riskiness (or creditworthiness) not only depends on its financial conditions, but also on those who have lending-borrowing relations with that node. The riskiness of these neighboring nodes depends on the conditions of their neighbors, and so on. In this way the concept of risk loses its local character between a borrower and a lender, and becomes systemic.

The assessment of the riskiness of a node turns into an assessment of the entire financial network [1]. Such an exercise can only carried out with information on the asset-liablilty network. This information is, up to now, not available to individual nodes in that network. In this sense, financial networks – the interbank market in particular – are opaque. This intransparency makes it impossible for individual banks to make rational decisions on lending terms in a financial network, which leads to a fundamental principle: Opacity in financial networks rules out the possibility of rational risk assessment, and consequently, transparency, i.e. access to system-wide information is a necessary condition for any systemic risk management.
In this connection, recall Alan Greenspan's famous admission that he had trusted in the ability of rational bankers to keep markets working by controlling their counterparty risk. As he exclaimed in 2008,
"Those of us who have looked to the self-interest of lending institutions to protect shareholder's equity -- myself especially -- are in a state of shocked disbelief."
The trouble, at least partially, is that no matter how self-interested those lending institutions were, they couldn't possibly have made the staggeringly complex calculations required to assess those risks accurately. The system is too complex. They lacked necessary information. Hence, as Thurner and Poledna point out, we might help things by making this information more transparent.

The question then becomes: is it possible to do this? Well, here's one idea. As the authors point out, much of the information required to compute the systemic risks associated with any one bank is already reported to central banks, at least in developed nations. No private party has this information, no single investment bank has it, and perhaps even no single central bank has all of it. But central banks together do, and they could use it to perform a calculation of considerable value (again, in the context of the interbank market):
In most developed countries interbank loans are recorded in the ‘central credit register’ of Central Banks, that reflects the asset-liability network of a country [5]. The capital structure of banks is available through standard reporting to Central Banks. Payment systems record financial flows with a time resolution of one second, see e.g. [6]. Several studies have been carried out on historical data of asset-liability networks [7–12], including overnight markets [13], and financial flows [14].

Given this data, it is possible (for Central Banks) to compute network metrics of the asset-liability matrix in real-time, which in combination with the capital structure of banks, allows to define a systemic risk-rating of banks. A systemically risky bank in the following is a bank that – should it default – will have a substantial impact (losses due to failed credits) on other nodes in the network. The idea of network metrics is to systematically capture the fact, that by borrowing from a systemically risky bank, the borrower also becomes systemically more risky since its default might tip the lender into default. These metrics are inspired by PageRank, where a webpage, that is linked to a famous page, gets a share of the ‘fame’. A metric similar to PageRank, the so-called DebtRank, has been recently used to capture systemic risk levels in financial networks [15].
I wrote a little about this DebtRank idea here. It's a computational algorithm applied to a financial network which offers a means to assess systemic risks in a coherent, self-consistent way; it brings network effects into view. The technical details aren't so important, but the original paper proposing the notion is here. The important thing is that the DebtRank algorithm, along with the data provided to central banks, makes it possible in principle to calculate a good estimate of the overall systemic risk presented by any bank in the network. 
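To make this concrete, here's a minimal sketch in Python of a DebtRank-style calculation -- my own toy version of the Battiston et al. (2012) idea, not the code used by Thurner and Poledna. The impact matrix W (the fraction of bank j's capital wiped out if bank i defaults) and the economic values v are invented purely for illustration:

```python
# A minimal sketch of a DebtRank-style calculation (not the authors' code).
# W[i, j] is assumed to be the fraction of bank j's capital wiped out if
# bank i defaults; v[i] is bank i's relative economic value. Both the
# network and the numbers below are made up for illustration.
import numpy as np

def debt_rank(W, v, seed, psi=1.0):
    """Distress propagation a la Battiston et al. (2012), simplified."""
    n = len(v)
    h = np.zeros(n)               # distress level of each bank, in [0, 1]
    state = np.array(['U'] * n)   # U = undistressed, D = distressed, I = inactive
    h[seed] = psi
    state[seed] = 'D'
    initial = h @ v               # economic value initially in distress

    while np.any(state == 'D'):
        active = (state == 'D')
        # propagate distress from currently distressed banks to all others
        h_new = np.minimum(1.0, h + W[active].T @ h[active])
        newly_hit = (h_new > 0) & (state == 'U')
        state[active] = 'I'       # each bank propagates distress only once
        state[newly_hit] = 'D'
        h = h_new

    return h @ v - initial        # value put at risk beyond the initial shock

# Toy example: bank 0's default hits banks 1 and 2 hard.
W = np.array([[0.0, 0.6, 0.5],
              [0.1, 0.0, 0.2],
              [0.0, 0.3, 0.0]])
v = np.array([0.5, 0.3, 0.2])
print(debt_rank(W, v, seed=0))    # systemic impact of bank 0 defaulting
```

The number that comes out is, roughly, the fraction of the system's economic value put at risk by the initial default, over and above the defaulting bank itself -- exactly the kind of single summary score a central bank could publish without revealing the underlying data.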

So imagine this: central banks around the world get together tomorrow, and within a month or so manage to coordinate their information flows (ok, maybe that's optimistic). They set up some computers to run the calculations, and a server to host the results, updated every day, perhaps even hourly. Soon you or I or any banker in the world could go to some web site and in a few seconds read off the DebtRank score for any bank in the developed world, and see the list of banks ranked by the systemic risks they present. Wouldn't that be wonderful? Even if central banks took no further steps, this alone would be a worthwhile project, turning globally collected information into a resource valuable to everyone. (Notice also that publishing DebtRank scores isn't the same as publishing all the data that banks supply to their central banks. That level of detail could be kept secret, with only the single measure of overall systemic risk being published.)

I would hope that almost everyone would be in favor of such a project, or something like it. Of course, one group would be dead set against it -- the banks with the highest DebtRank scores, i.e. those that are systemically the most risky. But their private concerns shouldn't trump the public interest in reducing such risks, and measuring these numbers and making them public is the first step toward any reduction.

But Thurner and Poledna do go further, making a specific proposal for using this information to reduce systemic risks. Here's how it works. Banks in the interbank market borrow funds for varying periods from other banks. Typically, a standard interbank interest rate prevails for all banks (as far as I understand), so a bank looking to borrow doesn't much care which bank it finds as a lender; the interest paid will be the same. But if the choice of lender doesn't matter to the borrowing bank, it matters a lot to the rest of us, because borrowing from a systemically risky bank threatens the financial system: if the borrower can't pay the loan back, that risky bank could be put into distress and cause trouble for the system at large. So we really should have banks in the interbank market looking to borrow first from the least systemically risky banks, i.e. from those with low values of DebtRank.

This is what Thurner and Poledna propose. Let central banks require that borrowers in the interbank market do just that -- seek out the least risky banks first as lenders. Banks that take on lots of systemic risk would then be marked as too dangerous to make further loans; further lending would instead be done by the less risky banks, spreading risk more evenly across the system. Don't trust in the miracle of the free market to make this happen -- it won't -- but step in and provide a mechanism for it to happen. As the authors describe it:
The idea is to reduce systemic risk in the IB network by not allowing borrowers to borrow from risky nodes. In this way systemically risky nodes are punished, and an incentive for nodes is established to be low in systemic riskiness. Note, that lending to a systemically dangerous node does not increase the systemic riskiness of the lender. We implement this scheme by making the DebtRank of all banks visible to those banks that want to borrow. The borrower sees the DebtRank of all its potential lenders, and is required (that is the regulation part) to ask the lenders for IB loans in the order of their inverse DebtRank. In other words, it has to ask the least risky bank first, then the second risky one, etc. In this way the most risky banks are refrained from (profitable) lending opportunities, until they reduce their liabilities over time, which makes them less risky. Only then will they find lending possibilities again. This mechanism has the effect of distributing risk homogeneously through the network.
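Stripped down to code, the regulation is just an ordering rule. Here's a minimal sketch (the function and bank names are hypothetical, not from the paper): a borrower must work down the list of potential lenders from lowest to highest DebtRank until its liquidity need is met.

```python
# A minimal sketch of the proposed borrowing rule (hypothetical names,
# not code from the paper): a borrower must approach potential lenders
# in order of increasing DebtRank, taking what each will offer until
# its liquidity need is covered.

def arrange_interbank_loan(need, lenders):
    """lenders: list of (bank_name, debt_rank, amount_willing_to_lend)."""
    loans = []
    # regulation: least systemically risky lenders must be asked first
    for name, rank, available in sorted(lenders, key=lambda x: x[1]):
        if need <= 0:
            break
        amount = min(need, available)
        if amount > 0:
            loans.append((name, amount))
            need -= amount
    return loans, need   # any unmet need remains with the borrower

lenders = [("Bank A", 0.42, 50.0),   # high DebtRank: asked last
           ("Bank B", 0.07, 30.0),   # low DebtRank: asked first
           ("Bank C", 0.19, 40.0)]
print(arrange_interbank_loan(60.0, lenders))
# -> ([('Bank B', 30.0), ('Bank C', 30.0)], 0.0)  -- Bank A gets no business
```

The systemically riskiest bank simply never gets asked unless everyone else has run out of funds to lend -- which is precisely the incentive the proposal is designed to create.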
The overall effect in the interbank market would be -- in an idealized model, at least -- to make systemic banking collapses much less likely. Thurner and Poledna ran a number of agent-based simulations to test the dynamics of such a market, with encouraging results. The model involves banks, firms and households and their interactions; details are in the paper for those interested. The bottom line, as illustrated in the figure below, is that cascading defaults through the banking system become far rarer. The red curve shows the statistical likelihood, over many runs, of banking cascades of varying size (number of banks involved) when borrowing banks choose their counterparties at random; this is the "business as usual" situation, akin to the market today. The green and blue curves show the same distribution when borrowers instead seek out counterparties so as to avoid those with high values of DebtRank (green and blue correspond to slightly different conditions). Clearly, system-wide problems become much less likely.

[Figure: distribution of cascade sizes -- random counterparty choice (red) vs. DebtRank-guided choice (green, blue)]

Another way to put it is this: banks currently have no incentive whatsoever, when seeking to borrow, to avoid borrowing from banks that play systemically important roles in the financial system. They don't care who they borrow from. But we do care, and we could easily force them -- using information that is already collected -- to borrow in a more responsible, safer way. Can there be any argument against that?

What are the chances for such a sensible idea to be put into practice? I have no idea. But I certainly hope these ideas make it quickly onto the radar of people at the new Office for Financial Research.

**UPDATE**
A reader emailed to alert me to this blog he runs, which champions the idea of "ultra transparency" as a means of ensuring greater stability in finance. At face value it makes a lot of sense, and I think the work I wrote about today fits this perspective very well. The idea is simply that governments can provide information resources to the markets that would support better decisions by everyone in reckoning risks and rewards. Of course we need independent people and firms gathering information in a decentralized way, but that isn't enough. In today's hypercomplex markets, some of the risks are simply invisible to anyone lacking vast quantities of data and the means to analyze them. Only governments currently have the requisite access to such information.

Friday, 25 January 2013

Is finance different than medicine?

Readers of this blog know that I generally write about the risks of finance -- risks I think are systematically underestimated by standard economic theories. But finance does of course bring many benefits, mostly when used in its proper role as a means to insure against risks. I just happened on this terrific paper (now nearly a year old, not breaking news) by two legal scholars from the University of Chicago who propose the idea of an FDA for finance -- a body that would be charged with approving new financial products before they could enter the market. It's a sensible proposal, especially given the potential risks associated with financial products, which are certainly comparable to the risks of new pharmaceutical products.

But beyond this specific idea, the paper gives a great discussion of the two fundamentally opposed uses of derivatives and other new financial products -- as insurance, spreading risks in a socially beneficial way, or as a mechanism for gambling, increasing the risks faced by certain parties in a way that brings no social benefit. The following two paragraphs give the flavor of the argument (which the full paper explores in great detail):
Financial products are socially beneficial when they help people insure risks, but when these same products are used for gambling they can instead be socially detrimental. The difference between insurance and gambling is that insurance enables people to reduce the risk they face, whereas gambling increases it. A person who purchases financial products in order to insure themselves essentially pays someone else to take a risk on her behalf. The counterparty is better able to absorb the risk, typically because she has a more diversified investment portfolio or owns assets whose value is inversely correlated with the risk taken on. By contrast, when a person gambles, that person exposes herself to increased net risk without offsetting a risk faced by a counterparty: she merely gambles in hopes of gaining at the expense of her counterparty or her counterparty’s regulator. As we discuss below, gambling may have some ancillary benefits in improving the information in market prices. However, it is overwhelmingly a negative-sum activity, which, in the aggregate, harms the people who engage in it, and which can also produce negative third-party effects by increasing systemic risk in the economy.

This basic point has long been recognized, but has had little influence on modern discussions of financial regulation. Before the 2008 financial crisis, the academic and political consensus was that financial markets should be deregulated. This consensus probably rested on pragmatic rather than theoretical considerations: the U.S. economy had grown enormously from 1980 to 2007, and this growth had taken place at the same time as, and seemed to be connected with, the booming financial sector, which was characterized by highly innovative financial practices. With the 2008 financial crisis, this consensus came to an end, and since then there has been a significant retrenchment, epitomized by the passage of the Dodd-Frank Act, which authorizes regulatory agencies to impose significant new regulations on the financial industry.
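The insurance-versus-gambling distinction is easy to make concrete with a toy calculation of my own (not from the paper): the very same derivative contract reduces risk for a party with an offsetting exposure, and adds risk for a party with none.

```python
# A toy illustration (my own, not from the paper) of the insurance vs.
# gambling distinction: the same derivative contract reduces risk when it
# offsets an existing exposure, and adds risk when it stands alone.
import numpy as np

rng = np.random.default_rng(0)
price = rng.normal(100, 20, size=100_000)     # uncertain future price of some commodity

# A firm that must buy the commodity has exposure -price; a swap paying
# (price - 100) offsets that exposure -- this is the insurance use.
firm_unhedged = -price
firm_hedged = -price + (price - 100)          # locks in a cost of 100

# A speculator with no underlying exposure who takes the same swap is
# simply gambling: the contract is the whole of their position.
speculator = (price - 100)

for name, outcome in [("unhedged firm", firm_unhedged),
                      ("hedged firm", firm_hedged),
                      ("speculator", speculator)]:
    print(f"{name:14s} std of outcome = {outcome.std():6.2f}")
# hedging drives the firm's risk to ~0; the naked position has std ~20
```

An approval body's job, in this framing, would be to judge which of these two uses a proposed product would mostly serve in practice.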
Of course, putting such a thing into practice would be difficult. But it's also difficult to clean up the various messes that occur when financial products lead to unintended consequences, and difficulty isn't an argument against doing something worthwhile. The paper makes considerable effort to explore how the insurance benefits and the gambling costs associated with a new instrument might be estimated. And maybe a direct analogy to the FDA isn't the right thing at all. Can a panel of experts estimate the costs and benefits accurately? Maybe not. But there might be sensible ways to bring certain new products under scrutiny once they have come into wide use. Nor need products be banned -- merely regulated, and their use reviewed, to avoid the rapid growth of large systemic risks. Of course, steps were taken in the late 1990s (by Brooksley Born at the CFTC, most notably) to regulate derivatives markets much more closely. Those steps were quashed by the finance industry through the actions of Larry Summers, Alan Greenspan and others. Had there been something like an independent FDA-like body for finance, things might have turned out less disastrously.

Forbes (predictably) came out strongly against this idea when it was published (over a year ago), arguing that it goes against the tried and true Common Law notion that "anything not specifically banned is allowed." But that's just the point. We specifically don't allow Bayer or GlaxoSmithKline to create and market new drugs without extensive tests giving some confidence in their safety. Not only that: once approved, those drugs can only be sold in packages containing extensive warnings about their risks. Why should finance be different?

Monday, 14 January 2013

Peter Howitt... beyond equilibrium

Mostly on this blog I've been arguing that current economic theory suffers from an obsession with equilibrium concepts, especially in macroeconomics, or in models of financial markets. Most of the physical and natural world is out of equilibrium, driven by forces that are out of balance. Things never happen -- in the oceans or atmosphere, in ecosystems, in the Earth's crust, in the human body -- because of equilibrium balance. Change always comes because of disequilibrium imbalance. If you want to understand the dynamics of almost anything, you need to think outside of equilibrium.

This is actually an obvious point, and in science outside of economics people generally don't even talk about disequilibrium, but about dynamics; it's the same thing. Equilibrium means no dynamics, rest, stasis. It can't teach you about how things change. But we do care very much about how things change in finance and economics, and so we need models exploring economic systems out of equilibrium. Which means models beyond current economics.

The need for disequilibrium economics was actually well accepted back in the 1930s and 40s by economists such as Irving Fisher in the US and Nicholas Kaldor in England. Then in the 1950s, with the Arrow-Debreu results, and later with the whole Rational Expectations hysteria, it seems to have been forgotten. It's curious, I think, that really good economists -- clear-thinking people trying to address real-world issues -- often have no choice but to try to understand episodes of dramatic change (bank runs, bubbles, liquidity crises, leverage cycles) by torturing equilibrium models into some form that reflects these things. The famous Diamond-Dybvig model of bank runs is a good example. It is a model with multiple equilibria, one of which is a bank run. This is indeed insightful and useful, essentially showing that a sharp break can occur in the behaviour of the system, and also offering some suggestions about how runs might be avoided with certain kinds of banking contracts. But isn't it at least a little strange to think of a bank run, a dynamic event driven by amplification and contagion of behaviour, as an "equilibrium"?
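For readers who haven't seen it, the multiple-equilibria logic can be shown with a stylized sketch -- a toy in the spirit of Diamond-Dybvig, not the actual model, with all numbers invented. Whether a patient depositor prefers to run depends on how many others she expects to run, and both "nobody runs" and "everybody runs" are self-fulfilling:

```python
# A stylized sketch of the multiple-equilibria logic (not the actual
# Diamond-Dybvig model): whether a patient depositor prefers to run
# depends on how many others she expects to run.
R, r1, t = 1.5, 1.2, 0.2    # long-asset return, early payout, impatient fraction

def payoff_to_waiting(f):
    """Date-2 payoff per patient depositor if a fraction f withdraws early."""
    if f * r1 >= 1.0:        # bank exhausted at date 1: nothing left for waiters
        return 0.0
    return R * (1.0 - f * r1) / (1.0 - f)

def best_response(f_expected):
    """Fraction withdrawing early, given the fraction everyone expects to."""
    run = payoff_to_waiting(f_expected) < r1   # patient depositors run if waiting pays less
    return 1.0 if run else t                   # the impatient fraction t always withdraws

for belief in (t, 0.9):                        # optimistic vs pessimistic beliefs
    f = belief
    for _ in range(20):                        # iterate expectations to a fixed point
        f = best_response(f)
    print(f"initial belief {belief:.1f} -> equilibrium withdrawal fraction {f:.1f}")
# -> 0.2 (normal operation) and 1.0 (bank run): two self-fulfilling equilibria
```

Both outcomes are fixed points of the expectations map, but the equilibrium description says nothing about the dynamics by which the system tips from one to the other -- which is exactly what makes calling a run an "equilibrium" feel odd.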

I'm not alone in thinking that it is a little strange. Indeed, by way of this excellent collection of papers maintained by Leigh Tesfatsion, I recently came across a short essay by economist Peter Howitt which makes arguments along similar lines, but in the area of macroeconomics. The whole essay is worth reading, much of it describing in practical terms how, in his view, central banks have in recent decades moved well ahead of macroeconomic theorists in learning how to manage economies, often using tactics with no formal backing in macroeconomic theory. Theory is struggling to keep up, which is probably not surprising. Toward the end, Howitt makes more explicit arguments about the need for disequilibrium thinking in macroeconomics:
The most important task of monetary policy is surely to help avert the worst outcomes of macroeconomic instability – prolonged depression, financial panics and high inflations. And it is here that central banks are most in need of help from modern macroeconomic theory. Central bankers need to understand what are the limits to stability of a modern market economy, under what circumstances is the economy likely to spin out of control without active intervention on the part of the central bank, and what kinds of policies are most useful for restoring macroeconomic stability when financial markets are in disarray.

But it is also here that modern macroeconomic theory has the least to offer. To understand how and when a system might spin out of control we would need first to understand the mechanisms that normally keep it under control. Through what processes does a large complex market economy usually manage to coordinate the activities of millions of independent transactors, none of whom has more than a glimmering of how the overall system works, to such a degree that all but 5% or 6% of them find gainful employment, even though this typically requires that the services each transactor performs be compatible with the plans of thousands of others, and even though the system is constantly being disrupted by new technologies and new social arrangements? These are the sorts of questions that one needs to address to offer useful advice to policy makers dealing with systemic instability, because you cannot know what has gone wrong with a system if you do not know how it is supposed to work when things are going well.

Modern macroeconomic theory has turned its back on these questions by embracing the hypothesis of rational expectations. It must be emphasized that rational expectations is not a property of individuals; it is a property of the system as a whole. A rational expectations equilibrium is a fixed point in which the outcomes that people are predicting coincide (in a distributional sense) with the outcomes that are being generated by the system when they are making these predictions. Even blind faith in individual rationality does not guarantee that the system as a whole will find this fixed point, and such faith certainly does not help us to understand what happens when the point is not found. We need to understand something about the systemic mechanisms that help to direct the economy towards a coordinated state and that under normal circumstances help to keep it in the neighborhood of such a state.

Of course the macroeconomic learning literature of Sargent (1999), Evans and Honkapohja (2001) and others goes a long way towards understanding disequilibrium dynamics. But understanding how the system works goes well beyond this. For in order to achieve the kind of coordinated state that general equilibrium analysis presumes, someone has to find the right prices for the myriad of goods and services in the economy, and somehow buyers and sellers have to be matched in all these markets. More generally someone has to create, maintain and operate markets, holding buffer stocks of goods and money to accommodate other transactors’ wishes when supply and demand are not in balance, providing credit to deficit units with good investment prospects, especially those who are maintaining the markets that others depend on for their daily existence, and performing all the other tasks that are needed in order for the machinery of a modern economic system to function.

Needless to say, the functioning of markets is not the subject of modern macroeconomics, which instead focuses on the interaction between a small number of aggregate variables under the assumption that all markets clear somehow, that matching buyers and sellers is never a problem, that markets never disappear because of the failure of the firms that were maintaining them, and (until the recent reaction to the financial crisis) that intertemporal budget constraints are enforced costlessly. By focusing on equilibrium allocations, whether under rational or some other form of expectations, DSGE models ignore the possibility that the economy can somehow spin out of control. In particular, they ignore the unstable dynamics of leverage and deleverage that have devastated so many economies in recent years.

In short, as several commentators have recognized, modern macroeconomics involves a new ‘‘neoclassical synthesis,’’ based on what Clower and I (1998) once called the ‘‘classical stability hypothesis.’’ It is a faith-based system in which a mysterious unspecified and unquestioned mechanism guides the economy without fail to an equilibrium at all points in time no matter what happens. Is there any wonder that such a system is incapable of guiding policy when the actual mechanisms of the economy cease to function properly as credit markets did in 2007 and 2008?
Right on, in my opinion, although I think Peter is perhaps being rather too kind to the macroeconomic learning work, which seems to me to take a rather narrow and overly restricted perspective on learning, as I've mentioned before. At least it is a small step in the right direction. We need bigger steps, and more people taking them -- and perhaps a radical and abrupt defunding of traditional macroeconomic research (theory, not data, of course, and certainly not history) right across the board. The response of most economists to critiques of this kind is to say, well, ok, we can tweak our rational expectations equilibrium models to include some of this stuff. But that isn't nearly enough.
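To make the "fixed point" language concrete, here's a toy model of the kind studied in the learning literature -- my own sketch, with invented parameters. Realized prices depend on expected prices, p_t = a + b E_t[p] + noise, so the rational expectations equilibrium is the fixed point p* = a/(1-b). Whether adaptive learners ever find that fixed point depends on the strength of the feedback b:

```python
# A toy version of the kind of model studied in the learning literature
# (parameters invented for illustration): realized prices depend on expected
# prices, p_t = a + b*E_t[p] + noise, so the rational expectations equilibrium
# is the fixed point p* = a/(1-b). Agents here learn adaptively instead of
# being endowed with that fixed point.
import numpy as np

def learn(a, b, periods=5000, seed=1):
    rng = np.random.default_rng(seed)
    expectation = 0.0
    for t in range(1, periods + 1):
        price = a + b * expectation + rng.normal(0, 0.5)
        expectation += (price - expectation) / t   # decreasing-gain updating
    return expectation

a = 2.0
for b in (0.5, 1.5):
    print(f"b={b}: learned expectation {learn(a, b):8.2f}, "
          f"REE fixed point {a/(1-b):8.2f}")
# with b=0.5 learning converges to the REE (p*=4); with b=1.5 it does not
```

With weak feedback the learners find the rational expectations equilibrium; with strong feedback they never do -- which is Howitt's point that rational expectations is a property of the system as a whole, not of the individuals in it.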

Peter's essay finishes with an argument for why computational agent-based models offer a much more flexible way to explore economic coordination mechanisms in macroeconomics, on a far more extensive basis. I can't see how this approach won't become a huge part of the future of macroeconomics, once the brainwashing of rational expectations and equilibrium finally loses its effect.

Steve Keen on "bad weathermen"

I've made quite a lot of the analogy between the dynamics of an economy or financial market and the weather. It's one of the basic themes of this blog, and the focus of my forthcoming book FORECAST. I don't pretend to be the first to think of it: Mervyn King, head of the Bank of England, has talked about the analogy in the past, as have many others.

But the idea now seems to be gathering more popularity. Steve Keen, for example, writes here specifically about the task of economic forecasting, and the entirely different approaches used in weather science, where forecasting is now quite successful, and in economics, where it is not:
Conventional economic modelling tools can extrapolate forward existing trends fairly well – if those trends continue. But they are as hopeless at forecasting a changing economic world as weather forecasts would be, if weather forecasters assumed that, because yesterday’s temperature was 29 degrees Celsius and today’s was 30, tomorrow’s will be 31 – and in a year it will be 395 degrees.

Of course, weather forecasters don’t do that. When the Bureau of Meteorology forecasts that the maximum temperature in Sydney on January 16 to January 19 will be respectively 29, 30, 35 and 25 degrees, it is reporting the results of a family of computer models that generate a forecast of future weather patterns that is, by and large, accurate over the time horizon the models attempt to predict – which is about a week.
Weather forecasts have also improved dramatically over the last 40 years – so much so that even an enormous event like Hurricane Sandy was predicted accurately almost a week in advance, which gave people plenty of time to prepare for the devastation when it arrived:

Almost five days prior to landfall, the National Hurricane Center pegged the prediction for Hurricane Sandy, correctly placing southern New Jersey near the centre of its track forecast. This long lead time was critical for preparation efforts from the Mid-Atlantic to the Northeast and no doubt saved lives.

Hurricane forecasting has come a long way in the last few decades. In 1970, the average error in track forecasts three days into the future was 518 miles. That error shrunk to 345 miles in 1990. From 2007-2011, it dropped to 138 miles. Yet for Sandy, it was a remarkably low 71 miles, according to preliminary numbers from the National Hurricane Center.

Within 48 hours, the forecast came into even sharper focus, with a forecast error of just 48 miles, compared to an average error of 96 miles over the last five years.

Meteorological model predictions are regularly attenuated by experienced meteorologists, who nudge numbers that experience tells them are probably wrong. But they start with a model of the weather that is fundamentally accurate, because it is founded on the proposition that the weather is unstable.

Conventional economic models, on the other hand, assume that the economy is stable, and will return to an 'equilibrium growth path' after it has been dislodged from it by some 'exogenous shock'. So most so-called predictions are instead just assumptions that the economy will converge back to its long-term growth average very rapidly (if your economist is a Freshwater type) or somewhat slowly (if he’s a Saltwater croc).

Weather forecasters used to be as bad as this, because they too used statistical models that assumed the weather was in or near equilibrium, and their forecasts were basically linear extrapolations of current trends.
How did weather forecasters get better? By recognizing, of course, the inherent role of positive feedbacks and instabilities in the atmosphere, and by developing methods to explore and follow the growth of such instabilities mathematically. That meant modelling in detail the actual fine-scale workings of the atmosphere, and using computers to follow the interactions of those details. The same will almost certainly be true in economics. Forecasting will require lots of data and much more detailed models of the interactions among people, firms and financial institutions of all kinds, taking the real structure of networks into account, using real data to build models of behaviour, and so on. All this means giving up tidy analytical solutions, of course, and even giving up computer models that insist the economy must sit in a nice equilibrium. Science begins by taking reality seriously.
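As a final toy illustration (mine, not Keen's): take any system with strong nonlinear feedback and compare a naive trend extrapolation with a forecast made by iterating even a slightly imperfect model of the dynamics. The logistic map below stands in for "the dynamics"; it is not an economic model, just the simplest thing that behaves this way.

```python
# A toy illustration of the point (my own, not Keen's): for a system with
# nonlinear feedback, extrapolating the recent trend fails almost immediately,
# while iterating even an imperfect model of the dynamics forecasts well over
# short horizons (and, as with weather, degrades over long ones).
import numpy as np

def step(x, r=3.9):                 # a standard chaotic toy map, standing in for "the dynamics"
    return r * x * (1.0 - x)

# build a history of the "true" system
truth = [0.3]
for _ in range(60):
    truth.append(step(truth[-1]))

now, horizon = 50, 8                # forecast 8 steps ahead from time 50

# naive trend extrapolation from the last two observations
slope = truth[now] - truth[now - 1]
trend_forecast = [truth[now] + slope * k for k in range(1, horizon + 1)]

# model-based forecast from a slightly imperfect estimate of the current state
x = truth[now] * 1.001
model_forecast = []
for _ in range(horizon):
    x = step(x)
    model_forecast.append(x)

actual = truth[now + 1: now + 1 + horizon]
print("actual :", np.round(actual, 3))
print("model  :", np.round(model_forecast, 3))   # close at first, errors grow with horizon
print("trend  :", np.round(trend_forecast, 3))   # wrong almost immediately
```

The trend forecast goes wrong almost at once, while the model-based forecast tracks the truth for a few steps before the small initial error is amplified -- short-horizon skill and long-horizon humility, which is roughly where weather forecasting lives and where economic forecasting could reasonably aim.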