Category Archives: Financial crisis

Are Macroprudential Tools as Caring and Forethinking as They Claim to Be? Financial Stability and Monetary Policy in the Long Run

An Historical Perspective on the Quest for Financial Stability and the Monetary Policy Regime

By Michael D. Bordo (Rutgers University)

Abstract: This paper surveys the co-evolution of monetary policy and financial stability for a number of countries across four exchange rate regimes from 1880 to the present. I present historical evidence on the incidence, costs and determinants of financial crises, combined with narratives on some famous financial crises. I then focus on some empirical historical evidence on the relationship between credit booms, asset price booms and serious financial crises. My exploration suggests that financial crises have many causes, including credit driven asset price booms, which have become more prevalent in recent decades, but that in general financial crises are very heterogeneous and hard to categorize. Two key historical examples stand out in the record of serious financial crises which were linked to credit driven asset price booms and busts: the 1920s and 30s and the Global Financial Crisis of 2007-2008. The question that arises is whether these two ‘perfect storms’ should be grounds for permanent changes in the monetary and financial environment.

URL: https://EconPapers.repec.org/RePEc:nbr:nberwo:24154

Distributed by NEP-HIS on: 2018-01-15

Review by: Sergio Castellanos-Gamboa (Bangor University)

Summary 

In this paper Michael Bordo presents empirical historical evidence to analyze the incidence of credit-driven asset price booms and the extent to which they cause deep financial crises. The main argument of the paper is that we should consider very carefully whether monetary policy should undergo a structural transformation whenever a rare "perfect storm" event occurs. Bordo supports this argument by looking at the correlation between credit-driven asset price booms and financial crises, and at the possibility that the former cause the latter. The relationship, he argues, is rather weak. Nonetheless, the consequences of implementing restrictive monetary policies when these events happen can be significantly damaging in the long run.

The paper begins by reviewing the historical evolution of monetary and financial stability policy. In section 2 the author summarizes the appearance of central banks and the evolution of their functions and responsibilities, mainly as lender of last resort (LLR), across five different periods: the "Classical Gold Standard", the "Interwar and World War II" years, the "Bretton Woods" period (1944-1973), the "Managed Float Regime" (1973-2006), and the "Global Financial Crisis".

The next section of the paper deals with the measurement of financial crises in historical perspective. It starts by clarifying the definition of financial crises and tracing how this definition has expanded from describing a banking panic to include "too important to fail" institutions, currency crises, sovereign debt crises, credit-driven asset price booms, sudden stops, and contagion. Of these crises, Bordo identifies five as global: 1890-1891, 1907-1908, 1913-1914, 1931-1932, and 2007-2008. He then reports the output losses of those global financial crises, measured as the cumulative percentage deviation of GDP per capita from its pre-crisis trend. He finds that in "the pre-1914 era the losses ranged from 3% to 6% of GDP. For the interwar period, driven by the Great Depression they are much larger – 40%. In the post Bretton Woods period losses are smaller than the interwar but larger than under the gold standard". Finally, he finds that output losses in the period after 1997 are larger than in the pre-1914 period. The author ends this section by analysing the determinants of financial crises. Drawing on a meta-study, he concludes that financial crises are quite heterogeneous and that no particular factor stands out as a main determinant of their occurrence.
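
As an aside for readers who want the mechanics: the loss measure just described can be computed in a few lines. Below is a minimal sketch, in Python with invented numbers rather than Bordo's data; the log-linear trend window and the five-year horizon are my illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def output_loss(gdp_pc, years, crisis_year, trend_window=10, horizon=5):
    """Cumulative % deviation of GDP per capita from its pre-crisis trend.

    Fits a log-linear trend over `trend_window` years before the crisis,
    extrapolates it over `horizon` post-crisis years, and sums the
    percentage shortfalls of actual GDP per capita below that trend.
    """
    years = np.asarray(years)
    log_y = np.log(np.asarray(gdp_pc, dtype=float))
    pre = (years >= crisis_year - trend_window) & (years < crisis_year)
    slope, intercept = np.polyfit(years[pre], log_y[pre], 1)
    post = (years >= crisis_year) & (years < crisis_year + horizon)
    trend = slope * years[post] + intercept
    return np.sum(100 * (1 - np.exp(log_y[post] - trend)))

# Toy check: a 10% drop below a 2% trend, sustained for 5 years -> ~50%.
yrs = np.arange(1900, 1920)
y = np.exp(0.02 * (yrs - 1900))
y[yrs >= 1910] *= 0.9
print(round(output_loss(y, yrs, 1910), 1))  # 50.0
```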

Section 4 of the paper reviews the historical narrative of a subset of 12 cases to evaluate the extent to which credit-driven asset price booms have been an important cause of financial crises. Bordo argues that although after the 2007-2008 crisis this factor has become more relevant, this was not the case before the collapse of Bretton Woods, with a few exceptions before World War II. Section 5 looks deeper into the relationship among credit booms, asset price booms, and financial crises using a business cycle methodology with a sample of 15 advanced countries from 1880 onwards. Once again, there is evidence that “suggests that the coincidence between credit boom peaks and serious financial crises is quite rare”. Moreover, credit booms do not seem to be highly correlated with asset price booms (except for the Great Depression and the Global Financial Crisis).

The paper concludes by stating four key principles for a stable monetary policy regime that is compatible with financial stability: price stability, real macro stability, a credible rules-based LLR, and sound financial supervision and regulation and a sound banking structure. These principles imply that financial stability need not be elevated to the same level of importance as price stability or macroeconomic stability, and that implementing macroprudential tools to restrict monetary policy after a "perfect storm" can do more harm than good in the long run.

Comments 

This paper brings important elements to the debate on whether implementing macroprudential tools is the right path to achieving financial stability. Moreover, Bordo raises a critical question that has not been properly addressed in the literature: to what extent can macroprudential tools be harmful for long-run economic growth? Additionally, the author invites us to question whether central banks should undertake activities that go beyond monetary policy (such as bailing out failing institutions) to the point of putting at risk their credibility and even their independence, as has happened in the past.

Once again economic history proves relevant for understanding and shedding new light on contemporary debates. In particular, this paper implements a transparent and simple methodology to analyze whether credit-driven asset price booms can cause financial crises, and whether monetary policy should be fundamentally transformed when financial markets are hit by a "perfect storm". The author is quite skeptical of the implementation of restrictive monetary policies to deal with serious financial crises, although there is still considerable room for more research to clarify this debate. Even though Bordo avoids using econometrics to assess this issue, the methodology proposed in this paper can still be subject to the Lucas critique (Lucas 1976). Therefore, there is still a need for a robust methodology that can provide evidence for a sound and testable economic theory with which to thoroughly study and understand this phenomenon.

More importantly, we still have to ask whether we can differentiate real productivity booms from bubbles. As long as knowledge in this area is lacking, we will not know whether we have the appropriate tools to diagnose a bubble and defuse an asset price boom before it bursts. Therefore, we cannot state for sure whether central banks should follow the Greenspan doctrine (Bernanke and Gertler 2001), or whether they should be more proactive in the pursuit of financial stability. Moreover, following the main argument of the paper, it is very important to ask whether financial stability should be granted the same importance as price stability or the stability of the real macroeconomy. For now, the answer seems to be no, but there also seems to be sufficient evidence to argue that banking should be made boring again (Krugman 2009).

References

Bernanke, Ben and Mark Gertler (2001). "Should Central Banks Respond to Movements in Asset Prices?" American Economic Review 91(2), 253-257.

Krugman, Paul (2009). “Making banking boring.” New York Times, April 10.

Lucas, Robert (1976). "Econometric Policy Evaluation: A Critique." In Karl Brunner and Allan H. Meltzer (eds.), The Phillips Curve and Labor Markets, Carnegie-Rochester Conference Series on Public Policy 1. New York: American Elsevier, 19-46.


The Elephant (-Shaped Curve) in the Room: Economic Development and Regional Inequality in South-West Europe

The Long-term Relationship Between Economic Development and Regional Inequality: South-West Europe, 1860-2010

by Alfonso Díez-Minguela (Universitat de València); Rafael González-Val (Universidad de Zaragoza, IEB); Julio Martinez-Galarraga (Universitat de València); María Teresa Sanchis (Universitat de València); and Daniel A. Tirado (Universitat de València).

Abstract: This paper analyses the long-term relationship between regional inequality and economic development. Our data set includes information on national and regional per-capita GDP for four countries: France, Italy, Portugal and Spain. Data are compiled on a decadal basis for the period 1860-2010, thus enabling the evolution of regional inequalities throughout the whole process of economic development to be examined. Using parametric and semiparametric regressions, our results confirm the rise and fall of regional inequalities over time, i.e. the existence of an inverted-U curve since the early stages of modern economic growth, as the Williamson hypothesis suggests. We also find evidence that, in recent decades, regional inequalities have been on the rise again. As a result, the long-term relationship between national economic development and spatial inequalities describes an elephant-shaped curve.

URL: https://EconPapers.repec.org/RePEc:hes:wpaper:0119

Distributed by NEP-HIS on 2018-02-26

Review by: Anna Missiaia

The relationship between economic development and inequality in a broad sense has been at the core of economic research for decades. In particular, the process of industrialization has been much investigated as a driver of inequality: Kuznets (1955) was the first to propose an inverted U-shaped pattern of income inequality, driven by the initial forging ahead of the small high-wage industrial sector and a subsequent structural change, with more and more of the labour force moving out of agriculture into industry. The first to suggest that a similar pattern could take place in the spatial dimension was Williamson (1965), who showed that the process of industrialization could lead to an upswing in regional inequality because of the initial spatial concentration of the industrial sector, which only eventually reaches the less advanced regions. The paper by Díez-Minguela, González-Val, Martinez-Galarraga, Sanchis and Tirado, circulated on NEP-HIS on 2018-02-26, deals with this latter, spatial inequality. The authors formally test the relationship between the coefficient of variation (in its Williamson formulation) of regional GDP per capita and a set of measures of economic development, most importantly the level of national GDP per capita. The analysis covers four south-western European countries (France, Spain, Italy and Portugal). The paper starts in 1860 and therefore takes a much appreciated multi-country and long-run perspective compared to the original work by Williamson, who was looking only at the 20th-century United States.
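
For readers unfamiliar with the dependent variable, the coefficient of variation "in its Williamson formulation" is a population-weighted coefficient of variation of regional per-capita GDP. A short sketch with made-up regions (not the authors' data) shows the computation:

```python
import numpy as np

def williamson_cv(gdp_pc, pop):
    """Population-weighted coefficient of variation of regional
    per-capita GDP (Williamson 1965)."""
    gdp_pc = np.asarray(gdp_pc, dtype=float)
    w = np.asarray(pop, dtype=float) / np.sum(pop)  # population shares
    y_bar = np.sum(w * gdp_pc)                      # national per-capita GDP
    return np.sqrt(np.sum(w * (gdp_pc - y_bar) ** 2)) / y_bar

# Three hypothetical regions: per-capita GDP and population (millions).
print(round(williamson_cv([1000, 1500, 2500], [5.0, 3.0, 2.0]), 3))  # 0.392
```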

The work by Díez-Minguela and co-authors also relies on the framework developed by Barrios and Strobl (2009), moving from a merely descriptive interpretation of an inverted U-shape of regional inequality to a theoretically founded one. In particular, Barrios and Strobl (2009) use a growth model that takes into account region-specific technological shocks and their later diffusion across the entire national territory; they also include measures of trade openness to test the hypothesis that more market integration leads to more regional inequality; and they consider regional policies implemented by the State to even out regional disparities. The original paper by Barrios and Strobl (2009) considered only a sample of countries from 1975 onwards, basically overlooking the whole post-WWII industrial boom in the more developed countries. In this respect, the contribution by Díez-Minguela and co-authors is fundamental, as it proposes a long-run regional analysis not confined to one specific country, as is customary in the field, but covering a group of countries. The paper also proposes a formal test of the drivers of regional inequality, moving beyond a merely descriptive approach. In terms of methodology, the authors use both parametric and semi-parametric estimations, to allow for the possibility that the relationship differs at different levels of GDP.

Moving on to the results, the first thing to note is that three out of four countries in the sample present an inverted U-shaped pattern between GDP per capita and regional inequality (as can be seen in Figure 1).


Figure 1: Regional Income Dispersion and Per-Capita GDP in France, Italy, Spain and Portugal (1860-2010). Source: Díez Minguela et al. (2017)

As for France, the authors suggest that the lack of an inverted U-shaped pattern could be due to its early industrialization, which pre-dates the first benchmark year available (1860): the analysis could thus be capturing only the downward part of the curve. In terms of the econometric analysis, the OLS regression confirms the predicted pattern through the significance of GDP per capita in both its quadratic and cubic forms.
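
In other words, the parametric exercise is an OLS regression of the Williamson index on polynomial terms in GDP per capita. The sketch below uses simulated data rather than the authors' panel, and omits their controls, but shows the shape of such a specification; an inverted U appears as a significant negative coefficient on the squared term:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
log_gdp = np.linspace(7, 10, 60)          # a fake development path
x = log_gdp - log_gdp.mean()              # center to tame collinearity
cv = 1.0 - 0.5 * x ** 2 + rng.normal(0, 0.05, 60)   # built-in inverted U

X = sm.add_constant(np.column_stack([x, x ** 2, x ** 3]))
res = sm.OLS(cv, X).fit()
print(res.summary().tables[1])  # the x**2 coefficient is negative: inverted U
```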

One interesting discussion concerns the controls used in the model: here neither openness to trade nor public expenditure is significant, in spite of both being strong candidates for explaining regional inequality in the economic geography literature (see Rodríguez-Pose, 2012 on trade and Rodríguez-Pose and Ezcurra, 2010 on public spending). For the first variable (openness to trade), the explanation could be that the detrimental effect of trade on regional inequality was offset by the increased integration of financial and labour markets during the First Globalization.

Regarding the second control variable, public intervention (measured as public spending as a share of GDP): the authors admit that having a large public sector does not necessarily imply implementing effective cohesion policies. The example of Fascist Italy on this point is very illustrative: the 1920s and 1930s witnessed rising inequality in Italy, in spite of growing intervention by the State in the economy and an alleged intent to favour the most backward parts of the country. In general, the impression is that mechanisms well documented in post-WWII empirical studies might not have operated in earlier periods. Finally, the authors test for the role of structural change in shaping regional inequality, which was the original explanation offered by Williamson (1965). This is measured as non-agricultural value added, and it is positive and significant in explaining the coefficient of variation of GDP per capita.

Although the paper represents an important step forward in explaining historical regional divergence, several aspects could be addressed in the future by the authors or by other scholars in the field. For instance, the use of only four countries from a specific part of Europe does not yet allow general conclusions to be drawn on the relationship between economic growth and regional inequality in the long run. As mentioned in the paper, several case studies from other parts of Europe do not entirely fit the same path: this is the case of Belgium (Buyst, 2011) or Sweden (Enflo and Missiaia, 2017). It is possible that including more advanced economies such as Britain, or even some peripheral but northern ones, might lead to reconsidering whether rising regional inequality during modern industrial growth is a golden rule.

References

Barrios, S., Strobl, E., 2009. “The Dynamics of Regional Inequalities.” Regional Science and Urban Economics 39 (5), 575-591

Buyst, E., 2011. “Continuity and Change in Regional Disparities in Belgium during the Twentieth Century.” Journal of Historical Geography 37 (3), 329-337

Díez Minguela, A., González-Val, R., Martínez-Galarraga, J., Sanchis, M. T., and Tirado, D. 2017. “The Long-term Relationship Between Economic Development and Regional Inequality: South-West Europe, 1860-2010.” EHES Working Papers in Economic History 119

Enflo, K. and Missiaia, A. 2017. “Between Malthus and the Industrial Take-off: Regional Inequality in Sweden, 1571-1850.” Lund Papers in Economic History

Kuznets, S., 1955. “Economic Growth and Income Inequality.” American Economic Review 45 (1), 1-28

Rodríguez-Pose, A., 2012. “Trade and Regional Inequality.” Economic Geography 88 (2), 109-136

Rodríguez-Pose, A., Ezcurra, R., 2010. “Does Decentralization Matter for Regional Disparities? A Cross-Country Analysis.” Journal of Economic Geography 10 (5), 619-644.

Williamson, J.G., 1965. "Regional Inequality and the Process of National Development: a Description of the Patterns." Economic Development and Cultural Change 13 (4), 1-84

 

Industrialization, Gold, and Empires: Trade Collapse in the Great Recession vs. the Great Depression

Two Great Trade Collapses: The Interwar Period & Great Recession Compared

by Kevin Hjortshøj O’Rourke (All Souls, University of Oxford)

Abstract: In this paper, I offer some preliminary comparisons between the trade collapses of the Great Depression and Great Recession. The commodity composition of the two trade collapses was quite similar, but the latter collapse was much sharper due to the spread of manufacturing across the globe during the intervening period. The increasing importance of manufacturing also meant that the trade collapse was more geographically balanced in the later episode. Protectionism was much more severe during the 1930s than after 2008, and in the UK case at least helped to skew the direction of trade away from multilateralism and towards Empire. This had dangerous political consequences.

URL: https://econpapers.repec.org/paper/cprceprdp/12286.htm

Distributed by NEP-HIS on 2017-09-24

Review by Anna Missiaia

Comparisons between the Great Depression of the 1930s and the Great Recession of 2008-10 have been performed by several scholars interested in the lessons that we could draw from history. Famous examples are Eichengreen's Hall of Mirrors: The Great Depression, The Great Recession, and the Uses-and Misuses-of History, in which the economic policies of the two crises are compared, and Crafts and Fearon's The Great Depression of the 1930s: Lessons for Today, in which contributions from a variety of fields are collected. The paper by Kevin O'Rourke reviewed here contributes to the same line of research by using a large body of empirical evidence on both the Great Depression and the Great Recession to compare the different outcomes on trade of the two crises. In both the 1930s and 2008-10 the level of global trade experienced a contraction; however, the effect was initially more severe in the latter but much more persistent in the former, pointing to different dynamics in the two cases. Figure 1 illustrates the two trajectories.

 

Figure 1: World Trade during the Great Depression and the Great Recession: months after June 1929 and April 2008

According to the author, the strikingly different behavior of trade in the two crises is linked to the different composition of world exports. On the eve of the Great Depression, industrial products accounted for roughly 44% of total trade; by 2007 the same figure had risen to 70%. This matters in light of the different volatilities of these two broad classes of goods. Figure 2 shows world trade divided into manufacturing and non-manufacturing during the Great Depression, while Figure 3 shows the same for the Great Recession.

 

Figure 2: Manufacturing and Non-Manufacturing World Trade during the Great Depression, 1929-1940

Figure 3: Manufacturing and Non-Manufacturing World Trade during the Great Recession, 2008-2015

From these two graphs we see that in both cases non-manufacturing trade (basically composed of agricultural products) did not collapse; rather, it was the manufacturing export sector that suffered the most (in terms of volumes, of course, not prices). The compositional effect therefore explains the much more violent decrease in the first years of the Great Recession, but also the faster recovery (although O'Rourke discusses the former in much more detail than the latter). O'Rourke illustrates this compositional effect using a counterfactual analysis which applies the manufacturing and non-manufacturing shares of 2007 to trade during the Great Depression, showing that the pattern changes substantially with the composition. The different share of manufacturing during the two crises is driven by the catch-up of the periphery, and in particular Asia, which during the Great Recession was much closer to the level of industrialization of the core countries, leading to a more "regionally balanced" shock at the world level.
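
The logic of that counterfactual is simple enough to show in a toy calculation. In the sketch below the sectoral trade paths are invented; only the 44% and 70% manufacturing shares come from the figures quoted above:

```python
import numpy as np

# Invented indexed trade volumes (peak year = 100).
mfg     = np.array([100, 75, 60, 55, 65])   # manufacturing collapses hard
non_mfg = np.array([100, 95, 92, 94, 97])   # agricultural trade holds up

mfg_share_1929, mfg_share_2007 = 0.44, 0.70  # shares quoted in the review

actual  = mfg_share_1929 * mfg + (1 - mfg_share_1929) * non_mfg
counter = mfg_share_2007 * mfg + (1 - mfg_share_2007) * non_mfg
print(actual.round(1))    # milder aggregate collapse with 1929 weights
print(counter.round(1))   # sharper collapse under the 2007 composition
```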

The Great Depression had seen a deterioration of the terms of trade of developing countries, leading to an increase in protectionist measures. O'Rourke suggests that one of the explanations for both the depression and the protectionist measures is found in the monetary regime: the Gold Standard had deprived countries of the possibility of implementing counter-cyclical monetary policies, leaving protectionist policies as the sole tool for countering the slump. The lack of coordination among countries, which went off gold at different times, left the late movers with overvalued currencies, which worsened their position even further. The paper also contains a positive assessment of the crisis response after the 2008 crash, when countries behaved in a much more coordinated fashion and were able to apply monetary and fiscal stimulus, which ultimately led to a much shorter contraction of trade worldwide.

Figure 4: Victims of High Tariffs during the Great Depression.

Using again a counterfactual analysis, O'Rourke (citing his work with de Bromhead et al., 2017) shows that the existence of trading blocs, and notably the British Empire, also led to a "balkanization" of trade during the 1930s. This ultimately led to a contraction of overall trade that was not observed in the much more multilateral trade environment of 2008-10. More multilateralism also led to more efficient specialization worldwide and therefore to a milder effect of the crisis on trade.

The paper provides several policy-oriented results that should be considered in times of economic crisis (and to some extent they cast a positive light on how the latest crisis was handled). The first result is that multilateralism in trade is good for everyone because of its expansive effect on trade. The recent attacks on multilateral trade agreements, for instance through the threat by the US to leave NAFTA or by the UK to leave the EU single market, are dangerous both economically and politically. The paper also contains a historically grounded praise of the monetary and fiscal policies pursued in this latest crisis compared to the detrimental ones of the 1930s. Maybe, after all, we do learn from our mistakes, and this is also thanks to the efforts of economic historians.

Bibliography

Crafts, N. and P. Fearon (2013) The Great Depression of the 1930s: Lessons for Today. Oxford: Oxford University Press.

de Bromhead, A., A. Fernihough, M. Lampe and K. H. O’Rourke (2017) “When Britain Turned Inward: Protection and the Shift towards Empire in Interwar Britain”, CEPR Discussion paper 11835.

Eichengreen, B. (2015) Hall of Mirrors: The Great Depression, The Great Recession, and the Uses-and Misuses-of History. Oxford: Oxford University Press.

 

 

Challenging the Role of Capital Adequacy using Historical Data

Bank Capital Redux: Solvency, Liquidity, and Crisis
By Òscar Jordà (Federal Reserve Bank of San Francisco and University of California Davis), Bjorn Richter (University of Bonn), Moritz Schularick (University of Bonn) and Alan M. Taylor (University of California Davis).

Abstract: Higher capital ratios are unlikely to prevent a financial crisis. This is empirically true both for the entire history of advanced economies between 1870 and 2013 and for the post-WW2 period, and holds both within and between countries. We reach this startling conclusion using newly collected data on the liability side of banks’ balance sheets in 17 countries. A solvency indicator, the capital ratio has no value as a crisis predictor; but we find that liquidity indicators such as the loan-to-deposit ratio and the share of non-deposit funding do signal financial fragility, although they add little predictive power relative to that of credit growth on the asset side of the balance sheet. However, higher capital buffers have social benefits in terms of macro-stability: recoveries from financial crisis recessions are much quicker with higher bank capital.

URL: http://econpapers.repec.org/paper/nbrnberwo/23287.htm

Distributed by NEP-HIS on: 2017-05-07

Review by Tony Gandy (London Institute of Banking and Finance)

In 1990-1991 I started a new job, having nearly completed my PhD (which, I fully admit, took longer than it should have). I joined The Banker, part of the Financial Times group, and proceeded to cover bank statistics, research and bank technology (the latter being a bit of a hobby). Thanks to the fine work of my predecessor, Dr. James Alexander, we had been through a statistical revolution and had revamped our Top 1000 listings of the world's biggest banks, moving to a ranking based on capital rather than assets. This was the zeitgeist of the moment; what counted was capital, an indicator of capacity to lend and absorb losses. We then also ranked banks by the ratio of loss-absorbing capital to total assets to show which were the "strongest" banks. We were modeling this on the progress made by the Basel Committee on Banking Supervision in refocusing banking resilience onto this important ratio, so-called capital adequacy, and acknowledging the development and launch of the original Basel Accord.

All well and good: the role of capital was to absorb losses. On the face of it, whichever bank had the most capital, and whichever could show the best capital adequacy ratio, was clearly the most robust, prudent and advanced manager of risk, and the one able to take on more business.

As the years progressed, Basel 1.5, II, 2.5, III and, arguably, IV have each added to or detracted from the value of capital as a guide to robustness. However, the principle still seemed to stand that, if you had a very large proportion of capital, you could absorb greater losses, making the bank and the wider economic system more robust. Yes, OK, there were weaknesses. Under the original Accord, the only risk being worried about was credit risk, and only in a very rudimentary way. This seemed odd given that one of the events which led to the Basel Accord was the failure of Bankhaus Herstatt [1] and the subsequent market meltdown (Goodhart 2011), which was hard to see in isolation as a credit event. Nevertheless, through all the subsequent crises and reforms to the Basel Accords, the principle stood that a higher proportion of quality capital to assets was a good thing.

Jordà, Richter, Schularick and Taylor challenge the assumption that greater capital adequacy can deflect crises, though they do find that higher initial capital ratios are of great benefit in the post-crisis environment. In this working paper, Jordà et al. create a dataset focusing on the liability side of bank balance sheets, covering a tight definition of Common Equity Tier 1 capital (paid-up capital, retained profit and disclosed reserves), deposits and non-core funding (wholesale funding). This is a powerful collection of numbers. They have collated this data for 14 advanced economies from 1870 through to 2013, and for three others for a slightly shorter period.

One note is that it would have been interesting to see a little more detail on the sources of the data used. Journal papers and academic contributions are acknowledged throughout, but other sources are covered by "journal papers, central bank publications, historical yearbooks from statistical offices, as well as archived annual reports from individual banks". Bank statistics can be a complex area, and some sources have got their definitions wrong (one annual listing of bank capital had an erratum which was nearly as long as the original listing; not mine, I hasten to add, and maybe my memory, as a rival to that publication, somewhat exaggerates!), so a little more detail would be useful. Also, further discussion of the nature of disclosed reserves would be interesting, as one of the key concerns of bank watchers in the past has been the tendency of banks not to disclose reserves or their purposes.

Jordà et al.'s findings are stark. Firstly, and least surprisingly, bank leverage has greatly increased. The average capital ratio in the dataset hovered at around 30% of unadjusted assets in the early period, falling to 10% in the post-war years and more recently hovering around 5-10%.


Source: Jordà et al. (2017)

Next, they consider the relevance of capital adequacy as a protection for banks and a predictor of a banking system's robustness: does a high, prudent level of capital reduce the chances of a financial crisis? The authors note the traditional argument that higher levels of capital indicate a robust banking system able to absorb unexpected losses, thus reducing the chance of a financial crisis, but also note that high capital levels could equally indicate a banking system taking greater risks and therefore needing greater amounts of capital to survive them. They find no statistical link between higher capital ratios and a lower risk of systemic financial crisis; indeed, they find limited evidence that the relationship could run the other way. It is worth noting a second time: increasing capital ratios do not indicate a lower risk of a financial crisis.
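
The horse race behind that result can be pictured as a crisis-prediction logit on lagged balance-sheet variables. The simulation below is emphatically not the authors' specification (they use a long country-year panel); it simply builds their headline finding into fake data, crisis risk loading on credit growth but not on the capital ratio, and shows the shape of such a regression:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
capital_ratio = rng.normal(0.08, 0.03, n)   # lagged capital/assets
credit_growth = rng.normal(0.05, 0.04, n)   # lagged real credit growth
# Crisis risk depends on credit growth only, by construction.
p = 1 / (1 + np.exp(-(-4 + 25 * credit_growth)))
crisis = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([capital_ratio, credit_growth]))
res = sm.Logit(crisis, X).fit(disp=False)
print(res.params)  # capital-ratio coefficient is near zero by construction
```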

The authors do note, however, that high and rapidly increasing loan-to-deposit ratios are a significant indicator of future financial distress. Clearly, funding a bubble is a bad idea, though it can be hard to resist.

However, capital can have a positive role. The paper finds that systems which start with higher levels of leverage (and consequently lower capital ratios) will find recovery after a crisis harder as banks struggle to maintain solvency and liquidate assets at a greater rate. Thus, while a high capital adequacy ratio may not be a protection against a systemic crisis, it can provide some insight into the performance of an economy after a crunch as banks with higher capital ratios may not face the same pressure to sell and further deflate asset prices and economic activity. Therefore, capital can have a positive role!


Source: Jordà et al. (2017)

I won't pretend to fully understand the statistical analysis presented in this paper. Many, including those at the Basel Committee, have recognised the folly of tackling prudential control through a purely credit-risk focus on capital adequacy, and have introduced new liquidity, leverage and scenario-planning structures to deflect other routes to crisis. Nevertheless, Jordà et al. provide a vital insight into what is still the very core of the prudential control regime: the value, or not, of capital in providing protection to banks and banking systems. Its role may not be what we expected: its value lies in the post-crisis environment, not the pre-crisis environment, where higher requirements could have been expected to head off problems. Instead they find that it is credit booms, and indicators of them such as rapidly rising loan-to-deposit ratios, which are better indicators of looming crisis, while capital is more relevant to making brief the impact of an unravelling bubble.

On a more practical note, this fascinating paper offers those who teach prudential regulation to bankers or students a wealth of data and challenges to consider, a welcome resource indeed.

Notes:

[1] The other main response was the formation of the first netting services and then the Continuous Linked Settlement (CLS) Bank as a method of improving operations to remove the risk which became known as "Herstatt Risk".

References
Goodhart, Charles (2011) The Basel Committee on Banking Supervision: a history of the early years, 1974–1997. Cambridge University Press, Cambridge, UK

 

No man can serve two masters

Rogue Trading at Lloyds Bank International, 1974: Operational Risk in Volatile Markets

By Catherine Schenk (Glasgow)

Abstract: Rogue trading has been a persistent feature of international financial markets over the past thirty years, but there is remarkably little historical treatment of this phenomenon. To begin to fill this gap, evidence from company and official archives is used to expose the anatomy of a rogue trading scandal at Lloyds Bank International in 1974. The rush to internationalize, the conflict between rules and norms, and the failure of internal and external checks all contributed to the largest single loss of any British bank to that time. The analysis highlights the dangers of inconsistent norms and rules even when personal financial gain is not the main motive for fraud, and shows the important links between operational and market risk. This scandal had an important role in alerting the Bank of England and U.K. Treasury to gaps in prudential supervision at the end of the Bretton Woods pegged exchange-rate system.

Business History Review, Volume 91 (1 – April 2017): 105-128.

DOI: https://doi.org/10.1017/S0007680517000381

Review by Adrian E. Tschoegl (The Wharton School of the University of Pennsylvania)

Since the 1974 rogue trading scandal at Lloyds' Lugano branch we have seen more spectacular sums lost in rogue trading scandals. What Dr Catherine Schenk brings to our understanding of these recurrent events is the insight that only drawing on archives, both at Lloyds and at the Bank of England, can bring. In particular, the archives illuminate the decision processes at both institutions as the crisis unfolded. I have little to add to her thorough exposition of the detail, so below I will limit myself to imprecise generalities.

Marc Colombo, the rogue trader at Lloyds Lugano, was a peripheral individual in a peripheral product line, in a peripheral location. As Schenk finds, this peripherality has two consequences: the rogue trader's quest for respect, and the problem of supervision. Lloyds Lugano is not an anomaly. An examination of several other cases (e.g. Allied Irish, Barings, Daiwa, and Sumitomo Trading) finds the same thing (Tschoegl 2004).

In firms, respect and power come from being a revenue center. Being a cost center is the worst position, but being a profit center with a mandate to do very little is not much better. The rogue traders that have garnered the most attention, in large part because of the scale of their losses, were not malevolent. They wanted to be valued. They were able to get away with their trading for long enough to do serious damage because of a lack of supervision, a lack that existed because of the traders' peripherality.

In several cases, Colombo’s amongst them, the trader was head of essentially a one-person operation that was independent of the rest of the local organization. That meant that the trader’s immediate local supervisor had little or no experience with trading. Heads of branches in a commercial bank come from commercial banking, especially commercial lending. Commercial lending is a slow feedback environment (it may take a long time for a bad decision to manifest itself), and so uses a system of multiple approvals. Trading is a fast feedback environment. The two environments draw different personality types and have quite different procedures, with the trading environment giving traders a great deal of autonomy within set parameters, an issue Schenk addresses and that we will discuss shortly.

Commonly, traders will report both to a remote head of trading and to the local branch manager, with the primary line being to the head of trading and the secondary line to the local branch manager. This matrix management developed to address the need to manage and coordinate centrally while also responding locally, but matrix management has its limitations too. As Matthew points out in the New Testament, "No man can serve two masters, for either he will hate the one, and love the other; or else he will hold to the one, and despise the other" (Matthew 6:24). Even short of this, the issue that can arise, as it did at Lloyds Lugano, is that the trader is remote from both managers: from one because of distance (and often time zone), and from the other because of unfamiliarity with the product line. A number of software developments have improved the situation since 1974, but as some recent scandals have shown, they are fallible. Furthermore, the issue remains that at some point the heads of many product lines will report to someone who rose in a different product line, which brings up the spectre of "too complex to manage".

The issue of precautionary or governance rules, and their non-enforcement, is a clear theme in Schenk's paper. Like the problem of supervision, this too is an issue one can only do better or worse at, not solve. All rules have their cost. The largest may be an opportunity cost. Governance rules exist to reduce variance, but that means the price of reducing bad outcomes is a lower occurrence of good outcomes. While it is true, as one of Schenk's interviewees points out, that one does not hear of successful rogue traders being fired, that does not mean that firms do not respond negatively to success. I happened to be working for SBCI, an investment banking arm of Swiss Bank Corporation (SBC), at the time of SBC's acquisition in 1992 of O'Connor Partners, a Chicago-based derivatives trading house. I had the opportunity to speak with O'Connor's head of training when O'Connor stationed a team of traders at SBCI in Tokyo. He said that the firm examined overly large wins as intently as it examined overly large losses: in either case an unexpectedly large outcome meant that either the firm had mis-modelled the trade, or the trader had gone outside their limits. Furthermore, what they looked for in traders was the ability to walk away from a losing bet.

But even small costs can be a problem for a small operation. When I started to work for Security Pacific National Bank in 1976, my supervisor explained my employment benefits to me. I was authorized two weeks of paid leave per annum. When I asked if I could split up the time, he replied that Federal Reserve regulations required that the two weeks be continuous, so that someone would have to fill in for the absent employee. Even though most of the major rogue trading scandals arose and collapsed within a calendar year, the shadow of the future might well have discouraged the traders, or led them to reveal the problem earlier. Still, for a one-person operation, management might (and in some rogue trading scandals did) take the position that finding someone to fill in and bringing them in on temporary duty was unnecessarily cumbersome and expensive. After all, the trader to be replaced was a dedicated, conscientious employee, witness his willingness to forgo any vacation.

Lastly, there is the issue of Chesterton’s Paradox (Chesterton 1929). When a rule has been in place for some time, there may be no one who remembers why it is there. Reformers will point out that the rule or practice is inconvenient or costly, and that it has never in living memory had any visible effect. But as Chesterton puts it, “This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable.”

Finally, an issue one needs to keep in mind in deciding how much to expend on prevention is that speculative trading is a zero-sum activity. A well-diversified shareholder who owns both the employer of the rogue trader and the employers of their counterparties suffers little loss. The losses to Lloyds Lugano were gains to, inter alia, Crédit Lyonnais.

There is leakage. Some of the gainers are privately held hedge funds and the like. Traders at the counterparties receive bonuses not for skill but merely for taking the opposite side of the incompetent rogue trader's orders. Lastly, shareholders of the rogue trader's firm suffer the deadweight losses of bankruptcy when the firm, such as Barings, goes bankrupt. Still, as Krawiec (2000) points out, for regulators the social benefit of preventing losses to rogue traders may not exceed the cost. To the degree that costs matter to managers but not to shareholders, managers should bear the costs via reduced salaries.

References

Chesterton, G. K. (1929). The Thing: Why I Am a Catholic, Ch. IV: "The Drift from Domesticity".

Krawiec, K.D. (2000): “Accounting for Greed: Unraveling the Rogue Trader Mystery”, Oregon Law Review 79 (2):301-339.

Tschoegl, A.E. (2004). "The Key to Risk Management: Management". In Michael Frenkel, Ulrich Hommel and Markus Rudolf, eds., Risk Management: Challenge and Opportunity, 2nd edition (Springer-Verlag).

Debt forgiveness in the German mirror

The Economic Consequences of the 1953 London Debt Agreement

By Gregori Galofré-Vilà (Oxford), Martin McKee (London School of Hygiene and Tropical Medicine), Chris Meissner (UC Davis) and David Stuckler (Oxford)

Abstract: In 1953 the Western Allied powers implemented a radical debt-relief plan that would, in due course, eliminate half of West Germany’s external debt and create a series of favourable debt repayment conditions. The London Debt Agreement (LDA) correlated with West Germany experiencing the highest rate of economic growth recorded in Europe in the 1950s and 1960s. In this paper we examine the economic consequences of this historical episode. We use new data compiled from the monthly reports of the Deutsche Bundesbank from 1948 to the 1960s. These reports not only provide detailed statistics of the German finances, but also a narrative on the evolution of the German economy on a monthly basis. These sources also contain special issues on the LDA, highlighting contemporaries’ interest in the state of German public finances and public opinion on the debt negotiation. We find evidence that debt relief in the LDA spurred economic growth in three main ways: creating fiscal space for public investment; lowering costs of borrowing; and stabilising inflation. Using difference-in-differences regression models comparing pre- and post LDA years, we find that the LDA was associated with a substantial rise in real per capita social expenditure, in health, education, housing, and economic development, this rise being significantly over and above changes in other types of spending that include military expenditure. We further observe that benchmark yields on long-term debt, an indication of default risk, dropped substantially in West Germany when LDA negotiations began in 1951 and then stabilised at historically low rates after the LDA was ratified. The LDA coincided with new foreign borrowing and investment, which in turn helped promote economic growth. Finally, the German currency, the deutschmark, introduced in 1948, had been highly volatile until 1953, after which time we find it largely stabilised.

URL: http://EconPapers.repec.org/RePEc:nbr:nberwo:22557

Distributed by NEP-HIS on 2016-09-04

Review by Natacha Postel-Vinay (LSE)

The question of debt forgiveness is one that has drawn increased attention in recent years. Some have contended that the semi-permanent restructuring of Greece’s debt has been counterproductive and that what Greece needs is at least a partial cancellation of its debt. This, it is argued, would allow both faster growth and a higher likelihood of any remaining debt repayment. Any insistence on the part of creditors for Greece to pay back the full amount through austerity measures would be self-defeating.

One problem with this view is that we know very little about whether debt forgiveness can lead to faster growth. Reinhart and Trebesch (2014) test this assumption for 45 countries between 1920-1939 and 1978-2010, and do find a positive relationship. However, they leave aside a particularly striking case: that of Germany in the 1950s, which benefited from one of the most generous write-offs in history while experiencing "miracle" growth of about 8% in subsequent years. This case has attracted much attention recently given German leaders' own insistence on Greek debt repayments (see in particular Ritschl, 2011; 2012; Guinnane, 2015).

Eichengreen and Ritschl (2009), rejecting several popular theories of the German miracle, such as a reallocation of labour from agriculture to industry or the weakening of labour market rigidities, already hypothesized that such debt relief may have been an important factor in Germany’s super-fast and sustained post-war growth. Using new data from the monthly reports of the Deutsche Bundesbank from 1948 to the 1960s, Gregori Galofré-Vilà, Martin McKee, Chris Meissner and David Stuckler (2016) attempt to formally test this assumption, and are quite successful in doing so.

By the end of WWII Germany had accumulated debt to Europe worth nearly 40% of its 1938 GDP, a substantial amount of which consisted of reparation relics from WWI. Some argued at the time that these reparations, and creditors' stubbornness, had plagued the German economy, which in the early 1930s felt constrained to implement harsh austerity measures, thus contributing to the rise of the National Socialists to power. It was partly to avoid a repeat of these events that the US designed the Marshall Plan to help the economic reconstruction of Europe after WWII.


 

Marshall aid to Europe between 1948 and 1951 was less substantial than is commonly thought, but it came with strings attached which may have indirectly contributed to German growth. In particular, one of the conditions France and the UK had to fulfil in order to become recipients of Marshall aid was acceptance that Germany would not pay back any of its debt until it reimbursed its own Marshall aid. Currency reform in 1948 and the setting up of the European Payments Union facilitated this process.

Then came the London Debt Agreement, in 1953, which stipulated generous conditions for the repayment of half the amount due from Germany. Notably, it completely froze the other half, at least until reunification, which parties to the agreement expected would take decades to occur. In the event, no conference was held in 1990 to settle the remainder.


Galofré-Vilà et al. admit they are not able to directly test the hypothesis that German debt relief led to faster growth. Instead, making use of simple graphs, they look at how the 1953 London Debt Agreement (LDA) led to lower borrowing costs and lower inflation, both of which come out clearly and quite persistently in the charts.

Perhaps more importantly, they measure the extent to which the LDA freed up space for social welfare investment. For this, they exploit the fact that Marshall aid had mainly been used for infrastructure building, so that the big difference the LDA made to state expenditure should show up in health, education, "economic development," and housing. They then compare the amount of spending on these four heads to spending in ten other categories before 1953, and check whether this difference gets any larger after the LDA. Perhaps unsurprisingly, it does, and significantly so.
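
In regression form, that design is a difference-in-differences with spending categories as units, the four LDA-linked heads as the treated group, and 1953 as the treatment date. This bare-bones sketch runs on simulated data, not the authors' Bundesbank series, and leaves out their controls:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
treated_cats = ["health", "education", "housing", "development"]
cats = treated_cats + [f"other{i}" for i in range(10)]

rows = []
for cat in cats:
    treated = int(cat in treated_cats)
    for year in range(1948, 1960):
        post = int(year >= 1953)          # LDA ratified in 1953
        spend = 1.0 + 0.1 * post + 0.5 * treated \
                + 0.8 * treated * post + rng.normal(0, 0.1)
        rows.append({"treated": treated, "post": post, "spend": spend})

df = pd.DataFrame(rows)
res = smf.ols("spend ~ treated * post", data=df).fit()
print(res.params["treated:post"])  # DiD estimate, ~0.8 by construction
```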

This way of testing the hypothesis that the LDA helped the German economy may strike some as too indirect and therefore insufficient. This is without mentioning possible minor criticisms, such as the fact that housing expenditure is included in the treatment rather than the control group (despite the 1950 Housing Act), or that the LDA is chosen as the key event despite the importance of the Marshall Plan's earlier debt relief measures.

Nevertheless, testing such a hypothesis is necessarily a very difficult task, and Galofré-Vilà et al.'s empirical design can be considered quite creative. They are of course aware that this cannot be the end of the story, and they are careful to caution readers against hasty extrapolations from the post-war German case to the current Spanish or Greek situation. Some of their arguments have somewhat unclear implications (for instance, that Germany represented 15% of the Western population at the time, whereas the Greek population represents only 2% today).


Perhaps a stronger argument would be that Germany's post-war debt was of a different character than Greece's current debt: some would even call it "excusable" because it was mainly war debt; it was not (at least arguably) a result of past spending excesses. For this reason, one may at least ask whether debt forgiveness in the Greek context would have the same, almost non-existent, moral hazard effects as in the German case. Interestingly, the authors point out that German debt repayment after the LDA was linked to Germany's economic growth and exports (so that the debt service/export revenue ratio could not exceed 3%). This sort of conditionality is, strangely, something of a rarity among today's sovereign debt contracts. It could be seen as a possible solution to fears of moral hazard, thereby mitigating any differences in the efficiency of debt relief stemming from differences in the nature of the debt contracted.

 

References

Eichengreen, B., & Ritschl, A. (2009). Understanding West German economic growth in the 1950s. Cliometrica, 3(3), 191-219.

Guinnane, T. W. (2015). Financial vergangenheitsbewältigung: the 1953 London debt agreement. Yale University Economic Growth Center Discussion Paper, (880).

Reinhart, C. M., & Trebesch, C. (2014). A distant mirror of debt, default, and relief (No. w20577). National Bureau of Economic Research.

Ritschl, A. (2011). "Germany owes Greece a debt." The Guardian, 21 June 2011.

Ritschl, A. (2012). "Germany, Greece and the Marshall Plan." The Economist, 15 June 2012.

Lessons from ‘Too Big to Fail’ in the 1980s

Can a bank run be stopped? Government guarantees and the run on Continental Illinois

Mark A Carlson (Bank for International Settlements) and Jonathan Rose (Board of Governors of the Federal Reserve)

Abstract: This paper analyzes the run on Continental Illinois in 1984. We find that the run slowed but did not stop following an extraordinary government intervention, which included the guarantee of all liabilities of the bank and a commitment to provide ongoing liquidity support. Continental's outflows were driven by a broad set of US and foreign financial institutions. These were large, sophisticated creditors with holdings far in excess of the insurance limit. During the initial run, creditors with relatively liquid balance sheets nevertheless withdrew more than other creditors, likely reflecting low tolerance to hold illiquid assets. In addition, smaller and more distant creditors were more likely to withdraw. In the second and more drawn out phase of the run, institutions with relatively large exposures to Continental were more likely to withdraw, reflecting a general unwillingness to have an outsized exposure to a troubled institution even in the absence of credit risk. Finally, we show that the concentration of holdings of Continental's liabilities was a key dynamic in the run and was importantly linked to Continental's systemic importance.

URL: http://EconPapers.repec.org/RePEc:bis:biswps:554

Distributed by NEP-HIS on 2016-04-16

Review by Anthony Gandy (ifs University College)

I have to thank Bernardo Batiz-Lazo for spotting this paper and circulating it through NEP-HIS; my interest in it is less research-focused than teaching-focused. Having the honour of teaching bankers about banking, sometimes I am asked questions which I find difficult to answer. One such question has been: why are inter-bank flows seen as less volatile than consumer deposits? In this very accessible paper, Carlson and Rose answer this question by analysing the reality of a bank run, looking at the raw data from the treasury department of a bank which did indeed suffer one: Continental Illinois, which became the biggest banking failure in US history when it flopped in 1984.


For the business historian, the paper may lack a little character, as it rather skimps over the cause of Continental's demise, though this has been covered by many others, including the Federal Deposit Insurance Corporation (1997). The paper briefly explains the problems Continental faced in building a large portfolio of assets in both the oil and gas sector and developing nations in Latin America. A key factor in the failure of Continental in 1984 was the 1982 failure of the small Penn Square Bank of Oklahoma. Cushing, Oklahoma is, quite literally, the hub (and one-time bottleneck) of the US oil and gas sector: the massive storage facility in that location became the settlement point for the pricing of West Texas Intermediate (WTI), also known as Texas light sweet, oil. Penn Square focused on the oil sector and sold assets to Continental, according to the FDIC (1997), to the tune of $1bn. Confidence in Continental was further eroded by the default of Mexico in 1982, which undermined the perceived quality of its emerging market assets.

Depositors queuing outside the insolvent Penn Square Bank (1982)

In 1984 the failure of Penn Square translated into the failure of the 7th largest bank in the US, Continental Illinois. This was a great illustration of contagion, but contagion which was contained by the central authorities and, earlier, a panel of supporting banks. Many popular articles on Continental do an excellent job of explaining why its assets deteriorated and then vaguely discuss the concept of contagion. The real value of the paper by Carlson and Rose comes from their analysis of the liability side of the balance sheet (sections 3 to 6 in the paper). Carlson and Rose take great care in detailing the make-up of those liabilities and the behaviour of different groups of liability holders. For instance, early in the crisis 16 banks announced they were advancing $4.5bn in short-term credit. But as the crisis went on, the regulators (the Federal Deposit Insurance Corporation, the Federal Reserve and the Office of the Comptroller of the Currency) were required to step in and provide a wide-ranging guarantee. This was essential as the bank had few small depositors who, in turn, could rely on the then $100,000 depositor guarantee scheme.


It would be very easy to pause and take in the implications of Table 1 in the paper. It shows that on 31 March 1984 Continental had a most remarkable liability structure: with $10.0bn of domestic deposits, it funded most of its book through $18.5bn of foreign deposits, together with smaller amounts of other wholesale funding. The research conducted by Carlson and Rose shows that the intolerance of international lenders did become a factor, but it was only one of a number of effects. In section 6 of the paper they look at the impact of funding concentration. The largest ten depositors funded Continental to the tune of $3.4bn and the largest 25 to $6bn, or 16% of deposits. Half of these were foreign banks and the rest were split between domestic banks, money market funds and foreign governments.
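
The concentration figures reduce to simple arithmetic, which a few lines make explicit (the inputs are the round numbers quoted above; the implied deposit base is my back-of-the-envelope inference, not a figure from the paper):

```python
# Back-of-the-envelope arithmetic from the round numbers quoted above;
# the implied deposit base is an inference, not a figure from the paper.
top10, top25 = 3.4, 6.0        # $bn held by the largest 10 / 25 depositors
top25_share = 0.16             # the top 25 held 16% of deposits
total = top25 / top25_share    # implied deposit base: ~$37.5bn
print(f"implied deposits ~ ${total:.1f}bn; top-10 share ~ {top10 / total:.1%}")
```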

Initially, run-off from the largest creditors was an important challenge. But this was related to liquidity preference: those institutions which needed to retain a highly liquid position were quick to move their deposits out of Continental. One can only speculate that these withdrawals would mostly have been made by money market funds. Only later, in a more protracted run-off which took place even after the interventions, do the size of the exposure and distance play a disproportionate role. What is clear is the unwillingness of distant banks to retain exposure to a failing institution. After the initial banking sector intervention and then the US central authorities' intervention, foreign deposits rapidly declined.

It's a detailed study, one which can be used to illustrate to students both issues of liquidity preference and the rationale for the structure of the new prudential liquidity ratios, especially the Net Stable Funding Ratio. It can also be used to illustrate the problems of concentration risk, but I would enliven the discussion with the more colourful experience of Penn Square Bank, a bank famed for drinking beer out of cowboy boots!

References

Federal Deposit Insurance Corporation, 1997. Chapter 7, "Continental Illinois and 'Too Big to Fail'". In: History of the Eighties: Lessons for the Future, Volume 1. Available online at: https://www.fdic.gov/bank/historical/history/vol1.html

More general reads on Continental and Penn Square:

Huber, R. L. (1992). "How Continental Bank outsourced its 'crown jewels'." Harvard Business Review, 71(1), 121-129.

Aharony, J., & Swary, I. (1996). Additional evidence on the information-based contagion effects of bank failures. Journal of Banking & Finance, 20(1), 57-69.