A Farewell to Arms? The Consequences of Warfare in Sub-Saharan Africa

Is Africa Different? Historical Conflict and State Development


Mark Dincecco (University of Michigan dincecco@umich.edu)

James Fenske (University of Oxford james.fenske@economics.ox.ac.uk)

Massimiliano Gaetano Onorato (IMT Institute for Advanced Studies Lucca massimiliano.onorato@imtlucca.it)

ABSTRACT: We show that the consequences of historical warfare for state development differ for Sub-Saharan Africa. We identify the locations of more than 1,500 conflicts in Africa, Asia, and Europe from 1400 to 1799. We find that historical warfare predicts common-interest states defined by high fiscal capacity and low civil conflict across much of the Old World. For Sub-Saharan Africa, historical warfare predicts special-interest states defined by high fiscal capacity and high civil conflict. Our results offer new evidence about where and when war makes states.

URL:  http://d.repec.org/n?u=RePEc:ial:wpaper:8/2015&r=all

Distributed by NEP-HIS on 2015-09-05

Review by Anna Missiaia

The consequences of war for the development of nations have been gaining increasing attention in both Economics and Economic History. This paper by Dincecco, Fenske and Onorato, distributed on NEP-HIS on 2015-09-05, studies the consequences of wars on state development in Sub-Saharan Africa.

The paper draws on a rather large body of research developed within the field of Political Economics. The standard account, mostly focused on the European experience, predicts that warfare leads, after the end of a conflict, to greater fiscal capacity and less civil conflict. The mechanism was first studied for Europe in the period 1500-1800 by Tilly (1992). Rulers generally faced few political consequences from defeat, at least until the early 1800s, when Napoleon started replacing monarchs who had lost wars. Before then, wars were a quite regular phenomenon. Wars led to an expansion of the sources of taxation, which was easily maintained in peacetime. This enabled European states to enforce internal security more effectively, lowering civil conflict. The major implication of this perspective is that countries that experienced more wars in the past today show greater fiscal capacity and less civil conflict (Fearon and Laitin, 2014; Besley and Persson, 2015).

As noted, existing research focuses on Europe, so it is interesting to see that Dincecco, Fenske and Onorato (DFO) find different results when applying the same premises to Sub-Saharan Africa. The paper by DFO begins by presenting two opposing views. On the one hand, there is evidence that the standard account – more wars in the past lead to greater fiscal capacity and less conflict today – also applies to Sub-Saharan Africa. Specifically, Michalopoulos and Papaioannou (2011) document evidence suggesting that more conflicts led to more state centralization, while Bates (2014) suggests that more centralized states are the most developed on the African continent. On the other hand, the opposing view focuses on a series of characteristics of the Sub-Saharan region (such as the slave trade and colonization) that are responsible for the failure of the standard account to explain the trajectory of African states.


The Battle of Rocroi, by Augusto Ferrer-Dalmau.

The paper by DFO takes a comparative approach, testing the relationship between historical warfare and state development across several continents. The empirical strategy is rather intuitive: take four measures of the fiscal capacity of states today and regress them on the number of conflicts that affected each region. They include a set of standard controls (latitude, population density, arable land and so on) as well as continental fixed effects. The same procedure is then repeated for three measures of civil conflict today.
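In stylized form (the notation here is mine, not DFO's exact specification), each of these cross-country regressions can be written as:

```latex
y_i = \alpha + \beta \,\mathrm{Conflicts}_i + X_i'\gamma + \delta_{c(i)} + \varepsilon_i
```

where $y_i$ is one of the fiscal-capacity (or civil-conflict) measures for country $i$, $\mathrm{Conflicts}_i$ counts the historical conflicts (1400-1799) located on its territory, $X_i$ collects the geographic controls, and $\delta_{c(i)}$ is the fixed effect for country $i$'s continent. The coefficient of interest is $\beta$.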

The first result is that fiscal capacity today is higher in all continents for countries that experienced more wars in the past; Sub-Saharan Africa is no exception here. The second result, on civil conflict, is different: here, unlike the other continents, Sub-Saharan Africa shows a positive correlation between historical warfare and civil conflict today.

DFO are well aware of the possible shortcomings of their strategy, which are shared with virtually all works trying to explain outcomes today through institutional arrangements from the past (one above all, Acemoglu et al., 2001). Dincecco and coauthors provide a comprehensive list of robustness checks by adding further observable controls. They also acknowledge that, in spite of these controls, unobservable characteristics related to both historical warfare and present state development might still bias their results. They apply a quite interesting methodology to give an idea of the potential bias: they provide a measure, used by authors like Nunn and Wantchekon (2011), that estimates how much greater the impact of unobservable variables would have to be, relative to the observables, to explain the variation in the data. The result is that unobservable variables would need to have a nearly 20 times stronger impact to explain the variation in the sample. This result of course does not rule out that some of these variables play a role, but it reassures us that a fair amount of the explanatory power lies in the observable variables. Another remarkable feature of the paper by DFO is that it addresses the issue of the time span between the dependent and the explanatory variables. This is in a way a structural issue of this whole branch of research, and it is always reassuring to see authors taking it into account. They do so by running the model with intermediate outcomes (around the beginning of the 20th century) and showing that these displayed a pattern similar to today's.
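The selection-on-unobservables measure referred to here, in the form popularized by Altonji, Elder and Taber and applied by Nunn and Wantchekon (2011), can be rendered as follows (my notation, not DFO's):

```latex
\text{ratio} \;=\; \frac{\hat{\beta}_{F}}{\hat{\beta}_{R} - \hat{\beta}_{F}}
```

where $\hat{\beta}_{R}$ is the coefficient of interest estimated with a restricted control set and $\hat{\beta}_{F}$ the one estimated with the full control set. If adding observables barely moves the coefficient ($\hat{\beta}_{F}$ close to $\hat{\beta}_{R}$), the ratio is large: selection on unobservables would have to be many times stronger than selection on observables to drive the estimated effect to zero. DFO report a ratio of roughly 20.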


Somalia’s 1991 civil war

DFO also provide a tentative explanation of why states in Sub-Saharan Africa might behave differently from European ones. They do so by including measures of democratization, ethnic fractionalization and social trust as controls in the regression, adding these one by one and looking at the effect of each control on the magnitude of the coefficients of interest. The only control here that seems to have an effect on the coefficients is social trust. However, the authors interpret the result with caution because of the small sample size (here only Sub-Saharan Africa is included, lowering the number of observations to only 47).

Regarding the use of measures of social trust to explain the relationship between warfare and fiscal capacity/civil conflict today, I would also be worried about two other points: firstly, the measure of social trust is based on surveys from relatively recent times (the 1980s onward), while the relationship tested is between historical warfare and fiscal capacity/civil conflict today; secondly, this measure could be highly collinear with the variables considered (of course, the usual caveats on reverse causality that are typical of this line of research also apply here).

To conclude, the paper by DFO contributes to the debate within Political Economics by quantitatively testing a well-established narrative on a region of the world that is very different from the standard one used in the past (meaning empirical studies based on Europe). By doing so, it finds that Sub-Saharan Africa experienced a different dynamic that led to a different outcome today. The paper also shows very careful work on the data used and addresses several sources of criticism. A possible next step could be to take further the analysis of the mechanism through which war impacts state development.


Acemoglu, D., S. Johnson and J. Robinson (2001). “The Colonial Origins of Comparative Development: An Empirical Investigation.” American Economic Review, 91: 1369-1401.

Bates, R. (2014). “The Imperial Peace,” in E. Akyeampong, R. Bates, N. Nunn, and J. Robinson, eds., Africa’s Development in Historical Perspective, pp. 424-46, Cambridge: Cambridge University Press.

Besley, T. and T. Persson (2015). “State Capacity, Institutions, and Development.” The Political Economist Newsletter.

Fearon, J. and D. Laitin (2014). “Does Contemporary Armed Conflict Have Deep Historical Roots?” Working paper, Stanford University.

Michalopoulos, S. and E. Papaioannou (2011). “The Long-Run Effects of the Scramble for Africa.” NBER Working Paper 17620.

Nunn, N. and L. Wantchekon (2011). “The Slave Trade and the Origins of Mistrust in Africa.” American Economic Review, 101: 3221-52.

Tilly, C. (1992). Coercion, Capital, and European States, AD 990-1992. Cambridge, MA: Blackwell.


A Pre-Protestant Ethic?

Breaking the piggy bank: What can historical and archaeological sources tell us about late‑medieval saving behaviour?

By Jaco Zuijderduijn and Roos van Oosten (both at Leiden University)


Using historical and archeological sources, we study saving behaviour in late-medieval Holland. Historical sources show that well before the Reformation – and the alleged emergence of a ‘Protestant ethic’ – many households from middling groups in society reported savings worth at least several months’ wages of a skilled worker. That these findings must be interpreted as an exponent of saving behaviour – as an economic strategy – is confirmed by an analysis of finds of money boxes: 14th and 15th-century cesspits used by middling-group and elite households usually contain pieces of money boxes. We argue this is particularly strong evidence of late-medieval saving strategies, as money boxes must be considered as ‘self-disciplining’ objects: breaking the piggy bank involved expenses and put a penalty on spending. We also show that the use of money boxes declined over time: they are no longer found in early-modern cesspits. We formulate two hypotheses to explain long-term shifts in saving behaviour: 1) late-medieval socioeconomic conditions were more conducive for small-time saving than those of the early-modern period, 2) in the early-modern Dutch Republic small-time saving was substituted by craft guild insurance schemes.

URL: EconPapers.repec.org/RePEc:ucg:wpaper:0065

Circulated by NEP-HIS on 2015-06-20

Review by Stuart Henderson (Queen’s University Belfast)

Thrift is a central tenet of Max Weber’s Protestant-ethic thesis. That is, characterised by a new asceticism, Protestantism, and specifically Calvinism, encouraged capital accumulation by promoting saving and limiting excessive consumption. However, a recent paper by Jaco Zuijderduijn and Roos van Oosten, distributed by NEP-HIS on 2015-06-20, challenges this notion. It suggests that a saving ethic was already evident in Holland in the late‑medieval period – well before the Reformation years – and that it actually diminished with the coming of Protestantism.

Banking in Genoa in the 14th century.

Such contradiction of the Weberian thesis is common in the literature, with recent scholarship finding no Protestant effect (Cantoni, forthcoming) or proposing an alternative causal mechanism (Becker and Woessmann, 2009). However, Zuijderduijn and van Oosten’s work adds a fresh perspective by focusing on savings and saving behaviour, and by employing a pre‑versus‑post investigation strategy. Notably, in relation to saving, the literature has generally been more sympathetic to the Weberian thesis, with Delacroix and Nielsen (2001) finding a positive Protestant saving effect, and more recent work by Renneboog and Spaenjers (2012) suggesting that Protestants have a heightened awareness of financial responsibility. Furthermore, the idea of a pre-Protestant ethic, as raised in this paper, has also been advocated in other inquiry. For example, Andersen et al. (2015) suggest that the Catholic Order of Cistercians propagated a Weberian-like cultural change in the appreciation of hard work and thrift before the coming of Protestantism – an analogy which Weber himself noted – and highlight how this had a long‑run effect on development.

Bernard of Clairvaux (1090–1153 C.E.) belonged to the Cistercian Order of Benedictine monks.

In their novel approach, Zuijderduijn and van Oosten utilise both historical and archaeological sources to examine savings and saving behaviour over a period which envelops the coming of the Reformation. This enables them to deal with two principal issues: first, the size and social distribution of savings, by utilising tax records for the Dutch town of Edam and its surrounding area; and second, whether saving was strategic (or instead due to an inability to spend), by utilising archaeological evidence on the prevalence of money boxes in cesspits for several Dutch towns. Both sources yield complementary results.

The tax records reveal that middling groups were generally accumulating savings in excess of several months of a skilled worker’s wage well in advance of the Reformation. However, between 1514 and 1563, with the coming of Protestantism, the proportion of households holding cash actually fell, despite a rise in average sums held. Unsurprisingly, cash holding was consistently more common among the wealthier groups in society across all years. See figure 3 from the paper below.

Figure 3

While these tax records reveal the extent of saving, it is the archaeological evidence on money box prevalence which provides a means to link this cash holding with saving behaviour due to the disciplining process involved. Breaking the money box meant incurring an expense, and thus penalised spending. Complementing the historical evidence, Zuijderduijn and van Oosten find that, despite their early prevalence, money boxes decline and eventually disappear by the early‑modern period. Moreover, wealthier households, as gauged from the type of material lining the cesspit, tended to save more than poorer households. See figure 6 from the paper below. (Note: brick-lined cesspits were relatively expensive, wood-lined cesspits were less expensive, and unlined cesspits were least expensive.)

Figure 6

Though Zuijderduijn and van Oosten place considerable emphasis on religion in their work, they posit two alternative explanations for the transition in saving behaviour. First, they suggest that a shrinking share of middling groups, in conjunction with prices rising more quickly than wages (and even possibly a shortage of small change), may have reduced the ability of persons to engage in saving. In addition, they note the rise of craft guild insurance schemes, which could have acted as a cushion against sickness or old age much in the same way that saving would have functioned in their absence. Given this, more work needs to be done on ascertaining the role of religion versus these other hypotheses, or alternatively religion should be made a less central theme in the paper. One potential avenue could be to attempt to identify whether households were more likely Protestant or Catholic, or to utilise an alternative source where religious affiliation could be linked with financial holdings. While difficult, this would help to clarify the question posed by Zuijderduijn and van Oosten in their introduction – “saving behaviour does not come naturally, and requires discipline. Did a Protestant ethic help converts to find such discipline?” Moreover, Zuijderduijn and van Oosten write in their conclusion that their evidence “suggests that the true champions of saving behaviour were the late-medieval adherents to the Church of Rome, and not the Protestants that gradually emerged in sixteenth‑century Holland” – a statement on which I need further convincing.

Further elaboration is also needed on the historical context. In particular, the paper would benefit from further clarity on the evolution of finance in Holland during this period. For example, van Zanden et al. (2012, p. 16) suggest that cash holdings fell between 1462 and 1563, but due to investment in other financial asset alternatives. Furthermore, they comment that the capital markets were used a great deal during this period for investing savings (as well as obtaining credit) – surely a more profitable pursuit for rational Protestants than earning zero return holding cash.

Nonetheless, the interdisciplinary and natural-experiment-type approach adopted in this paper provides inspiration for economic historians on how we can potentially use alternative methodologies to further our understanding of important questions which have previously gone unanswered. While this has been refreshing, the use of such sources demands a comprehensive understanding of the historical context for accurate inference, and especially to differentiate between correlation and causation. Zuijderduijn and van Oosten have provided persuasive initial evidence pointing to a decline in saving behaviour in Holland at a time when Weber’s Protestant ethic should have been fostering thrift, but more work needs to be done to disentangle the effect of religious transition from that of an evolving capital market.


Andersen, Thomas B., Jeanet Bentzen, Carl-Johan Dalgaard, and Paul Sharp, “Pre‑Reformation Roots of the Protestant Ethic,” Working Paper (July 2015): http://www.econ.ku.dk/dalgaard/Work/WPs/EJpaper_and_tables_final.pdf.

Becker, Sascha O., and Ludger Woessmann, “Was Weber Wrong? A Human Capital Theory of Protestant Economic History,” Quarterly Journal of Economics, 124 (2009), 531–596.

Cantoni, Davide, “The Economic Effects of the Protestant Reformation: Testing the Weber Hypothesis in the German Lands,” Journal of the European Economic Association, forthcoming.

Delacroix, Jacques, and François Nielsen, “The Beloved Myth: Protestantism and the Rise of Industrial Capitalism in Nineteenth-Century Europe,” Social Forces, 80 (2001), 509–553.

Renneboog, Luc, and Christophe Spaenjers, “Religion, Economic Attitudes, and Household Finance,” Oxford Economic Papers, 64 (2012), 103–127.

van Zanden, Jan L., Jaco Zuijderduijn, and Tine De Moor, “Small is Beautiful: The Efficiency of Credit Markets in Late Medieval Holland,” European Review of Economic History, 16 (2012), 3–23.

Weber, Max, The Protestant Ethic and the Spirit of Capitalism (London, UK: Allen and Unwin, 1930).

Was Stalin’s Economic Policy the Root of Nazi Germany’s Defeat?

Was Stalin Necessary for Russia’s Economic Development?

By Anton Cheremukhin (Dallas Fed), Mikhail Golosov (Princeton), Sergei Guriev (SciencesPo), Aleh Tsyvinski (Yale)

Abstract: This paper studies structural transformation of Soviet Russia in 1928-1940 from an agrarian to an industrial economy through the lens of a two-sector neoclassical growth model. We construct a large dataset that covers Soviet Russia during 1928-1940 and Tsarist Russia during 1885-1913. We use a two-sector growth model to compute sectoral TFPs as well as distortions and wedges in the capital, labor and product markets. We find that most wedges substantially increased in 1928-1935 and then fell in 1936-1940 relative to their 1885-1913 levels, while TFP remained generally below pre-WWI trends. Under the neoclassical growth model, projections of these estimated wedges imply that Stalin’s economic policies led to welfare loss of -24 percent of consumption in 1928-1940, but a +16 percent welfare gain after 1941. A representative consumer born at the start of Stalin’s policies in 1928 experiences a reduction in welfare of -1 percent of consumption, a number that does not take into account additional costs of political repression during this time period. We provide three additional counterfactuals: comparison with Japan, comparison with the New Economic Policy (NEP), and assuming alternative post-1940 growth scenarios.

URL: http://EconPapers.repec.org/RePEc:nbr:nberwo:19425

Distributed by NEP-HIS on 2013-09-28

Review by Emanuele Felice

Until the late 1950s, the era of rapid Soviet growth and of Sputnik, the main question among Western scholars was: When would the Soviet Union catch up with and overtake the U.S.?*

As Cheremukhin et al. correctly emphasize, the subject of this paper – Soviet industrialization in the 1930s – is one of the most important in economic history, and in world history: the Soviet Union was the country which played by far the biggest role in the defeat of Nazi Germany, standing almost alone against the land forces of the Third Reich and its allies for most of the war and causing 87% of total Axis military deaths (in sharp contrast with World War I, when the Tsarist empire was defeated by a German Reich fighting on two fronts). Emerging from World War II as a superpower, the victorious Soviet Union helped shape the next four decades of human history, boasting among its technological achievements the first voyage of a human being into space. At the same time, during the Stalin regime (1922-1953) the scale of (politically caused) human suffering had few parallels in world history. Furthermore, as early as the 1930s Stalin’s rule was one of the first totalitarian regimes to reach previously unobserved levels of oppressiveness and manipulation of society.

For these reasons Stalin’s Soviet Union should continue to be interrogated by systematic studies. At the core of that regime was industrialization, which aimed to be the material pillar of a new «civilization» (e.g. Kotkin, 1995). Regarding its impact on policy making in the twentieth century, Stalin’s forced industrialization was a source of inspiration for both economists and politicians throughout the world: its planned, top-down implementation was considered by many contemporaries to be a successful, though harsh, strategy.

Joseph Stalin (1878–1953), Leader of the Soviet Union (1922-1953)

And yet, we still have relatively little macro-economic evidence about the Stalinist period. The article by Cheremukhin et al. aims to partially fill this gap by providing consistent figures, some new arguments and insightful counterfactuals. It builds upon a remarkable amount of original research. First, it provides a comprehensive and coherent reconstruction of data on output, consumption, investment, foreign trade and the labour force. These figures are presented separately for the agricultural and non-agricultural sectors. The data begin in the last decades of Tsarist Russia (1885-1913) and, for the Soviet Union, cover the launch of the first five-year plan until the Nazi invasion (1928-1940).

Secondly, Cheremukhin et al. propose and elaborate a growth model for the Russian economy in those two periods (i.e. Tsarist Russia and the pre-invasion Soviet Union). This is a two-sector neoclassical model, modified to allow for the peculiarities of the economy under scrutiny. Because institutional frictions and policies distorted household and firm decisions, three wedges are defined, corresponding to the intratemporal between-sector distortions in capital and labour allocations and to an intertemporal distortion; price scissors in agricultural prices (between producers and consumers) − which may also be thought of as a fourth wedge − are also introduced for the Stalin period.
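To give a sense of what a "wedge" means here (in stylized form; the notation is mine, not the authors'), the between-sector labour wedge can be read off the gap between the value of labour's marginal product across the two sectors:

```latex
(1 + \tau_{L})\, p_{A}\,\frac{\partial Y_{A}}{\partial L_{A}} \;=\; p_{M}\,\frac{\partial Y_{M}}{\partial L_{M}}
```

where $Y_{A}$ and $Y_{M}$ are agricultural and non-agricultural output, $p_{A}$ and $p_{M}$ their prices, and $\tau_{L} = 0$ in a frictionless economy. Analogous conditions define the capital wedge, while the intertemporal wedge distorts the household's Euler equation; the size of each estimated wedge then measures how far the observed allocation was from the undistorted benchmark.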

It may be worth adding that, when connecting wedges to policies, Cheremukhin et al. appear to be adequately aware of the historical context and of the differences between a planned economy and a free-market one. For instance, the response of the Stalinist economy to a drop in agricultural output is likely to be the opposite of what a frictionless neoclassical growth model predicts: because the price-scissors policy kept producers’ agricultural prices artificially low, a drop in output will probably lead to a further reallocation of labour from agriculture to industry and services and, therefore, to an additional reduction of agricultural output. Such a distortion is here acknowledged and reasonably calibrated.

“Smoke of chimneys is the breath of Soviet Russia”, early Soviet poster promoting industrialization, 1917-1921

Thirdly, the paper by Cheremukhin et al. further elaborates on the data and models by providing a number of counterfactuals. Comparisons are made with the Tsarist economy by extrapolating the Tsarist wedges of 1885-1913 to the years 1928-1940, and by comparing the performance of both economies (Tsarist and Stalinist) for the years following 1940 under the assumption that World War II never happened.

Another comparison is with Japan, a country similar to Russia before World War I in terms of GDP levels and growth rates. Early in the twentieth century Japan suffered distortions similar to Russia’s, but during the interwar period it undertook an economic transformation, which provides Cheremukhin et al. with an alternative scenario to both the Tsarist and the Stalinist policies (the Japanese projections are based upon previous reconstructions of the Japanese macro-economic figures, which happen to be available for the same period as for Russia, 1885-1940).

Japanese assault on the entrenched Russian forces, 1904

What is probably the most intriguing counterfactual, at least in actual historical terms, is yet one more alternative scenario, constructed by assuming that Lenin’s New Economic Policy, or NEP (launched in 1921 and outliving Lenin until 1927), had continued even after 1927. Such a counterfactual requires elaborating a model for the NEP economy as well, but unfortunately the lack of reliable data for the years 1921 to 1927 makes the discussion of this scenario «particularly tentative». Furthermore, it is worth mentioning that two more alternative scenarios are provided for the Stalin economy, based on alternative growth rates for the years 1940 to 1960 and again under the assumption that World War II never happened, and that robustness exercises are also performed (with further details provided in the appendix).

Broadly speaking, the results are not favourable to Stalin. According to Cheremukhin et al., Stalin was not necessary for Russian industrialization − nor, it could consequently be argued, for the defeat of Nazism and Russia’s rise to superpower status. Actually, by 1940 the Tsarist economy would probably have reached levels of production and a structure of the economy similar to the Stalinist ones, but with far lower short-term human costs. This result may not be irreconcilable with Gerschenkron’s (1962) theses about the substitute factor − in Russia this was the State, already exerting such a role in late Tsarist times − and the advantages of backwardness: the latter would have allowed backward Russia, once its industrialization had been set in motion at the end of the nineteenth century, to close more of its distance to the industrialized West by the time of World War II than it had by World War I − even, that is, under the Tsarist regime. It does contrast, however, with other findings from pioneering cliometric articles on the issue, such as the one by Robert Allen published almost twenty years ago, according to which Stalin’s planned system brought about rapid industrialization and even a significant increase in the standard of living (Allen, 1998). Similarly, but from a different perspective, long-run reconstructions of Soviet labour productivity tend to emphasize as a problem the slowdown in the post-World War II period, rather than the performance of the 1930s (Harrison, 1998) – both Allen and Harrison are cited in this paper, but not these specific articles.

The Dnieper Hydroelectric Station under construction, South-Eastern Ukraine (the work was begun in 1927 and inaugurated in 1932)

Now, at the core of the results by Cheremukhin et al. is the finding that, according to their estimates, total factor productivity of the USSR in the non-agricultural sector did not grow from 1928 to 1940. It may be worth discussing this point in a little more detail. Is such a finding plausible? At first sight it seems puzzling, given the technological advance of that period, especially in the heavy sectors. And yet, on closer inspection, it may turn out to be entirely logical: the growth of output was a consequence of massive inflows of inputs, both machinery (capital) and labour. But, all considered, these inputs were not used in a more efficient way.

In the model by Cheremukhin et al., output is computed through a Cobb-Douglas production function, with constant elasticity coefficients for labour and capital (0.7 and 0.3 respectively in the non-agricultural sector; 0.55 and 0.14 in the agricultural one, thus assuming a land elasticity of 0.31). The authors make the point that the new labour force entering the non-agricultural sector was largely unskilled and often not even usefully employed, actually exceeding the real needs of that sector: this politically induced distortion could hardly have increased TFP (although, under different assumptions, it could alternatively be modeled through a decreasing elasticity of labour; the results in terms of total output would not change). This may also explain the good performance of the Soviet Union during World War II, when, due to manpower shortages, the excess labour force could finally be profitably employed. The capital stock is calculated by the authors at 1937 prices, for the years 1928-1940.
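With the elasticities just quoted, the two sectoral production functions take the form (notation mine; the paper's own symbols may differ):

```latex
Y_{M} = A_{M}\, K_{M}^{0.3}\, L_{M}^{0.7}, \qquad
Y_{A} = A_{A}\, K_{A}^{0.14}\, L_{A}^{0.55}\, T^{0.31}
```

where $M$ and $A$ index the non-agricultural and agricultural sectors and $T$ is land. Sectoral TFP is then recovered as a residual, e.g. $A_{M} = Y_{M} / (K_{M}^{0.3} L_{M}^{0.7})$; the finding of flat non-agricultural TFP over 1928-1940 means that $Y_{M}$ grew roughly in line with this input bundle rather than ahead of it.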

Anti-Nazi propaganda poster, 1945

We do not have enough information to judge whether a bias is caused by the use of constant prices based on a late year of the series. But this possible bias should lead to an underestimation of capital growth in that period − given that quantities are probably weighted with relative prices that were lower in 1937 for the heavy sectors than in 1928 − which would in turn produce an overestimation of the TFP growth proposed by the authors: in actual terms, therefore, the growth of TFP may be even lower than estimated. In more general terms − and although caution is warranted given the lack of detailed figures − their results look realistic in this respect.

The most interesting finding, however, is the one relating to the NEP counterfactual. It is the most interesting because, in genuine historical terms, the Tsarist model was no longer a viable option for Stalin, while the NEP strategy was. But of course, data for the NEP years are much more precarious, and thus this counterfactual can only be a particularly tentative one. Nonetheless, the authors build two scenarios for the NEP policy: a lower-bound one, where a growth rate of TFP in manufacturing after 1928 similar to the Tsarist average of 0.5% is tested; and an upper-bound one, with a growth rate of 2%, similar to the one experienced by Japan in the same interwar period. In the first scenario the results for the Soviet economy would have been slightly worse, but in the second one much better. Given that the two scenarios correspond to the boundaries of the possibility frontier, we may conclude that, under the NEP, the performance of the Soviet economy would probably have been better than both the one observed under Stalin and the one predicted under the Tsar. This may confirm the view that the 1920s were somehow the “golden age” of Soviet communism, as well as the favourable assessment of Lenin’s leadership and later of the collective Soviet leadership in that decade (although, admittedly, Lenin intended the NEP only as a temporary policy). After all, a more inclusive leadership – as opposed to the harshness of Stalinist autocracy in the 1930s, as well as to Hitler’s despotic conduct of the war from the winter of 1941 – was also what helped the Red Army to win World War II.

“The victory of socialism in the USSR is guaranteed”, 1932

Allen,  Robert C., Capital accumulation, the soft budget constraint and Soviet industrialization, in «European Review of Economic History», 1998, 2(1), pp. 1-24.

Gerschenkron, Alexander, Economic backwardness in historical perspective, Cambridge, Mass., The Belknap Press of Harvard University Press, 1962.

Harrison, Mark, Trends in Soviet Labour Productivity, 1928−85: War, postwar recovery, and slowdown, in «European Review of Economic History», 1998, 2(2), pp. 171-200.

Kotkin, Stephen, Magnetic Mountain: Stalinism as a Civilization, University of California Press, Berkeley, Los Angeles, and London, 1995.

Source of quote:
Gur Ofer (1987) “Soviet Economic Growth: 1928-1985,” Journal of Economic Literature, Vol. 25, No. 4, pp. 1767-1833 (cited in this paper, p. 2).

Whither Labor-Intensive Industrialization?

How Did Japan Catch-up On The West? A Sectoral Analysis Of Anglo-Japanese Productivity Differences, 1885-2000

By Stephen Broadberry (London School of Economics), Kyoji Fukao (Hitotsubashi University), and Nick Zammit (University of Warwick)

Abstract: Although Japanese economic growth after the Meiji Restoration is often characterised as a gradual process of trend acceleration, comparison with the United States suggests that catching-up only really started after 1950, due to the unusually dynamic performance of the US economy before 1950. A comparison with the United Kingdom, still the world productivity leader in 1868, reveals an earlier period of Japanese catching up between the 1890s and the 1920s, with a pause between the 1920s and the 1940s. Furthermore, this earlier process of catching up was driven by the dynamic productivity performance of Japanese manufacturing, which is also obscured by a comparison with the United States. Japan overtook the UK as a major exporter of manufactured goods not simply by catching-up in labour productivity terms, but by holding the growth of real wages below the growth of labour productivity so as to enjoy a unit labour cost advantage. Accounting for levels differences in labour productivity between Japan and the United Kingdom reveals an important role for capital in the catching-up process, casting doubt on the characterisation of Japan as following a distinctive Asian path of labour intensive industrialisation.

URL: http://d.repec.org/n?u=RePEc:cge:wacage:231&r=his

Distributed by NEP-HIS on 2015-05-30

Reviewed by Joyman Lee

Broadberry, Fukao, and Zammit focus our attention on productivity comparisons between the UK and Japan, departing from existing works on U.S.-Japan comparisons.

Broadberry, Fukao, and Zammit argue that previous authors’ reliance on a U.S.-Japan comparison to measure Japan’s productivity, such as Pilat’s, has greatly distorted our periodization of Japan’s economic growth (Pilat 1994). This is partly because, like Japan, the U.S. grew very quickly between 1870 and 1950, and the effects of the Great Depression in the U.S. also blunted our perception of the relative stagnation of the Japanese economy between 1920 and 1950. By comparing the Japanese data with that of the UK, Broadberry, Fukao, and Zammit show that Japanese catch-up began in the late nineteenth century during the Meiji period, stagnated in the interwar period, and resumed after the Second World War.

In contrast to Pilat, the authors find that manufacturing played an important role in Japanese growth not only after but also before the Second World War. Whereas strong U.S. improvements in manufacturing (the U.S. itself was undergoing catch-up growth vis-à-vis the UK) might have obscured our view of Japanese performance in these areas, comparison with the UK reveals that Japanese manufacturing performed strongly until 1920. In terms of methodology, Broadberry, Fukao, and Zammit emphasize their use of more than one benchmark for time series projections to provide cross checks, and they selected 1935 and 1997 as benchmarks.

One of the most intriguing aspects of the paper is the suggestion that capital played a crucial role in Japan’s experience of catch-up growth. The authors challenge the growing view among economic historians that Asia pursued a distinctive path of economic growth, based on a pre-modern “industrious revolution” (Hayami 1967) and labor intensive industrialization (Austin & Sugihara 2013) in the modern period. Broadberry, Fukao, and Zammit’s data (table 12) shows that across our period, Japan caught up with the UK not only in terms of labor productivity but also capital intensity. Crucially, “by 1979, capital per employee was higher in Japan than in the United Kingdom” (p17). The authors explain this phenomenon by observing that “capital deepening played an important role in explaining labour productivity growth in both countries, but in Japan, the contribution of capital deepening exceeded the contribution of improving efficiency in three of the five periods” (p18). Contrary to the view put forward by those in favor of labor-intensive industrialization, the authors argue, “Japan would not have caught up without increasing [capital] intensity to western levels” (p19).

The authors contend that capital played as important a role as labor in shaping Japan's productivity growth.

This paper provides a valuable quantitative contribution to our knowledge of labor productivity in two countries that are highly important in studies of global economic history. The greater intensity of Japan’s external relations with the U.S. after the Second World War has led scholars to take greater interest in comparisons with the U.S., whereas, as Broadberry, Fukao, and Zammit point out, the UK remains one of the main yardsticks of productivity before the Second World War. In this respect, a comparison with the European experience is valuable, and offers a good quantitative basis for illustrating the character of Japan’s industrialization efforts before the Second World War. The conclusion that manufacturing played a key role in Japan’s catch-up growth vis-à-vis the UK is consistent with the historical literature that has foregrounded manufacturing, and in particular exports to Asia, as the main driver of pre-WW2 Japanese economic growth.

What is more surprising in this paper, however, is the authors’ contention that capital was the primary factor in Japan’s productivity growth. The authors note that until 1970 Japan enjoyed lower unit labor costs vis-à-vis Britain largely because real wages were artificially repressed beneath the level of labor productivity. It was in the 1970s that Japan started seeing increases in real wages; as a result, its labor cost advantage disappeared, returning only with the faster real wage growth in the UK in the 1990s (p15). In other words, the authors suggest that Japan’s export success was due not so much to improvements in labor productivity as to artificially low labor costs. While Japanese labor productivity growth was not exceptional except between 1950 and 1973, the contribution of capital deepening in Japan (2.29% and 1.32% for 1950-73 and 1973-90, as opposed to 0.67% and 0.58% for the UK; table 13) was on the whole greater than, or at least as large as, that of the UK.
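The unit labor cost mechanism described above rests on a simple identity: unit labor cost is the nominal wage divided by labor productivity, so it falls whenever wages grow more slowly than productivity, even if both rise. A toy illustration (the numbers are invented for exposition, not taken from the paper):

```python
def unit_labor_cost(wage, productivity):
    """Unit labor cost: wage bill per unit of output."""
    return wage / productivity

# Hypothetical numbers: productivity rises 50% while wages rise only 20%,
# so unit labor cost falls even though workers are paid more.
ulc_before = unit_labor_cost(wage=100.0, productivity=10.0)  # 10.0
ulc_after = unit_labor_cost(wage=120.0, productivity=15.0)   # 8.0
print(ulc_before, ulc_after)
```

This is the arithmetic behind the authors’ claim that Japan gained an export cost advantage by holding real wage growth below labor productivity growth.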

While few commentators would dispute the importance of capital in driving economic growth, it is unclear whether the data presented here sustains the conclusion that Japan did not follow a distinctive path of labor-intensive industrialization. The authors cite Allen’s paper on technology and global economic development (Allen 2012) to support their claim that western levels of capital intensity were necessary for productivity-driven growth that is characteristic of advanced industrial economies. While that latter point is well taken, aggregate measures of “capital intensity” do not on their own reflect the types of industries where capital (and other resources) is invested, or the manner in which labor is deployed either to create growth or to generate employment for reasons of political choice or social stability. In fact, proponents of the labor-intensive industrialization argument acknowledge that post-WW2 Japan witnessed a step-change in its synthesis of the labor and capital-intensive paths of industrialization, at the same time that Japanese industries often opted for relatively labor-intensive sectors within the spectrum of capital-intensive industries, such as consumer electronics as opposed to military, aerospace, and petro-chemical sectors (e.g. Austin & Sugihara 2013, p43-46).

Labor-intensive industrialization does not itself preclude high levels of capital investment, e.g. consumer electronics, which employs a great number of individual workers.

The key arguments of labor-intensive industrialization concern not the role of capital per se, but the constraints imposed by initial factor endowments (e.g. large populations) and the transferability of the model through national industrial policies and intra-Asian flows of ideas and institutions. Broadberry, Fukao, and Zammit do not challenge these core ideas, and confine their critique to labeling Japan’s technological policy breakthroughs as changes in “flexible production technology” (p. 19). Doing so ignores the basic fact that the balance between population and resources in Japan bears little similarity to that in the West, either on the eve of the Industrial Revolution or in the present day. In other words, there is little inherent contradiction between the need for capital accumulation and the selection of industries that make better use of that capital and technology (e.g. “appropriate technology”, Atkinson & Stiglitz 1969 and Basu & Weil 1998).

Finally, it seems to me that basing a critique primarily on a comparative study of the advanced economies of the UK and Japan misses a broader point: labor-intensive industrialization is also about exploring paths that have been overlooked or inadequately theorized because of our simplistic insistence on “convergence” in economic growth. From this angle, foregrounding the subtle but profound differences between successful models of economic development, e.g. the experience of Japan in East Asia, and dominant Western models seems at least as valuable as attempts to reproduce the “convergence” argument.

Additional References

Allen, R 2012. “Technology and the Great Divergence: Global Economic Development since 1820,” Explorations in Economic History, vol. 49, pp. 1-16.

Atkinson, A & Stiglitz, J 1969. “A New View of Technological Change,” Economic Journal, vol. 79, no. 315, pp. 573-78.

Austin, G. & Sugihara, K (eds.) 2013. Labour-Intensive Industrialization in Global History. Abingdon, Oxon.: Routledge.

Basu, S & Weil, D, 1998, “Appropriate Technology and Growth,” The Quarterly Journal of Economics, vol. 113, no. 4, p. 1025-54.

Hayami, A, 1967. “Keizai shakai no seiretsu to sono tokushitsu” (“The formation of economic society and its characteristics”), in Atarashii Edo Jidai shizō o motomete, ed. Shakai Keizaishi Gakkai. Tokyo: Tōyō Keizai Shinpōsha.

Pilat, D 1994. The Economics of Rapid Growth: The Experience of Japan and Korea. Cheltenham, Glos.: Edward Elgar Publishing.

Who Pays the Bills?

Sovereign Debt Guarantees and Default: Lessons from the UK and Ireland, 1920-1938

By Nathan Foley-Fisher (Federal Reserve Board) and Eoin McLaughlin (St. Andrews)

Abstract: We study the daily yields on Irish land bonds listed on the Dublin Stock Exchange during the years 1920-1938. We exploit structural differences in bonds guaranteed by the UK and Irish governments to find Irish events that had long term effects on the credibility of government guarantees. We document two major events: The Anglo-Irish Treaty of 1921 and Ireland’s default on intergovernmental payments in 1932. We discuss the political and economic forces behind the Irish and UK governments’ decisions. Our finding has implications for modern-day proposals to issue jointly-guaranteed sovereign debt.

URL: http://EconPapers.repec.org/RePEc:sss:wpaper:2015-11

Distributed by NEP-HIS on: 2015-05-16

Review by Sarah Charity (Queen’s University of Belfast)

This working paper helps shed light on the financial implications of current economic events. One that dominated the headlines most prominently was Scotland’s recent independence referendum. Hard-line fiscal policy makers held their breath while questions simmered: would an independent Scotland continue debt repayments to Britain? What if it defaulted on these payments? The authors investigate how public debt in Ireland was dealt with during its severance from the political state of the United Kingdom in the early 20th century, focusing on the implementation of UK- and Irish-backed land bonds over this period of significant Irish land reform, when ownership was transferred from landlords to tenants. From this episode in Irish history, we can draw comparisons and learn lessons for today.


Foley-Fisher and McLaughlin look to previous studies in which yield spreads between UK and Irish government bonds are analysed, such as those of Nevin (1963) and Ó Gráda (1994). They contribute to the existing literature by concentrating on land bonds. They base their methodology on the idea of particular structural breaks occurring in their time-series data. They build on the discoveries of Willard et al. (1996), who defined breaks as a change in the intercept of the time series – ‘a shift in the mean’ (p.11-12) – while Zussman et al. (2008) also searched for ‘breakpoints’ (p.4) in their methodology. They analyse the shifts in the ‘perceived value of sovereign guarantees’ (p.11) by looking for changes in the mean of the yield spreads.
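The ‘shift in the mean’ idea can be illustrated with a minimal brute-force breakpoint search (a sketch only, not the authors’ actual estimation procedure): for each candidate date, split the spread series in two and keep the split that minimizes the total squared deviation from the two segment means. The spread values below are invented for illustration.

```python
import numpy as np

def find_mean_shift(series, min_seg=5):
    """Return the index that best splits `series` into two
    constant-mean segments (smallest total sum of squared errors)."""
    x = np.asarray(series, dtype=float)
    best_k, best_sse = None, np.inf
    for k in range(min_seg, len(x) - min_seg):
        left, right = x[:k], x[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

# Illustrative series: a spread that jumps from ~60bp to ~120bp at index 100,
# mimicking a credibility event such as a default announcement.
rng = np.random.default_rng(0)
spread = np.concatenate([60 + rng.normal(0, 5, 100),
                         120 + rng.normal(0, 5, 100)])
print(find_mean_shift(spread))  # close to 100
```

The authors’ procedure, following Willard et al. (1996), additionally tests the statistical significance of candidate breaks; this sketch only locates the single most pronounced mean shift.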

Yield spreads ‘diffused’ (p.15) around three significant events. The first break coincides with the passing of the Anglo-Irish Treaty in 1921, while the second occurred a decade later, when Ireland’s default on land bond payments loomed. Around this time, Eamon de Valera, founder of the Irish political party Fianna Fáil, announced that the ‘Free State would not honour the bi-annual payments due under various financial agreements between Ireland and the UK’ (p.14). The final break came at the end of the sample period; however, it is discounted as irrelevant because trade war negotiations were ‘unlikely…(to be) sufficient’ (p.15).

Parliamentary acts sanctioned the use of generous government-guaranteed land bonds to finance state mortgages – rather similar to the volatile mortgage-backed securities of the recent credit crisis – allowing farmers to borrow significant amounts of credit at lower rates. The idea was to curtail the Irish Nationalists; however, it proved unsuccessful, and the ‘hardline republicans’ (p.6) achieved independence through the signing of the Treaty in December 1921. A new dawn was on the horizon, bringing with it the promise of a new government – but it was lamentably overshadowed by the onset of the Irish Civil War. The newly established Free State was released from its obligation towards UK public debt in return for permanent partition; however, it agreed to maintain annuity payments along with the issuance of more land bonds.

The authors calculate the credibility of UK guarantees, otherwise known as sovereign risk, post-independence using yield spreads, controlling for the risks of inflation and exchange rate alterations. They acknowledge other scholars who have assessed the importance of credibility for economic outcomes such as the ‘cost of government finance’ (p.5), as investigated by Flandreau & Zumer (2009), and the expected behaviour of the government as a ‘counterparty in other contracts’ (p.5), as seen in Cole & Kehoe (1998). Foley-Fisher and McLaughlin found the spread over UK government bonds to be 60 basis points, indicating a low credit risk for the UK- and Irish-backed land bonds. Through their estimations, they suggest that the increased yield spread of the land bonds during the ‘benchmark’ years from 1921 to 1932 was highly significant. After Ireland defaulted, the land bonds were no longer considered risky and the spread on UK-backed land bonds returned to zero. The authors are perhaps slightly restricted by their sample period. Mercille (2006, p.3) tells us little research exists on the significant long-term costs related to yield spreads, forcing us to seek answers elsewhere.


Foley-Fisher and McLaughlin suggest that the cost of Ireland’s default was greater for its British counterparts. They give reasons for the UK’s intervention: the insignificant cost to the UK Treasury, which continued making interest payments so that bondholders were kept whole; UK war loan negotiations; and the fact that the land bonds were mostly held in the UK, which ensured Whitehall was an interested party. The authors provide a contrasting government reaction to default in another Commonwealth country, using the contemporaneous case of Newfoundland, whose debt profile bore echoes of Ireland’s. As previously mentioned, the cost to the UK government of passing Ireland’s default on to bondholders far outweighed the benefits. In the Newfoundland example, the UK government’s reaction was to withdraw from the imminent burden of financial instability and force confederation of the Dominion with Canada, thus shedding the burden of bondholders’ losses – a consequence independence-seeking Scotland may have wanted to consider.

In the absence of other case studies, Ireland’s historical sovereign break-up ensures Foley-Fisher and McLaughlin’s ‘simple empirical strategy’ (C.R., 2014) is applicable to, and useful in, multiple current financial situations, whether the aforementioned Scottish referendum or a possible secession of Catalonia from Spain. Moody’s (2014) found that 75% of the 17 country break-ups that have occurred since 1983 resulted in sovereign default by the predecessor or the new state – albeit these implications ‘cannot be easily applied’ (p.3) to more recent break-ups, paving the way for this exploration of Ireland as the model to follow.

This paper describes apportioning fiscal liabilities as ‘complex’ and offers advice for states seeking dissolution: uncertainty is persistent. Debt must be paid, and it may be guaranteed by the Treasury of the former union in the wake of default, but ambiguity over the outcome remains. According to the blogger C.R., writing in The Economist (2014), partition seems more straightforward politically than financially or economically. From what Foley-Fisher and McLaughlin have taught us in their empirical study, the cost of default and fiscal uncertainty lingers long after secession. In conclusion, the experience of our Irish ancestors in the previous century is what we, alongside other state governments, must contemplate when the sword of political break-up strikes again.


Cole, H. L. & Kehoe, P. J. (1998), ‘Models of Sovereign Debt: Partial versus General Reputations’, International Economic Review 39(1), 55–70.

C.R. (Feb. 21st 2014) ‘The economics of Scottish independence – a messy divorce’, Blighty blog, The Economist. Available at: http://www.economist.com/blogs/blighty/2014/02/economics-scottish-independence (Accessed: March 22nd 2015).

Ferguson, N. (2006), ‘Political risk and the international bond market between the 1848 revolution and the outbreak of the First World War’, Economic History Review 59, 70– 112

Flandreau, M. & Zumer, F. (2009), The Making of Global Finance 1880-1913, Paris: OECD Publishing.

Hancock, W. (1964), Survey of British Commonwealth Affairs. Volume I Problems of Nationality 1918-1936, Oxford: Oxford University Press.

Mauro, P., Sussman, N. & Yafeh, Y. (2006), Emerging Markets and Financial Globalization, Oxford: Oxford University Press.

Mercille, J. (2006) ‘The Media and the Question of Sovereign Debt Default in the European Economic Crisis: The Case of Ireland’, University of Sheffield, Available at: http://speri.dept.shef.ac.uk/wp-content/uploads/2013/06/Mercille-J-The-Media-the-Question-of-Sovereign-debt-default-in-the-European-Economic-Crisis-the-case-of-Ireland.pdf (Accessed August 18, 2015).

Moodys (May 21st. 2014) When countries broke up, sovereign default risk spiked, Available at: https://www.moodys.com/research/Moodys-When-countries-broke-up-sovereign-default-risk-spiked–PR_299968?WT.mc_id=NLTITLE_YYYYMMDD_PR_299968 (Accessed: March 23rd. 2015).

Nevin, E. (1963), ‘The Capital Stock of Irish Industry’, Economic and Social Research Institute (ESRI) paper No. 17, Dublin. Available at: http://econpapers.repec.org/bookchap/esrresser/grs17.htm (Accessed August 18, 2015).

Ó Gráda, C. (1994), Ireland: A new Economic History 1780-1939, Clarendon Press, Oxford.

Willard, K. L., Guinnane, T. W. & Rosen, H. S. (1996), ‘Turning points in the Civil War: views from the Greenback market’, American Economic Review 86, 1001–1018.

Zussman, A., Zussman, N. & Nielsen, M. O. (2008), ‘Asset Market Perspectives on the Israeli-Palestinian Conflict’, Economica 75, 84–115.

By failing to prepare, you are preparing to fail

The European Crisis in the Context of the History of Previous Financial Crises

by Michael Bordo & Harold James

Abstract – There are some striking similarities between the pre 1914 gold standard and EMU today. Both arrangements are based on fixed exchange rates, monetary and fiscal orthodoxy. Each regime gave easy access by financially underdeveloped peripheral countries to capital from the core countries. But the gold standard was a contingent rule—in the case of an emergency like a major war or a serious financial crisis –a country could temporarily devalue its currency. The EMU has no such safety valve. Capital flows in both regimes fuelled asset price booms via the banking system ending in major crises in the peripheral countries. But not having the escape clause has meant that present day Greece and other peripheral European countries have suffered much greater economic harm than did Argentina in the Baring Crisis of 1890.

URL: http://EconPapers.repec.org/RePEc:bog:spaper:18

Circulated by NEP-HIS on: 2015-01-26

Reviewed by: Stephen Billington (Queen’s University of Belfast)


In this paper Bordo and James seek to analyse the impact of the financial crisis of 2007-8 in the context of previous crises, specifically by comparing the experience of the periphery countries of the Eurozone with those of the “classic” gold standard.


In their paper Bordo and James give a synopsis of the similarities which emerged between the two monetary regimes. Adherence to a gold parity brought an expansion of the banking system through large capital inflows, underpinned by a strong, effective state able to sustain greater borrowing. A state was deemed effective if it held an international diplomatic commitment, which in turn required it to sign up to international systems; all the while, this played into the hands of radical political parties who played on civilian nationalism[1]. Combined, these events led to great inflows of capital into peripheral countries, which inevitably produced fiscal instability and a resulting crisis. Similar dilemmas occurred within the EMU, but much more intensely.


This brings me to the main point that the authors emphasize: the contingency rule of the classic gold standard, which allowed member countries a “safety valve for fiscal policy”. Essentially, this was an escape clause that permitted a country to temporarily devalue its currency in an emergency, such as the outbreak of war or a financial crisis, on the understanding that it would return to the previous parity soon after. Bordo and James’ argument is that the lack of such a contingency within the EMU allowed a more severe financial crisis to afflict the periphery countries (Greece, Ireland and Portugal) than had affected gold standard peripheries (Argentina, Italy and Australia), as modern-day EMU countries did (and do) not have the option to devalue their currency.


Bordo and James point out that crises during the gold standard were very sharp but did not last as long as the 2007-8 crisis. This is because the escape clause under the gold standard enabled a “breathing space”, and as a result most countries were back to growth within a few short years. The EMU story is quite different, say Bordo and James. Mundell (1961) argued that a successful monetary union requires a well-functioning mechanism for adjustment; what we see in the EMU is a worse dilemma, due primarily to the absence of an escape clause.

“Gold outflows, and, with money and credit growth tied to gold, lower money and credit growth. The lower money and credit growth would cause prices and wages to fall (or would lead to reductions in the growth rates of prices and wages), helping to restore competitiveness, thus eliminating the external deficits”

The above quote, provided by Gibson, Palivos and Tavlas (2014), highlights how the gold standard allowed a country to adjust to a deficit. It reinforces Bordo and James’ argument that, due to the constricting nature of the EMU, there is no “safety valve” to allow EU countries to release the steam from increasing debt levels. With respect to the Argentine Baring Crisis of 1890, while the crisis was very sharp in terms of real GDP, pre-crisis levels of GDP were reached again by 1893 – a clear contrast with the Euro area, where some countries are still in recession and very little progress has been made, as suggested by the following headline: “Greece’s current GDP is stuck in ancient Greece” – Business Insider (2013).

The following graph highlights that most European countries are still lagging behind their pre-crisis levels of GDP.


Bordo and James clearly support this argument. Dellas and Tavlas (2013) also argue that the adjustment mechanism of core and periphery countries limited the size and persistence of external deficits, and they put forward that the durability of the gold standard relied on this mechanism. This is reinforced by Bloomfield (1959), who states that it “facilitated adjustments to balance of payments disequilibrium”.

Vinals (1996) further supports the authors’ sentiments by arguing that the Treaty of Maastricht restricts an individual member’s room to manoeuvre as the Treaty requires sound fiscal policies, with debt limited to 60% of GDP and annual deficits no greater than 3% of GDP – meaning a member cannot smooth over these imbalances through spending or taxation.

Gibson, Palivos and Tavlas (2014) state that “a major cost of monetary unions is the reduced flexibility to adjust to asymmetric shocks”. They argue that internal devaluations must occur to adjust to fiscal imbalances, but go on to argue that these are much harder to implement in practice than in theory, a point again supported by Vinals (1996).


Bordo and James focus primarily on three EU periphery countries which are doing badly, namely Greece, Ireland and Portugal. However, they neglect the remaining EU countries which can also be classed as peripheral. According to Wallerstein (1974), the periphery can be seen as the less developed countries; these could include further countries, such as those of eastern Europe[2]. Taking this more expansive view of the periphery, we can see that these other peripheral countries experienced sharp decreases in GDP growth but quick recoveries, as in the case of the gold standard countries, and swiftly returned to high levels of growth, while the main peripheral countries the authors analyse lag behind.

[Graph: real GDP growth rates of smaller Eurozone periphery countries; see note 3]

Bordo and James do provide a strong insight into the relationship between an adjustment mechanism for combating fiscal imbalances and the poor recovery of certain peripheral countries (i.e. Greece, Ireland, Portugal), and they highlight its implications for the future of the EMU. If the EMU cannot find a contingency rule like the gold standard’s, then future recessions may leave its members as vulnerable as they are now.


1) This process can be thought of as a trilemma; Obstfeld, Taylor and Shambaugh (2004) give a fuller explanation. In the EU the problem was intensified, as governments could back higher levels of debt and there was no provision for European banking supervision; the commitment to EU integration let markets believe that there were no limits to debt levels. This led to inflows into periphery countries, where banks could become too big to be rescued.

2) Latvia, Lithuania, Slovakia, Slovenia, and even Cyprus can be included based on low GDP per capita which is equivalent to Greece.

3) Data taken from Eurostat comparing real GDP growth levels of lesser developed countries within the Eurozone who all use the euro and would be locked into the same system of no adjustment.


Bloomfield, A. (1959) Monetary Policy under the International Gold Standard. New York: Federal Reserve Bank of New York.

Business Insider (2013). Every Country in Europe Should be Glad it’s Not Greece. http://www.businessinsider.com/european-gdp-since-pre-crisis-chart-2013-8?IR=T [Accessed 19/03/2015].

Eurostat, Real GDP Growth Rates http://ec.europa.eu/eurostat/tgm/graph.do?tab=graph&plugin=1&pcode=tec00115&language=en&toolbox=data [Accessed 21/03/2015].

Dellas, Harris; Tavlas, George S. (2013). The Gold Standard, The Euro, and The Origins of the Greek Sovereign Debt Crisis. Cato Journal 33(3): 491-520.

Gibson, Heather D; Palivos, Theodore; Tavlas, George S. (2014). The Crisis in the Euro Area: An Analytic Overview. Journal of Macroeconomics 39: 233-239.

Mundell, Robert A. (1961). A Theory of Optimum Currency Areas. The American Economic Review 51(4): 657-665.

Obstfeld, Maurice. Taylor, Alan. Shambaugh, Jay C. (2004). The Trilemma in History: Trade-Offs among Exchange Rates, Monetary Policies and Capital Mobility. National Bureau of Economic Research (NBER working paper 10396).

Vinals, Jose. (1996). European Monetary Integration: A Narrow or Wide EMU?. European Economic Review 40(3-5): 1103-1109.

Wallerstein, Immanuel (1974). The Modern World-System I: Capitalist Agriculture and the Origins of the European World-Economy in the Sixteenth Century. New York: Academic Press.

“Empire State of Mind”: The Land Value of Manhattan, 1950-2013

What’s Manhattan Worth? A Land Values Index from 1950 to 2013.
by Jason Barr, Rutgers University (jmbarr@rutgers.edu), Fred Smith, Davidson College (frsmith@davidson.edu), and Sayali Kulkarni, Rutgers University (sayali283@gmail.com).
Abstract: Using vacant land sales, we construct a land value index for Manhattan from 1950 to 2013. We find three major cycles (1950 to 1977, 1977 to 1993, and 1993 to 2007), with land values reaching their nadir in 1977, two years after the city’s fiscal crisis. Overall, we find the average annual real growth rate to be 5.1%. Since 1993, land prices have risen quite dramatically, and much faster than population or employment growth, at an average annual rate of 15.8%, suggesting that barriers to entry in real estate development are causing prices to rise faster than other measures of local well-being. Further, we estimate the entire amount of developable land on Manhattan to be worth approximately $825 billion. This would suggest an average annual return of 6.3% since the island was first inhabited by Dutch settlers in 1626.

URL: http://d.repec.org/n?u=RePEc:run:wpaper:2015-002&r=his [this link will download a Word copy of the paper to your computer]

Review by Manuel A. Bautista González (Columbia University)

“All cultures have their creation myths, and according to cherished New York legend, the Manhattan real estate market was born when the Dutch paid $24 in shells to the Indians for an island that is today worth billions of dollars. The persistence of this story tells us more about the justifying strategies of our own times than about the past. The Manhattan real estate market, the myth implies, is as natural as its bedrock and harbor, and real estate magnates who today pursue “the art of the deal” are only fulfilling their forefathers’ vision of the profits embedded in Manhattan land. The conditions of Manhattan’s land and housing markets, far from being part of the natural order of things, are rooted in a social history. It is, after all, people who organize, use, and allocate the benefits of natural and social resources, and the value they assign to land depends on the larger set of social relations that organize property rights and labor. […] How did land become “scarce”? […] Who profited, who lost, and what difference did the flow of rents make to New Yorkers’ understanding of their social responsibilities within a shared landscape […] The myth that celebrates a real estate deal as New York’s primal historical act lends an aura of inevitability to the real estate market’s power to shape the city landscape and determine the physical conditions of everyday life. New Yorkers today live in the shadows of deals that have produced the glitter of Trump Tower, the polished facades of renovated brownstones, and the shells of abandoned buildings. These shadows especially darken the paths of those who are getting a bad deal: the more than 50,000 people who have been displaced onto the city’s streets and sleep in doorways, subway stations, railroad terminals, or temporary shelters.
Contemporary politicians invoke this landscape of light and shadows to point to the contradictions of our time: a city that can indulge extravagant displays of wealth cannot afford to house its people.”

(Blackmar 1989: 1, 12)


Debates over the high market value of real estate and the unaffordable price of housing occupy a constant place in the public spheres of the major cities of the world, especially where land is already a very scarce factor and vertical, intensive urban development takes the form of tall buildings, towers and skyscrapers. However, historical perspectives on the real estate and housing markets are for the most part lacking in these discussions. In their paper, Jason Barr, Fred Smith and Sayali Kulkarni attempt to fill this gap for New York City, a capital of capital like no other, to borrow from the title of the book by financial historian Youssef Cassis (Cassis 2007). In their work, distributed on NEP-HIS on 2015-04-11, the authors develop a land value index for Manhattan from 1950 to 2013, the island that defines the Big Apple like no other borough does.

According to their deflated index, the real value of Manhattan land has grown since 1950 at an impressive average annual rate of 5.1%. The authors identify three long cycles: the first, 1950-1977, roughly coincided with the golden era of Western capitalism, a skyscraper boom and the demise of urban industries; the second, 1977-1993, began with the consequences of white flight to the suburbs and the city's fiscal crisis and lasted until the financial Big Bang of the late 1980s and early 1990s; and the third, 1993-2007, reflected the financialization of the U.S. economy, the rise of Manhattan as a services powerhouse, and the emergence of New York as a truly global city, in the sense proposed by the sociologist Saskia Sassen (Sassen 2001).

View of Manhattan from Top of the Rock Observation Deck, Sunday, May 20, 2012, 5:13 PM EST. A gift from the author to the loyal audience of the NEP-HIS blog.


The authors walk through the methodological problems of constructing a land value index. The first is whether to assess the value of undeveloped land alone or to incorporate the structures built on it. The second is the choice between market prices and values assessed for taxation purposes. The third stems from the divergence between market prices and assessed values, a divergence that is considerable in the case of New York City during this period.
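One simple way to see what building such an index involves is the sketch below: take a central price per square foot from vacant-land sales in each year, deflate it, and rebase to a starting year. The sales records and CPI figures are invented for illustration; the authors' actual index uses real Manhattan transactions and a more careful treatment of deflation and assessment bias.

```python
# A minimal sketch of a land value index built from vacant-land sales:
# median nominal price per square foot by year, deflated by an
# illustrative CPI, and rebased so the base year equals 100.
# All numbers below are hypothetical.
from statistics import median

# (year, sale price in USD, lot size in sq ft) -- hypothetical records
sales = [
    (1950, 30_000, 2_500), (1950, 45_000, 3_000),
    (1951, 40_000, 2_800), (1951, 52_000, 3_200),
    (1952, 60_000, 3_100), (1952, 48_000, 2_400),
]

cpi = {1950: 24.1, 1951: 26.0, 1952: 26.6}  # illustrative deflator
base_year = 1950

# Median nominal price per square foot in each year
per_sqft = {}
for year, price, sqft in sales:
    per_sqft.setdefault(year, []).append(price / sqft)
nominal = {y: median(v) for y, v in per_sqft.items()}

# Deflate, then rebase to the base year
real = {y: p / cpi[y] for y, p in nominal.items()}
index = {y: 100 * real[y] / real[base_year] for y in sorted(real)}
print(index)
```

Whether one uses the median or a hedonic regression, and which deflator one picks, are exactly the kinds of choices the paragraph above flags as consequential.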
