Category Archives: Macroeconomics

How do we eliminate wealth inequality and financial fragility?

The market turn: From social democracy to market liberalism

By Avner Offer, All Souls College, University of Oxford

Abstract: Social democracy and market liberalism offered different solutions to the same problem: how to provide for life-cycle dependency. Social democracy makes lateral transfers from producers to dependents by means of progressive taxation. Market liberalism uses financial markets to transfer financial entitlement over time. Social democracy came up against the limits of public expenditure in the 1970s. The ‘market turn’ from social democracy to market liberalism was enabled by easy credit in the 1980s. Much of this was absorbed into homeownership, which attracted majorities of households (and voters) in the developed world. Early movers did well, but easy credit eventually drove house prices beyond the reach of younger cohorts. Debt service diminished effective demand, which instigated financial instability. Both social democracy and market liberalism are in crisis.


Distributed by NEP-HIS on: 2017-01-29

Review by: Sergio Castellanos-Gamboa, Bangor University


This paper emerged from Avner Offer’s Tawney Lecture at the Economic History Society’s annual conference, Cambridge, 3 April 2016 (the video of which can be found here).

In this paper Offer discusses two macroeconomic innovations of the 20th century, which he calls "the market turn": the changes in fiscal policy and financialisation that encompassed the shift from social democracy to market liberalism from the 1970s onwards. Social democracy is understood as a fiscal innovation which resulted in the doubling of public expenditure (from approximately 25 to 50 per cent of GDP between 1920 and 1980). Its aim was to reduce wealth inequality. Market liberalism encompassed a monetary innovation, namely the deregulation of credit, which allowed households to increase their indebtedness from around 50 to 150 per cent of personal disposable income, mainly for the purpose of home ownership. According to Offer, the end result of market liberalism was increasing wealth inequality. See Offer's depiction of this process in the graph below.

Two macroeconomic financial innovations in the 20th century, UK calibration. (Note: Diffusion curves are schematic, not descriptive.)

Offer considers that both social democracy and market liberalism are norms captured by the single concept of a "Just World Theory" (Offer & Söderberg, 2016). The ideals behind social democracy are said to be supported by ideas found in classical economics, while the ideals behind market liberalism are said to have emerged from a redefinition of the origins and nature of economic value found in neoclassical economics. Contrasting the ideas behind social democracy and market liberalism brings about questions such as:

  • Where does value come from?
  • Does it come from production, or from personal preferences and demand for the good or service?
  • What is just and fair?
  • What do we as individuals deserve as reward?
  • Is there really a trade-off between equality and efficiency?

Answering any of these questions is not simple, and heated debates abound around them. Offer, however, rescues the idea of life-cycle dependency, where the situation of the most vulnerable individuals is alleviated through collective risk pooling rather than financial markets. According to Offer, life-cycle dependency was the dominant approach to reducing poverty in most developed countries until the oil crisis of the early 1970s. The collapse of the Bretton Woods accord that followed led to the liberalization of credit by removing previous constraints. This in turn resulted in the "market turn".


Professor Avner Offer (1944). MA, DPhil, FBA. Emeritus Fellow of All Souls College, Oxford since 2011.

Offer then turns to analyse the events after the collapse of Bretton Woods that led to the increase of household indebtedness, focusing on the UK. The 1970s was a very volatile decade for Britain. For instance, oil price increases and the secondary banking crisis of 1973 resulted in the highest annual increase of the inflation rate on record. Offer argues, citing John Fforde (Executive Director of the Bank of England at the time), that the Competition and Credit Control Act 1971 was a leap of faith in the pursuit of greater efficiency in financial markets. The Act was accompanied by a new monetary policy in which changes in interest rates (the price of money) by the central bank were meant to control the quantity of money. Perhaps unexpectedly, and probably owing to a poor understanding of the origins of money, that was not the case. Previously lifted credit restrictions had to be reinstated.

Credit controls were again lifted in the 1980s. This time policy innovations went further by allowing clearing (i.e. commercial) banks to re-enter the personal mortgage market. The Building Societies Act 1986 allowed building societies to offer personal loans and current accounts, and opened a pathway for them to become commercial banks (which many did after 1989; all the societies that converted either collapsed or were taken over by clearing banks, or both). Up to the crash of house prices in September 1992, personal mortgage credit grew continuously and to levels never seen before in the UK. According to Offer, during this period both political parties supported the idea of homeownership and incentivised it through programs like "Help to Buy". However, the rise in the demand for housing, combined with the stagnation in the supply of dwellings, pushed up house prices, making it more difficult for first-time buyers to become homeowners. Additionally, according to Offer, the wave of easy credit of the 1980s brought with it an increase in wealth inequality and in the fragility of the financial system. As debt repayments grew as a proportion of income, consumption was driven down, with subsequent effects on production and services. On this Offer opined:

“In the quest for economic security, the best personal strategy is to be rich.” (p. 17)

The paper ends with possible and desirable futures for public policy initiatives to deal with today's challenges around wealth inequality and mounting personal credit. Offer argues that personal debt should be reduced through rising inflation, a policy-driven write-off, or a combination of both. He also argues for reinstating a regime in which credit is rationed. He states that financial institutions should not have the ability to create money and that housing market funding should therefore return to the old model of building societies. He has a clear preference for social democracy over market liberalism and as such argues that austerity should end, since it is having the exact opposite effects to what was intended.

Brief Comment

Offer's thought-provoking ideas come at a time when several political and economic events are taking place (e.g. Brexit, Trump's attack on Dodd-Frank, etc.) which, together, could be of the same magnitude as "the market turn". Once again economic historians could help better inform the debate. Citing R. H. Tawney, Offer opened the lecture (rather than the paper) by stating that:

“to be an effective advocate in the present, you need a correct and impartial understanding of the past.”

Offer clearly fulfils the latter, even though some orthodox economists might disagree with his inflationary and credit control proposals. As usual, his ideas are a great contribution to the debate around market efficiency at a time when the world seems to be in constant distress. Perhaps we ought to generate more and better research to understand the mechanisms through which market liberalism generated the current levels of wealth inequality and financial instability that Offer describes. More important, though, is analysing whether social democracy can bring inequality down as it did in the past. In my view, however, in a world where productivity seems stagnant, real wages are decreasing, and debt keeps growing, it is highly unlikely that the public sector can produce the recipe that will set us on the path of economic prosperity for all.

Additional References

Offer, A., & Söderberg, G. (2016). The Nobel Factor: The Prize in Economics, Social Democracy, and the Market Turn. Princeton University Press.
(Read an excellent review of this book here)

The Limitations of Correcting Data with more Data

Brazilian Export Growth and Divergence in the Tropics during the Nineteenth Century

By Christopher D. Absell and Antonio Tena Junguito (both at Carlos III, Madrid).

Abstract: The objective of this article is to reappraise both the accuracy of the official export statistics and the narrative of Brazilian export growth during the period immediately following independence. We undertake an accuracy test of the official values of Brazilian export statistics and find evidence of considerable under-valuation. Once corrected, during the post-independence decades (1821-50) Brazil’s current exports represented a larger share of its economy and its constant growth is found to be more dynamic than any other period of the nineteenth century. We posit that this dynamism was related to an exogenous institutional shock in the form of British West Indies slave emancipation that afforded Brazil a competitive advantage.


Distributed by NEP-HIS on: 2015-05-22 and published under the same title in Journal of Latin American Studies (Online, April 2016)

Reviewed by Thales A. Zamberlan Pereira (University of São Paulo)

The best place to find the (rather scarce) macroeconomic data for 19th-century Brazil is the official statistics compiled by the Brazilian Statistics Institute (IBGE). The IBGE data are the main source in Brian Mitchell's international historical statistics, and both are commonly used in the literature exploring Brazilian economic history. The paper by Absell and Tena is an attempt to test the accuracy of these sources by looking at official export statistics between 1821 and 1913. If nothing else, this already makes it an interesting paper.


The focus on export data rests on the argument that the Brazilian economy remained stagnant during the decades that followed Brazil's independence until 1850, when renewed economic growth began. While the more recent literature suggests the development of a domestic economy before 1850, the more "classic" literature focuses on the foreign sector to calculate Brazil's economic growth in the 19th century.

Absell and Tena confirm previous findings that official export statistics undervalued exports after 1850. But their study extends to the earlier period and suggests that official statistics also had a significant bias for the first half of the 19th century. In particular, their analysis suggests that Brazilian export growth before 1850 was much higher than previously assumed and that a change in international demand, especially for coffee, was the principal determinant of this growth. The last section of the paper tries to explain the sources of Brazil's "dynamic export growth" during the post-independence decades and shows that an increase in foreign demand was much more important than changes in domestic productivity. The high rate of growth in exports between 1821 and 1850, a very interesting result, is calculated by deflating prices with an index built from a new series of commodity prices.




All of Absell and Tena's results are grounded in the price correction of the official export data and, therefore, the most interesting part of the paper is the reconstruction of Brazil's export statistics. To correct the official data, they used international prices for the different commodities (mainly cotton, sugar, and coffee) and subtracted freight rates, insurance costs, and export taxes. That is, they converted c.i.f. (cost, insurance and freight) values to f.o.b. (free on board), creating new series for these variables. For insurance and freight rates they used trade data between Rio de Janeiro and Antwerp. It should be noted, however, that a large part of cotton exports before 1850 went to Britain, and freight rates between Brazil and Liverpool were half of what they were for freight travelling to Portugal or France.
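The c.i.f.-to-f.o.b. adjustment is, in essence, simple arithmetic. The sketch below uses hypothetical figures and rates (it is not the authors' actual code or data) to illustrate the deduction of freight, insurance and export duties from a destination-port price:

```python
# Illustrative sketch of a c.i.f. -> f.o.b. correction. All rates and the
# example price are hypothetical; Absell and Tena derive theirs from
# Rio de Janeiro-Antwerp trade data.

def cif_to_fob(cif_price, freight_rate, insurance_rate, duty_rate):
    """Deduct freight, insurance and export duty (each expressed as a
    share of the c.i.f. price) to recover the f.o.b. value at origin."""
    return cif_price * (1.0 - freight_rate - insurance_rate - duty_rate)

# A commodity quoted at 100 pence (c.i.f.), with 10% freight, 2% insurance
# and a 7% export duty (the upper bound assumed in the working paper):
fob = cif_to_fob(100.0, 0.10, 0.02, 0.07)
print(round(fob, 2))  # 81.0
```

The reviewer's point about duties below matters here: raising `duty_rate` from 0.07 to 0.20 drops the implied f.o.b. value from 81 to 68 on the same quote, so under-measured duties mechanically overstate the corrected export values.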

Absell and Tena argue that the official export data were sourced from a weekly table organized "by a government committee in consultation with local commodity brokers and commercial associations." This information was then verified by the Ministry of Finance, which sent the tables to provincial customs houses (which calculated the tax revenue) and also to major news periodicals. If the official values were organized like this for the whole period under study, as the authors argue, it would be easier to doubt the accuracy of the export statistics. But it is difficult to understand how a system of weekly information could work in a country the size of Brazil during the 19th century. Before 1850, northern provinces like Maranhão had stronger business relationships with Lisbon and Liverpool than with Rio de Janeiro. Some northern provinces did not support independence in 1822 because of close economic ties with Portugal.


An additional issue is that many important provinces, even after 1850, did not use the weekly table to calculate their taxes. Evidence suggests that in Minas Gerais and São Paulo, two major coffee exporters, the government used a fixed price system to calculate taxes. See, for example, debates at the provincial assembly of Rio de Janeiro, November 1862, 1879; available online. This information, of course, does not invalidate the argument about the inaccuracy of official values, but it provides some clues that the authors’ correction could have a significant bias as well.

Another problem with the transformation to f.o.b. prices regards export duties. In the working paper version of this article, the authors assume this "additional trade cost" represented between 1 and 7 per cent of export values. There is extensive evidence, however, that export taxes were a much higher burden throughout the 19th century. Debates at the Chamber of Deputies, the Senate, and in newspapers show that before the fiscal reform of the 1830s, export duties for sugar and cotton could reach more than 20 per cent. Export duties also varied across provinces. After 1850, they continued to be at least 10 per cent. The export duties used by Absell and Tena are undervalued because their source for 1821 to 1869 only shows the total revenue collected by the central government, not the revenue collected by provincial custom-houses. Making assumptions in such calculations is valid, but information regarding data sources should have been more clearly explained in the published version.


Because the objective of the authors is to correct export values using more accurate price data, it should be clear that they do not use only prices for Brazilian commodities to adjust the official statistics. To correct the value of Brazilian cotton exports, for example, they use price information for Guyana Raw (Berbice or Demerara) and Middling Uplands (United States) in the United Kingdom. The figure below shows the price of an arroba of cotton in pennies (d) from four different sources, including two price series for Brazil not used in Absell and Tena's paper. The first is the price from the official statistics (IBGE), the second is the price of cotton at the port of Maranhão, the third is the price of cotton from Maranhão in Liverpool, and the last is the average price of West Indies cotton in Liverpool. As can be seen, using prices for Brazilian cotton would change some of the magnitudes that the paper proposes.


In summary, the paper by Absell and Tena makes a worthy contribution in proposing a revisionist approach to an important source. An important problem in the paper, however, is that it does not discuss how its own sources could limit its conclusions, a crucial aspect of any revisionist study.

Where is the growth?

Mismeasuring Long Run Growth: The Bias from Spliced National Accounts

by Leandro Prados de la Escosura (Carlos III)

Abstract: Comparisons of economic performance over space and time largely depend on how statistical evidence from national accounts and historical estimates are spliced. To allow for changes in relative prices, GDP benchmark years in national accounts are periodically replaced with new and more recent ones. Thus, a homogeneous long-run GDP series requires linking different temporal segments of national accounts. The choice of the splicing procedure may result in substantial differences in GDP levels and growth, particularly as an economy undergoes deep structural transformation. An inadequate splicing may result in a serious bias in the measurement of GDP levels and growth rates.

Alternative splicing solutions are discussed in this paper for the particular case of Spain, a fast-growing country in the second half of the twentieth century. It is concluded that the usual linking procedure, retropolation, has serious flaws as it tends to bias GDP levels upwards and, consequently, to underestimate growth rates, especially for developing countries experiencing structural change. An alternative interpolation procedure is proposed.


Distributed by NEP-HIS on: 2015-01-09

Reviewed by Cristián Ducoing

Dealing with National Accounts (hereafter NA) is hard; dealing with NA in the long run is even harder…

Broadly speaking, a quick and ready comparison of economic performance for a period of sixty years or more would typically source its data from the Maddison project. However, as with any other human endeavour, these data are not free from error. Potential and actual errors in measuring economic growth are highly relevant to economic history research, particularly if we want to improve its public policy impact. See for instance the (brief) discussion in Xavier Marquez's blog on how the choice of measure can significantly under- or overstate the importance of Lee Kuan Yew as ruler of Singapore.

The paper by Leandro Prados de la Escosura therefore contributes to a growing debate on which is the "best" GDP measure to ascertain economic performance in the long run (i.e. 60 or more years). For some time now Prados de la Escosura has been searching for new ways to measure economic development in the long run. This body of work now comprises over 60 articles in peer-reviewed journals, book chapters and academic books. In this paper, the latest addition to assessing welfare levels in the long run, Prados de la Escosura discusses the problems of using alternative benchmarks and of splicing NA in a country that underwent marked structural change, Spain. The main hypothesis developed in this article concerns the differences that can appear in long-run NA depending on the method used to splice NA benchmarks. So, the BIG question is: retropolation or interpolation?

Leandro Prados de la Escosura.

Retropolation, as Prados de la Escosura explains, is a method "widely used by national accountants (and implicitly accepted in international comparisons). [T]he backward projection, or retropolation, approach, accepts the reference level provided by the most recent benchmark estimate". In other words, the researcher accepts the current benchmark and splices it with the past series (using the variation rates of the past estimations). What is the issue here? Selecting the most recent benchmark results in a higher GDP estimate because, by its nature, this benchmark encompasses a greater number of economic activities. For instance, the ranking of relative income for the UK and France changes significantly when estimates of prostitution and narcotics trafficking are included. This "weird" example shows how, with a higher current level and using past variation rates, long-run estimates of GDP will be artificially inflated. This approach can thus lead us to historical anomalies such as a richer Spain overtaking France in the nineteenth century (see Prados de la Escosura's figure 3 below).

An alternative to the backward projection linkage is the interpolation procedure. This method accepts the levels computed directly for each benchmark year as the best possible estimates, on the grounds that they have been obtained with "complete" information on quantities and prices in the earlier period. This procedure keeps the initial level unaltered, which will probably be lower than the level estimated by the retropolation approach.
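A minimal numerical sketch may help fix the difference between the two procedures. The series and the 10% benchmark revision below are invented for illustration, not Prados de la Escosura's data, and the geometric distribution in `interpolate` is one simple way of implementing the idea:

```python
# Two ways of splicing an old GDP series onto a revised, higher benchmark.

def retropolate(old, new_benchmark):
    """Backward projection: accept the new benchmark level and carry it
    back with the old series' growth rates, lifting all past levels."""
    scale = new_benchmark / old[-1]
    return [level * scale for level in old]

def interpolate(old, new_benchmark):
    """Keep the initial level as estimated and distribute the benchmark
    revision geometrically over the intervening years."""
    n = len(old) - 1
    ratio = new_benchmark / old[-1]
    return [old[t] * ratio ** (t / n) for t in range(len(old))]

old_series = [100.0, 110.0, 121.0]  # old estimates; final year is the benchmark
revised = 133.1                     # new benchmark level, 10% higher

print(retropolate(old_series, revised))  # every past level raised by 10%
print(interpolate(old_series, revised))  # starts at 100.0, ends near 133.1
```

Retropolation leaves growth rates unchanged but raises all past levels (hence a "richer" nineteenth-century Spain); interpolation preserves both benchmark levels and therefore implies faster growth in between.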

A further method to splice NA series derives from the two described above: the "mixed splicing" proposed by Angel de la Fuente (2014), which uses a parameter to capture the severity of the initial error in the original benchmark. The problem with this solution is the arbitrary value assigned to the parameter. Let's see the issue graphically, using data from the Maddison project. As is well known, these figures were recently updated by Jutta Bolt and Jan Luiten van Zanden, the database having been built thanks to the contributions of several scholars around the world and using a common currency (the international Geary-Khamis dollar) to measure NA. Figure 1 shows a plot of GDP per capita of France, the UK, the USA and Spain using data from the Maddison project.

GDP per capita $G-K 1990. France, UK, USA and Spain. 1850 – 2012

The graph suggests that Spain was always poorer than France. But this could change if the chosen method to splice NA were the retropolation approach. We probably need a graph with France alone to appreciate the differences. Please see figure 2:

GDP pc ratio between Spain and France. Bolt & van Zanden (2014) with data from Prados de la Escosura (2003)

Figure 2 now suggests an apparent convergence of Spain with France in the period 1957 to 2006. The average growth rate for Spain in this period was almost 3.5% p.a., while in the case of France average growth shrinks to 2.2% p.a. Anecdotal observation as well as documented evidence on Spanish levels of inequality and poverty make this result hard to believe. Prados de la Escosura helps us ascertain these differences in measurement graphically by bringing together estimates from the retropolation and interpolation approaches in a single graph (see figure 3 below):

Figure 3. Spain's Comparative Real Per Capita GDP with Alternative Linear Splicing (2011 EKS $) (logs).

In summary, this paper by Prados de la Escosura is a great contribution to the debate on long-run economic performance. It poses interesting challenges to scholars researching long-term growth and dealing with NA and international comparisons. Benchmarks, and the splicing of different sources, are always a source of problems for international comparative studies, but also for long-term studies of a single country. Moving beyond the technical implications discussed by Prados de la Escosura in this paper, economic history research could benefit from a debate on alternative measures or proxies for long-run growth, because GDP as the main source of international comparisons is becoming "dated" and ineffective for new research on inequality, genuine savings, energy consumption, complexity and the gaps between developing and developed countries, to name but a few.


Bolt, J. and J. L. van Zanden (2014). The Maddison Project: collaborative research on historical national accounts. The Economic History Review, 67 (3): 627–651.

Prados de la Escosura, Leandro (2003). El progreso económico de España (1850-2000). Madrid: Fundación BBVA, 762 pp.


1) This paper by Prados de la Escosura has already been published in Cliometrica under the same title.

2) Prados de la Escosura’s A new historical database on economic freedom in OECD countries | VOX, CEPR’s Policy Portal.

The Trespassing Thinker: Albert #Hirschman & #economic #development

The working paper used to source this post was found to have plagiarised its contents from work by Michele Alacevich (Loyola) and Ana Maria Bianchi (São Paulo). Further details are to be found here:

RePEc plagiarism accused offender: Pier Giorgio Ardeni

This finding does not diminish the merit of Beatriz Rodríguez-Satizábal's review below. But do bear in mind that any reference to Ardeni should in fact be read as pointing to the ideas of Alacevich and Bianchi.

– Bernardo Batiz-Lazo, General Editor NEPHIS (2015-01-12).

Being a Consultant “Expert” in a Developing Country: the Legacy and Lessons of Albert Hirschman

By Pier Giorgio Ardeni, Department of Economics, University of Bologna



After more than half a century, the reflections of Albert O. Hirschman on development assistance, the role of consultant "experts" in providing policy advice and the "visiting economist's syndrome" are still very current. In as much as Hirschman argued against all-encompassing policy frameworks, overall development plans and universal models, "one-size-fits-all" models abstracting from local, historical, geographic and institutional conditions have remained the prevailing modus operandi of international development agencies and governments in development assistance. In spite of Paul Krugman's criticism of Hirschman's lack of a mathematically consistent approach in favour of an ad hoc pragmatism, Hirschman's avoidance of assuming a toy model to deal with practical issues and the specificities of development problems in different countries – while still using rigorous and detailed analysis – appears to be a promising attitude of enormous relevance even today. If the rejection of the large-scale models of the heyday of development theory was due to the neoliberal policy wave that led to the "Washington consensus" – more market and less State – development assistance has remained firmly entrenched in the principles of balanced growth, all-encompassing liberalizing policy reforms and diffused marketization with an increasingly limited role for the State. Development assistance approaches have maintained a standard list of prescriptions, policy-reform recipes for all sectors, social, institutional and even political objectives, under the justification that "everything depends on everything". In this paper, I briefly review the evidence regarding the active pursuit of a paradigm that, sidelining Hirschman's unorthodox approach, has confirmed that we have "forgotten nothing and learned nothing", as Hirschman once said.
While Hirschmanian concepts like "linkages" and "leading sectors" and some of his famous parables – like the "tunnel effect" on inequality – have left an enduring mark on economists' perspectives, his "unbalanced growth" has been dismissed on ineffectual grounds, while his "empirical lantern" has been derided and abandoned. The lessons of Hirschman's consultant experience in the tropics have left a legacy that goes beyond his prescriptions: it is a philosophy, a conception of the world, a guiding set of principles that survives time. From that wilderness where Hirschman led his followers, it is only by re-igniting that lantern that we can wisely contribute to the "development" of others as savvy and informed "experts".

Review by Beatriz Rodríguez-Satizábal

Since his death in December 2012, Albert Otto Hirschman's (1915-2012) life and work have been celebrated by specialists and the mass media: a great lateral thinker, an optimistic economist and resistance figure, a planner who believed in doubt, and a worldly philosopher are just some of the descriptions used of him. But perhaps his contributions to theories of economic development, and his other useful approaches to economic behaviour, have yet to receive all the attention they deserve. Hirschman was an economist who gently trespassed into other disciplines while consulting for governments in developing countries (e.g. Colombia). Through these efforts he offered a unique strategy for economic development.

His approach began by understanding the conditions of each country, disregarding generalizations based on mathematical, one-size-fits-all models. Hirschman argued that disequilibria should be encouraged to stimulate growth and to help mobilize resources. Moreover, he noted that developing countries required more than financial capital to implement important economic decisions. He understood economic development as the product of successful habits, which began by observing, identifying and tackling "economic needs". This view again departed from recipes emanating from general equilibrium models. Moreover, he argued that a strategic, opportunistic approach enabled making the best of what was available in each country, including elements such as latent entrepreneurial activity and what government policy could realistically achieve.

Inspired by his own experience studying at Berkeley in the 1980s and, later, as a development economist in Africa, Pier Giorgio Ardeni's paper (distributed by NEP-HIS on 2014-09-29) explores the potential of Hirschman's work for future discussions on how to promote development in countries that have had a taste of all the known formulas. The first half of the essay combines a concise summary of Hirschman's work with a critical review of Ardeni's own experience. The second half discusses broadly the evolution of the development debate and the lessons from Hirschman's work.

Moving on from Krugman's (1994) criticism about the lack of a mathematically consistent approach in Hirschman's work, Ardeni brings together the resilience of Hirschman's strategy and the reignited interest in specialized consultancy. As a follower of Hirschman, Ardeni uses his empirical work as an example that working in the field is not overrated for a development economist, concluding (p. 25) that after more than half a century, Hirschman's reflections on the role of consultant experts in development assistance are still current.

Hirschman asked more from the so-called consultant experts. In his books, and in Adelman's (2013) biography, the emphasis is on their role as educated readers of reality. These 'readers' would avoid the use of 'models' and later champion the creation of specific strategies, which would include diverse sectors of society in similarly diverse activities. Nowadays, this assumption is more relevant than ever.

The main challenge in bringing Hirschman back into the scope of twenty-first-century development discussions is to call attention to the need for whoever is invited as a 'consultant' to witness the other ways of developing that are proliferating around the world. Can each country reach its potential in its own way? As Hirschman recalled, if one country developed in a certain way, that does not mean that other countries will do it the same way. However, each country should be able to learn from the mistakes witnessed in other, more advanced countries.

Drawing on Hirschman's work, Ardeni offers three lessons which are useful today for those planning for economies that have still not reached the highest possible level of development (hopefully, a level measured in their own terms): 1) large-scale development models should be rejected, 2) local and historical conditions matter, and 3) the empirical lantern remains very much needed.

Further readings:

  • Adelman, J. (2013) Worldly Philosopher: The Odyssey of Albert O. Hirschman. Princeton, NJ: Princeton University Press.
  • Hirschman, A. (1995) A Propensity to Self-subversion. Cambridge, MA: Harvard University Press.
  • Hirschman, A. (1970) Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, MA: Harvard University Press.
  • Hirschman, A. (1958) The Strategy of Economic Development. New Haven, CT: Yale University Press.
  • Krugman, P. (1994) ‘The Fall and Rise of Development Economics’, pp. 39-58. In Rodwin, L. and Schön, D.A. (eds) Rethinking the Development Experience: Essays Provoked by the Work of Albert O. Hirschman. Washington, D.C.: The Brookings Institution.

Who Will Get the Bill? Lessons from #EconHis on Scottish Independence #indyref

State dissolution, sovereign debt and default: Lessons from the UK and Ireland, 1920-1938


Nathan FOLEY-FISHER (Federal Reserve Board)

Eoin MCLAUGHLIN (University of St Andrews)


We study Ireland’s inheritance of debt following its secession from the United Kingdom at the beginning of the twentieth century. Exploiting structural differences in bonds guaranteed by the UK and Irish governments, we can identify perceived uncertainty about fiscal responsibility in the aftermath of the sovereign breakup. We document that Ireland’s default on intergovernmental payments was an important event. Although payments from the Irish government ceased, the UK government instructed its Treasury to continue making interest and principal repayments. As a result, the risk premium on the bonds the UK government had guaranteed fell to about zero. Our findings are consistent with persistent ambiguity about fiscal responsibility far beyond sovereign breakup. We discuss the political and economic forces behind the Irish and UK governments’ decisions, and suggest lessons for modern-day states that are eyeing dissolution. “Further, in view of all the historical circumstances, it is not equitable that the Irish people should be obliged to pay away these moneys” – Eamon De Valera, 12 October 1932


Review by Anna Missiaia

The current public debate on the possible secession of Scotland has largely focused on the economic effects for Scotland (as opposed to the rest of the UK). Paul Krugman’s eloquent post “Scots, What the Heck?” warns of the monetary issues that would arise after a victory of the “yes” vote in the Scottish independence referendum on September 18th, while Martin Wolf’s article “What happens after a Yes vote will shock the Scots” explains how Scotland would face years of negotiations and uncertainty before settling down, all of which would come at a cost. But do all economic consequences of independence really fall exclusively on those who leave? Economic history can bring some insights on the matter.


The paper by Nathan Foley-Fisher and Eoin McLaughlin was circulated by NEP-HIS on 2014-09-05. This research explores how the public debt inherited by Ireland was dealt with after Irish independence in 1921.

After independence, and as a result of the negotiations on sovereign debt, the Irish committed to repay land bonds that had previously been used to finance a land reform in the country. In 1932 the Irish government decided to stop interest and principal repayments on these bonds: Ireland effectively defaulted on public debt it had inherited from the UK. However, the Irish default had no consequences for bondholders, because the British government decided to assume those liabilities and continue with the payments.


Foley-Fisher and McLaughlin looked at the evolution of the spread between Irish land bonds and “regular” British bonds to assess the reaction of investors. Their methodology was intuitive and straightforward: they identified structural breaks in the spread series to assess which events affected the risk premium. The two main breaks correspond to the Anglo-Irish War, during which there was an elevated risk of default by farmers, and to 1932, when the possibility of Ireland defaulting on the land bonds started to emerge.
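The intuition behind break detection in a spread series can be made concrete with a toy sketch. The code below is not the authors’ implementation: the spread values are synthetic, `find_break` is a hypothetical helper name, and the estimator is a naive least-squares single-break rule (choose the split point that minimizes the combined squared deviations around each segment’s mean), purely to illustrate the idea.

```python
# Minimal sketch (not the authors' code): locating a single structural
# break in a bond-spread series by least squares over candidate splits.

def find_break(spread):
    """Return the index that splits the series into two segments with
    the smallest combined sum of squared deviations from each
    segment's mean (a naive single-break estimator)."""
    def ssr(segment):
        mean = sum(segment) / len(segment)
        return sum((x - mean) ** 2 for x in segment)

    best_idx, best_ssr = None, float("inf")
    # Require a couple of observations on each side of the candidate break.
    for t in range(2, len(spread) - 2):
        total = ssr(spread[:t]) + ssr(spread[t:])
        if total < best_ssr:
            best_idx, best_ssr = t, total
    return best_idx

# Synthetic spread: low and stable, then jumping to a higher level
# (as around a default scare).
spread = [0.10, 0.12, 0.09, 0.11, 0.10,   # calm period
          0.85, 0.92, 0.88, 0.90, 0.87]   # after the break

print(find_break(spread))  # → 5, the start of the high-spread regime
```

With real data one would use a formal structural-break procedure (a Chow-type test or a Bai-Perron multiple-break estimator) on the yield spread, but the selection logic is the same: the estimated break date is the one that best partitions the series into distinct regimes.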


The estimates of Foley-Fisher and McLaughlin suggest that the increased spread (which originated in both breaks) remained “high” long after independence, in spite of the formal commitments by both the Irish (to repay) and the British (to guarantee payments). Following the Irish default, the spread returned to zero once the UK government started to repay bondholders.

The authors identify several reasons why the British government decided to back the Irish debt rather than pass the burden of the default on to bondholders. These included the relatively contained cost for the UK Treasury, the fact that most bondholders were based in the UK, and the UK’s fear of being accused of a lack of commitment. In other words, the cost of allowing the default to stand would have been greater for the British. Foley-Fisher and McLaughlin also point out that the British willingness to take up such a burden depended on the particular relationship between Ireland and the UK. In other cases, such as the default of Newfoundland in 1932, the British government was content to let its former colony default, as the consequences for British bondholders were negligible.


In summary, the paper by Foley-Fisher and McLaughlin goes straight to the point, and is well organised and engaging. With a fairly simple empirical strategy they offer insights that speak to economic historians as well as to those now commenting on the Scottish referendum. The “take home” message from this history is the following: after independence, a risk premium on inherited public debt has to be paid, and investors may demand this premium for many years after secession. The Treasury of the former union might (or might not) decide to guarantee the former debt should the newly independent state default. However, the choice to do so depends on many factors, and not all of them can be foreseen. In the words of Martin Wolf: “however amicably a divorce begins, that is rarely how it ends”, and the wealthy abandoned spouse might decide to guarantee the debts of its other half. Or not.



On Macroeconomics After the Financial Crisis

Short-Run Macro After the Crisis: The End of the “New” Neoclassical Synthesis?

By Oliver Landmann (Albert-Ludwigs-University Freiburg)

Abstract: The Financial Crisis of 2008, and the Great Recession in its wake, have shaken up macroeconomics. The paradigm of the “New” Neoclassical Synthesis, which seemed to provide a robust framework of analysis for short‐run macro not long ago, fails to capture key elements of the recent crisis. This paper reviews the current reappraisal of the paradigm in the light of the history of macroeconomic thought. Twice in the past 80 years, a major macroeconomic crisis led to the breakthrough of a new paradigm that was to capture the imagination of an entire generation of macroeconomists. This time is different. Whereas the pre‐crisis consensus in the profession is broken, a sweeping transition to a single new paradigm is not in sight. Instead, macroeconomics is in the process of loosening the methodological straightjacket of the “New” Neoclassical Synthesis, thereby opening a door for a return to its original purpose: the study of information and coordination in a market economy.

Persistent Link:

Reviewed by Catherine Dorman (final-year BSc Business Economics student, Bangor University, Wales)


This paper was distributed by NEP-HIS on 2014-02-08, and it addresses the impact that the recent financial crisis has had upon macroeconomic thought, specifically how the New Neoclassical Synthesis has held up to scrutiny following the most recent economic debacle. Landmann offers an overview of the history and progression of macroeconomic thought from the “Keynesian revolution” (p. 4) to the New Neoclassical Synthesis, right up to contemporary economics and its response to current macroeconomic issues.

The purpose of Landmann’s paper is to explain how economics has evolved since the Keynesian school of thought emerged in the aftermath of the 1930s depression, and to show how the macroeconomic community has been left splintered by the recent financial crisis, with no new consensus in sight. It asks: Why has this occurred? How did the New Neoclassical Synthesis fail to foresee or explain the worst economic downturn since the 1930s? Finally, it asks the all-important question: Is this necessarily a bad situation to be in? Or has smashing the previous framework to pieces created an environment in which macroeconomics can really explore and develop without the shackles of archaic and contextually inapplicable economic theory?

Prof. Dr. Oliver Landmann (photo: Schneider)

Landmann introduces his paper by assessing the state of macroeconomic affairs, operating within a New Neoclassical Synthesis environment, in the run-up to the financial crisis of 2008. The ‘Great Moderation’ describes a period of economic stability spanning from the 1980s to 2008, characterized by a continually stable business cycle (Davies and Kahn, 2008). Famously, Ben Bernanke, who popularized the phrase ‘Great Moderation’, attributed this period of economic success to structural change, improved macroeconomic policies, and good luck (Bernanke, 2004). Ultimately, Landmann describes a period in which the Great Moderation had lulled the economic community into a false sense of stability, much like that described by Hyman Minsky (Minsky, 1992).

The next section of the paper is dedicated to creating a contextual understanding, achieved by tracing the evolution of economic thought from Keynes to the New Neoclassical Synthesis.

Consider Fig 1 for a brief overview of the changes of economic thought from the 1930s to 2008:

Fig. 1

As is evident across each of these theories, their explanatory power tends to be relatively finite. The theories of Adam Smith and John Maynard Keynes were deconstructed and meshed together to explain the economy’s operation at a specific point in time, and the result came to be known as the Neoclassical Synthesis, largely credited to the work of Paul Samuelson during the 1950s (Samuelson, 1955). It took the underlying Keynesian idea that underemployment can be reduced through monetary and fiscal policy, while retaining classical equilibrium analysis to explain resource allocation and relative prices (p. 4). This approach was successfully adopted in developed countries as an effective treatment for their economies after the Second World War.

The stability and growth created through the adoption of this macroeconomic approach helped to develop confidence in the prescriptive capabilities of economic theory. However, as history has taught us, ceteris paribus does not hold in reality. The theory was largely nullified in the 1960s and 1970s because it had been unable to predict stagflation, and the Phillips Curve was completely undermined (Motyovszki, 2013).
Consider Fig 2 for a concise history of the economic theory covered in this paper.

Fig. 2
(Source: Short-Run Macro After the Crisis: The End of the “New” Neoclassical Synthesis?, by Oliver Landmann.)

The result of this was a new hybrid economic theory: New Classical economics. From this theory came the Real Business Cycle model, which argued that cycles result from the reactions of optimizing agents to real disturbances, such as changes in technology.

In the 1990s, the New Neoclassical Synthesis emerged as a combination of New Keynesian and New Classical theories, and became the basis of economic practice during the Great Moderation. Policy makers felt that the short-term interest rate was a sufficient instrument of economic management, and the business cycle was believed to have been overcome (Aubrey, 2013).

Landmann’s paper addresses how the economic crash of 2008 threw macroeconomics into turmoil. The New Neoclassical Synthesis had not fully incorporated the financial market into its model, and the result was that it was inadequate as a means of remedying problems in the economy (McCombie and Pike, 2012). Landmann makes a good point of acknowledging that although financial economics paid close attention to the behavioural antics of the banking sector, these were fundamentally disconnected from the practised model of the New Neoclassical Synthesis.

In light of this, the once unquestioned macroeconomic doctrine was suddenly under scrutiny. One of the greatest criticisms of the New Neoclassical Synthesis is its reliance upon “elegant” (p. 12) mathematical equations, which are often predictively insufficient given the sheer number of assumptions required to create a working model. It does not fully account for factors such as irrationality and uncertainty (BBC News, 2014), with the result that its predictions can be wildly inaccurate (Caballero, 2010). Assumptive behavioural models, such as the Robinson Crusoe model, can also create coordination problems when they become overly stylized, to the detriment of economic viability (Colander et al., 2009).

Consequently, macroeconomics has begun to pay more attention to realistic behaviour, given that information is rarely perfect in actuality (Caballero, 2010; Sen, 1977).

Landmann concludes that the financial crisis has produced a flood of new macroeconomic theories, and that the New Neoclassical Synthesis still has pedagogic merit. He does, however, primarily blame the era of the Great Moderation for a period of complacency among economic academics. His greatest criticism is the uncritical acceptance of one concept of economics purely on its merit during a stable business cycle, without inquisitive forethought into how it would respond when faced with an exogenous or endogenous shock.


This paper is highly relevant, and its themes and messages are certainly ones that economists need to consider in the aftermath of such a fresh and colossal economic recession. There is perhaps an oversimplification of parts of the timeline of economics: broadly defining all economists during the Great Moderation as belonging to one school of thought is unfair and inaccurate, but for the purpose of the paper it is perhaps forgivable.

Landmann makes little mention of the pattern by which economic thought often evolves. Gul, Chaudhry and Faridi describe economic thought as developing from “quick fixes” (Gul et al. 2014: 11), which would help to explain why very little new economic thought developed during the Great Moderation: the need wasn’t there. Through their histories of economic development, both Gul et al. (2014) and Landmann suggest that macroeconomics is reactionary rather than precautionary, despite its attempts to be prophetic.

This echoes the “Lucas Critique”: the understanding that economic equations developed and implemented under one policy regime are unlikely to remain relevant or explanatorily applicable under another (Lucas, 1976).

Finally, the paper does little to explore the external factors that led to the period of the Great Moderation. Globalisation had really taken hold during this time, with containerization in full flow (some 90% of all non-bulk cargo worldwide being moved by container ships (Ebeling, 2009)), and advances in computation and communication technology (Bernanke, 2004) helped to stabilize inventory stocks, something acknowledged as a contributory factor in cyclical fluctuations (McConnell and Perez Quiros, 2000).

Ultimately, the paper reaches the same conclusion that most macroeconomic papers do: there is no definitive explanation for everything that occurs within the economy, and certainly no blanket approach that will procure the best outcomes on every occasion. The paper goes a step further, explaining why it can be damaging to subscribe rigidly to one theory of macroeconomics: doing so discourages continual change and forethought, which in turn can stunt the evolution of explanatory macroeconomic thought.


Aubrey, T., 2013. Profiting from Monetary Policy: Investing Through the Business Cycle. 1st ed. New York: Palgrave Macmillan.

BBC News, 2014. Did Hyman Minsky Find the Secret Behind Financial Crashes? Available at: [Accessed 07 April 2014].

Bernanke, B. S., 2004. Remarks by Governor Ben S. Bernanke at the Meeting of the Eastern Economic Association. Available at: [Accessed 07 April 2014].

Caballero, R. J., 2010. Macroeconomics After the Crisis: Time to Deal with the Pretense-of-Knowledge Syndrome. Journal of Economic Perspectives 24(4): 85-102.

Colander, D. C. et al., 2009. The Financial Crisis and the Systemic Failure of Academic Economics. Kiel: Kiel Institute for the World Economy.

Davies, S. J., and Kahn, J. A., 2008. Interpreting the Great Moderation: Changes in the Volatility of Economic Activity at the Macro and Micro Levels. Cambridge, MA: National Bureau of Economic Research. Available at: [Accessed 07 April 2014].

Ebeling, C. E., 2009. Evolution of a Box. Invention and Technology 23(4): 8-9.

Gul, E., Chaudhry, I. S. and Faridi, M. Z., 2014. The Classical-Keynesian Paradigm: Policy Debate in Contemporary Era. Munich: Munich Personal RePEc Archive. Available at: [Accessed 07 April 2014]

Lucas, R. E., 1976. Econometric Policy Evaluation: A Critique. Carnegie-Rochester Conference Series on Public Policy 1: 19-46.

McCombie, J. S. L., and Pike, M., 2012. The End of the Consensus in Macroeconomic Theory? A Methodological Inquiry. Unpublished. Cambridge Centre for Economic and Public Policy WP02-12, Department of Land Economy: University of Cambridge. Available at: [Accessed 07 April 2014]

McConnell, M. M., and Perez Quiros, G., 2000. Output Fluctuations in the United States: What Has Changed Since the Early 1980s? Federal Reserve Bank of San Francisco. Available at: [Accessed 07 April 2014].

Minsky, H. P., 1992. The Financial Instability Hypothesis. New York: The Jerome Levy Economics Institute of Bard College.

Motyovszki, G., 2013. The Evolution of the Phillips Curve Concepts and Their Implications for Economic Policy. Budapest: Central European University.

Samuelson, P., 1955. Economics. 3rd ed. New York: McGraw-Hill.

Sen, A. K., 1977. Rational Fools: A Critique of the Behavioral Foundations of Economic Theory. Philosophy and Public Affairs. 6(4): 317-344.