Category Archives: USA

Beggar-thy-neighbouring-drinker: Effects of Prohibition on American Infant Mortality in the 1930s

Infant Mortality and the Repeal of Federal Prohibition

By: David S. Jacks (Simon Fraser University), Krishna Pendakur (Simon Fraser University), and Hitoshi Shigeoka (Simon Fraser University).

Abstract: Exploiting a newly constructed dataset on county-level variation in Prohibition status from 1933 to 1939, this paper asks two questions: what were the effects of the repeal of federal prohibition on infant mortality? And were there any significant externalities from the individual policy choices of counties and states on their neighbors? We find that dry counties with at least one wet neighbor saw baseline infant mortality increase by roughly 3% while wet counties themselves saw baseline infant mortality increase by roughly 2%. Cumulating across the six years from 1934 to 1939, our results indicate an excess of 13,665 infant deaths that could be attributable to the repeal of federal Prohibition in 1933.

URL: http://www.nber.org/papers/w23372

Distributed by NEP-HIS on: 2017-05-21

Review by: Gregori Galofré-Vilà (Bocconi University and University of Oxford)

In 1919, the National Prohibition Act (also known as the Volstead Act), which passed with the support of American rural Protestants and social progressives, mandated that “no person shall manufacture, sell, barter, transport, import, export, deliver, furnish or possess any intoxicating liquor.” The 1920s became the decade when Al Capone smuggled alcohol along the Canadian and Mexican borders, with the well-known boost to organized crime that followed. President Roosevelt lifted Prohibition in 1933, although its rejection proceeded locally through referendums and elections. This staggered repeal of Prohibition divided the US into ‘dry’ and ‘wet’ areas. In dry areas, people either abstained or bought alcohol, sometimes toxic methanol homebrews, at illegal underground bars or in ‘wet’ neighbouring counties. Meanwhile, in ‘wet’ areas, the party was on! Interestingly enough, the end of Prohibition created what epidemiologists call ‘a natural experiment’. Such experiments arise from historical events that affect some people, communities or societies, but not others. This divergence offers the possibility of learning how political choices ultimately affect people’s lives, for better or for worse.

Figure 1

To explore the health impacts of the repeal of the National Prohibition Act, Jacks, Pendakur and Shigeoka (2017) constructed a new county-level dataset on variation in prohibition status from 1933 to 1939, and related it to earlier data on infant mortality from Fishback et al. (2011) and to additional controls (such as retail sales, New Deal spending, urbanisation and so on). They addressed two questions: (1) what were the effects of the repeal of federal Prohibition on infant mortality?; and (2) were there any significant externalities from the individual policy choices of counties and states on their neighbours? On the first question, they found that the effects were quite small: from 1934 until 1939, there was an excess of 13,665 infant deaths (or 1.2 additional deaths per 1,000 live births) that could be attributed to the repeal of Prohibition in 1933. Indeed, Fishback and co-authors found that the New Deal and climatic variations had greater impacts on infant mortality (Fishback et al. 2007; 2011). On the second question, their results indicated that cross-border policy externalities were likely to be important, and that the impact of an individual county’s prohibition status on infant mortality was driven by the prohibition status of its neighbours, with larger increases in infant mortality for dry counties bordering wet neighbours.

A very interesting feature of the paper is the methodological approach used to recognise the possibility of policy externalities across county borders. Owing to spillovers across these open borders, what mattered was not only a county’s own prohibition status, but also that of its neighbours. Hence, the authors distinguish among counties that allowed the sale of alcohol within their borders (‘wet’ counties), ‘dry’ counties whose neighbours were also dry, and ‘dry’ counties with at least one wet neighbour (‘dryish’ counties). In addition to several robustness tests, I particularly like the border-pair discontinuity design based on neighbouring county-pairs, a modification of the methodology developed by Dube et al. (2010). The idea is that, given the social and economic similarities between neighbouring counties, these are likely to be a suitable control group, as they share common but unobserved characteristics with the treatment group. In other words, under this identification strategy, the prohibition status of counties within a county-pair is assumed to be uncorrelated with the differences in residual infant mortality in either county. This strategy, in turn, removes the need for instrumental variables to limit biases from unobserved or unmeasured confounders correlated with Prohibition.
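The logic of the border-pair design can be sketched with synthetic data: demeaning both the outcome and the treatment within each pair of neighbouring counties wipes out anything the pair shares, so only the within-pair contrast in prohibition status identifies the effect. Everything below (number of pairs, effect size, noise levels) is invented for illustration, not the authors' data or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 500

# Unobserved characteristics shared by both counties in a border pair
pair_effect = rng.normal(0.0, 1.0, n_pairs)
# Prohibition status of each county in the pair (1 = wet), assigned at random here
wet = rng.integers(0, 2, size=(n_pairs, 2)).astype(float)
true_beta = 0.02  # assumed treatment effect, for illustration only

# Outcome = shared pair effect + treatment effect + idiosyncratic noise
mortality = pair_effect[:, None] + true_beta * wet + rng.normal(0.0, 0.05, (n_pairs, 2))

# Demeaning within pairs is equivalent to including pair fixed effects:
# the shared pair_effect cancels, leaving only within-pair variation.
y = (mortality - mortality.mean(axis=1, keepdims=True)).ravel()
x = (wet - wet.mean(axis=1, keepdims=True)).ravel()
beta_hat = float(x @ y) / float(x @ x)
print(round(beta_hat, 3))  # close to the assumed true_beta of 0.02
```

Only pairs whose two counties differ in prohibition status contribute to the estimate, which is exactly why the design needs dry counties that border wet ones.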

Figure 2

While this is a really interesting paper, given the small effects it is possible that the hypothesised causal mechanism between Prohibition, maternal alcohol consumption during pregnancy (on which no data exist) and infant mortality does not fully capture the effects of Prohibition on health. If that is the case, the choice of infant mortality data is likely to underestimate the causal effect of Prohibition on health. For example, in The Body Economic, Stuckler and Basu (2013) argued that during the Great Depression the states with the most stringent Prohibition campaigns lowered adult drinking-related deaths by around 20% and also diminished suicide rates substantially. Yet the fact that Jacks et al. (2017) have been able to find effects of Prohibition on infant mortality highlights the relevance of Prohibition for health and warrants further research, nested in the wider literature on the Great Depression and the New Deal.

References

Dube, A., T.W. Lester, and M. Reich (2010), “Minimum Wage Effects Across State Borders.” Review of Economics and Statistics 92(4), 945-964.

Fishback, P.V., M.R. Haines, and S. Kantor (2007), “Births, Deaths, and New Deal Relief during the Great Depression.” Review of Economics and Statistics 89(1), 1-14.

Fishback, P.V., W. Troesken, T. Kollmann, M. Haines, P. Rhode, and M. Thomasson (2011), “Information and the Impact of Climate and Weather on Mortality Rates During the Great Depression.” In The Economics of Climate Change (Eds G. Libecap and R. Steckel). Chicago: University of Chicago Press, 131-168.

Jacks, D.S., K. Pendakur, and H. Shigeoka (2017), “Infant Mortality and the Repeal of Federal Prohibition.” NBER Working Paper No. 23372

Stuckler, D. and S. Basu (2013) The Body Economic: Why Austerity Kills. New York: Basic Books.

{Economics ∪ History} ∩ {North ∪ Fogel}

A Cliometric Counterfactual: What if There Had Been Neither Fogel nor North?

Claude Diebolt (Strasbourg University) and Michael Haupert (University of Wisconsin – La Crosse)

Abstract – 1993 Nobel laureates Robert Fogel and Douglass North were pioneers in the “new” economic history, or cliometrics. Their impact on the economic history discipline is great, though not without its critics. In this essay, we use both the “old” narrative form of economic history, and the “new” cliometric form, to analyze the impact each had on the evolution of economic history.

URL: http://d.repec.org/n?u=RePEc:afc:wpaper:05-17&r=his

Circulated by nep-his on: 2017-02-19

Review by: Thales Zamberlan Pereira (São Paulo)

Douglass North and Robert Fogel’s contributions to the rise of the “new” economic history are well known, but Diebolt and Haupert’s paper adds a quantitative twist to their roles as active supporters of cliometrics at a time when there was still resistance to applying new methods to the study of the past. Economic theory and formal modeling marked the division between the “old” and the “new” economic historians in the 1960s, and Diebolt and Haupert use two metrics to track the transformation of the field: 1) the increased use of graphs, tables, and especially equations during North’s period as editor (along with William Parker) of the Journal of Economic History between 1961 and 1966; and 2) citations of Fogel’s railroad work, to measure the impact of his innovations in economic history methodology.

Before showing their results on the positive influence of North and Fogel on quantitative economic history, the authors present a brief history of cliometrics, beginning with the 1957 meeting of the Economic History Association (EHA). It was there that Alfred Conrad and John Meyer presented their two foundational papers, on the use of economic theory and statistical inference in economic history and on the economics of slavery in the antebellum South. Out of that meeting, William Parker edited what was probably the first book of the cliometric movement, released in 1960.

It was during the 1960s, however, that larger changes would occur. First, Parker and North were appointed editors of the Journal of Economic History (JEH) in 1961 and began to promote papers that used more economic theory and mathematical modelling. Their impact appears in Figures 2 and 3, which show a measure of “equations per page” and “graphs, tables, and equations per page” in the JEH since its first issue in 1941.

Figure 2

Figure 3

To stay true to the spirit of the discussion, Diebolt and Haupert test the hypothesis that the 1961-1966 period had an enduring effect on the increase of “math” in the JEH. Despite a noticeable increase in the North and Parker years, it was only in 1970 that a significant “level shift” occurs in the series, and Diebolt and Haupert argue that this could be interpreted as a lagged effect of the 1961-1966 period. Their finding that 1970 marks a shift in the methodology of papers published in the JEH is consistent with the overall use of the word cliometrics in other publications, as a Google Ngram search shows.
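The intuition behind a "level shift" test can be illustrated on an invented series (this is not the authors' JEH data or their exact procedure): compare the mean of the series before and after a candidate break year with a simple two-sample t statistic, where a large value signals a shift in the level rather than noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "equations per page" series, 1941-1989, with an assumed
# jump in its level at 1970 plus random year-to-year noise.
years = np.arange(1941, 1990)
series = np.where(years < 1970, 0.5, 1.5) + rng.normal(0.0, 0.3, years.size)

# Welch-style two-sample t statistic for a break at 1970
pre, post = series[years < 1970], series[years >= 1970]
t_stat = (post.mean() - pre.mean()) / np.sqrt(
    pre.var(ddof=1) / pre.size + post.var(ddof=1) / post.size
)
print(round(t_stat, 1))  # large value -> strong evidence of a level shift
```

In practice one would scan candidate break years and pick the one maximizing the statistic (a Quandt-Andrews style search), but the single-break comparison captures the core idea.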

https://books.google.com/ngrams/interactive_chart?content=cliometrics&year_start=1930&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Ccliometrics%3B%2Cc0

In addition to the editorial impact of Douglass North at the JEH, the second wave of change in economic history during the 1960s came from Robert Fogel. In 1962, Fogel published his paper on the impact of railroads on American economic growth. The conclusion that railroads were not essential to America, along with the use of counterfactuals to arrive at that result, “attracted the attention of the young and the anger of the old” economic historians (McCloskey, 1985, p. 2). Leaving the long debate about counterfactuals aside, what Fogel’s work showed was that the economics methodology of the time was useful for overcoming the limitations of interpreting history based only on what historical documents offered at face value.

Diebolt and Haupert’s paper, therefore, shows that cliometric research in the JEH received a positive exogenous shock with North as editor, with Fogel supplying the kind of work the new editorial guidelines demanded. However, there is a complementary narrative about these developments that deserves to be mentioned. Many innovations in methodology brought to the field after 1960 came from researchers who were primarily concerned with economic growth, not only with historical events. This idea appears in the paper when the authors note that, during his postgraduate studies, the starting point of Fogel’s research was the “large processes of economic growth” (p. 8). In addition, the observation that Fogel’s training program “was unorthodox for an economic historian” is also indicative that, in the 1960s, with new computational power and databases extending back to the 19th century, history was the perfect case study for testing economic theory.

This exogenous impact on the field, with clear beneficial results, is similar to the role Daron Acemoglu and his many co-authors played over the last decade in reviving economic history for a broader audience. Acemoglu’s initial focus, when he presented a different way of doing research in economic history, was on the present (i.e. long-run growth), not the past. It seems, therefore, that the use of mathematical models in economic history was not a paradigm shift in the study of history; rather, it followed the change in what counted as “being an economist” in the United States. After 1945, Samuelson’s Foundations of Economic Analysis set the standard for the training that economics students received, making mathematical models the dominant method in economics (Fourcade, 2009, p. 84). Cliometrics, by following this trend, created an additional way of doing research in economic history.

https://books.google.com/ngrams/interactive_chart?content=Economic+models&year_start=1930&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2CEconomic%20models%3B%2Cc0

One comparative advantage of the new economic historians, in addition to their “modern” training in economics, was something that can be called the Simon Kuznets effect. Both North and Fogel worked with Kuznets, and the development of macroeconomic historical databases at the NBER after the 1930s provided the ground for applying new methodologies to understand economic growth. In the first issue of the Journal of Economic History, Kuznets was already advocating the use of statistical analysis in the study of history (Kuznets, 1941). But the rising popularity of models and statistics in economic history, especially in the 1970s (see Temin, 2013), seems to be related to their capacity to illuminate the broader questions of economics. One notable example is Milton Friedman and Anna Schwartz’s monetary history of the United States, published in 1963. Friedman worked with Kuznets in the 1930s, and the book is typical of research in economic history focused on “contemporary” issues.

As Diebolt and Haupert claim, North and Fogel’s contribution is undeniable, but what about the counterfactual they propose in the title? Just as no single innovation was vital for economic growth, probably no single economic historian was a necessary condition for cliometrics. Without North and Fogel, the old economic historians might have had another decade, but by the 1970s the JEH would have been under new management.

References

  • Fourcade, M. (2009) Economists and Societies: Discipline and Profession in the United States, Britain, and France, 1890s to 1990s. Princeton, NJ: Princeton University Press.
  • Kuznets, S. (1941) ‘Statistics and Economic History’, The Journal of Economic History, 1(1), pp. 26–41.
  • McCloskey, D. N. (1985) ‘The Problem of Audience in Historical Economics: Rhetorical Thoughts on a Text by Robert Fogel’, History and Theory, 24(1), pp. 1–22. doi: 10.2307/2504940.
  • Temin, P. (2013) The Rise and Fall of Economic History at MIT. Working Paper 13–11. Cambridge, MA: MIT. Available at: https://papers.ssrn.com/abstract=2274908 (Accessed: 29 May 2017).

Is the Glass Half Full?: Positivist Views on American Consumption

Fifty Years of Growth in American Consumption, Income, and Wages

By Bruce Sacerdote (Dartmouth)

Abstract: Despite the large increase in U.S. income inequality, consumption for families at the 25th and 50th percentiles of income has grown steadily over the time period 1960-2015. The number of cars per household with below median income has doubled since 1980 and the number of bedrooms per household has grown 10 percent despite decreases in household size. The finding of zero growth in American real wages since the 1970s is driven in part by the choice of the CPI-U as the price deflator; small biases in any price deflator compound over long periods of time. Using a different deflator such as the Personal Consumption Expenditures index (PCE) yields modest growth in real wages and in median household incomes throughout the time period. Accounting for the Hamilton (1998) and Costa (2001) estimates of CPI bias yields estimated wage growth of 1 percent per year during 1975-2015. Meaningful growth in consumption for below median income families has occurred even in a prolonged period of increasing income inequality, increasing consumption inequality and a decreasing share of national income accruing to labor.

URL: http://EconPapers.repec.org/RePEc:nbr:nberwo:23292

Distributed by NEP-HIS on: 2017-04-23

Review by: Stefano Tijerina (Maine)

Contrary to the popular outcry that the gap between rich and poor in the United States has steadily widened since the 1960s and that the quality of life has steadily deteriorated, Bruce Sacerdote argues that the picture is not as grim: the steady rise of consumption among households “with below median income” is evidence that the national economy has continued to thrive for all U.S. citizens, not just those at the top.[1] In “Fifty Years of Growth in American Consumption, Income, and Wages” Sacerdote argues that the focus on wage growth favored by economists and policy makers keeps us from examining other aspects of growth, such as consumption and the quality of consumed goods.[2] From his perspective, focusing on real wage growth and the inflated rates of the Consumer Price Index (CPI) tells only half of the story, and it is therefore necessary to turn to consumption data in order to construct a more holistic picture of the economic realities of the below-median-income household.[3] In his view, “low income families have seen important gains in at least some areas of consumption” thanks in part to a steady growth in consumption of 1.7 percent per year since 1960.[4]

Sacerdote adjusted the CPI using the bias corrections developed by Dora Costa and Bruce Hamilton, who had previously worked on similar questions, looking at “the true costs of living” and new ways of estimating “real incomes” in the United States.[5] For the period between 1960 and 2015, he concluded that consumption for those below the median household income increased by 164 percent.[6] A previous consumption measure for the same period, excluding the Costa and Hamilton bias corrections, showed a 62 percent increase.[7] A third measurement, which deflated nominal wages with the Personal Consumption Expenditures (PCE) index over the same period, reversed the claims of wage stagnation advanced by some economists, policy makers, citizens, and labor union advocacy groups: it showed real wage growth of 0.54 percent per year.[8] This contradicts the arguments of data sets such as the “2016 Distressed Community Index” that focus specifically on the increasing gap between rich and poor in the United States.[9]
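The abstract's point that "small biases in any price deflator compound over long periods of time" is simple compound-growth arithmetic, and a toy calculation makes the magnitude concrete. The bias figure below is an assumption chosen for illustration, not a number from the paper:

```python
# If a deflator overstates inflation by a small fixed amount each year,
# the cumulative overstatement of the price level compounds geometrically,
# and measured real wages are understated by the same cumulative factor.
years = 40                # roughly the 1975-2015 window discussed in the paper
annual_bias = 0.008       # assumed 0.8 percentage points per year, for illustration

cumulative = (1 + annual_bias) ** years - 1
print(f"Cumulative overstatement after {years} years: {cumulative:.1%}")
# About 37.5%: a wage series that looks flat under the biased deflator
# actually grew by over a third in real terms.
```

This is why the choice between CPI-U, PCE, and bias-corrected deflators, which differ by well under a percentage point per year, can flip the verdict on whether real wages stagnated over five decades.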

Besides the bias corrections and other measurements, Sacerdote argues that the quality, technology, and durability of current consumption goods are superior to those of previous decades, thereby expanding the relative consumption capacity of those below the median income. For example, he claims that “the number of cars per household has risen from 1 to 1.6 during 1970-2015,” while the median home square footage for this income segment rose about 8 percent over the same period.[10]

His objective of focusing “on growth rates in consumption instead of changes in poverty rates” is achieved by using data, and methodologies for analyzing data, that show the glass as half full; but, as is evident from the working paper, quantitative data can be tailored to fit the researcher’s agenda. Numerous questions surface regarding consumption trends in the United States, leading to the further conclusion that the 164 percent increase of the past fifty-plus years is the result of greater household debt and of cheaper consumer goods tied to the impacts of globalization. Consumer households below the median income continue to steadily consume more, there is no doubt about that, but their wages continue to depreciate while their debt continues to rise. Moreover, globalization has allowed companies to transfer their production overseas, leading to a loss of jobs in the manufacturing sector that potentially offered higher-than-minimum-wage salaries to households below the median income. The transfer of production has at the same time guaranteed cheaper products to these consumers, who are then able to consume more with their lower wages and their greater access to loans, which artificially maintain their consumption capacity while increasing their debt-to-income ratio.

According to the U.S. Census Bureau, the median household income for 2014 was $53,719.[11] This means that half of American households earned less than that amount. This population, which represents the central focus of Sacerdote’s research, currently carries an average household debt of $130,000 (assuming that those earning below the median income are forced to go into debt to maintain their standard of living).[12] The breakdown of this debt shows that mortgages, credit cards, auto loans, and student loans make up most of American debt.[13] This could indicate that the steady consumption increase demonstrated by Sacerdote is actually artificially maintained by the financial system that keeps the American consumer afloat.

Sacerdote’s work could also benefit from qualitative research that would provide more in-depth analysis and at the same time counterbalance his claims on consumer choice and the reliability of the products being consumed. Qualitative research could provide a different explanation as to why low-income consumers have opted to hold on to their vehicles for longer periods of time, how they are able to purchase expensive technology such as cell phones and access services such as internet and cable television, whether indoor plumbing is a sign of a higher quality of life or simply a response to policy and the standardization of construction norms, and whether the increase in housing square footage per household really represents a higher quality of life.

The selectivity of data and research approach in this case clearly benefits the researcher’s argument, but this could quickly be turned around with other sets of data and a different research approach. A focus on credit and debt rates over the same period shifts the argument and leads to completely different conclusions, as would a qualitative analysis of the quality of life of Americans. Although controversial, Sacerdote’s work forces the reader to think more critically about the changes that have taken place in American society over the past fifty-plus years, and raises the question of whether this consumption approach is more reflective of the nation’s economic dependence on consumer spending as a share of GDP.

References

[1] See for example Thomas Piketty’s argument on the increasing gap between rich and poor and the possible threat to capitalism and democratic stability in Capital in the Twenty-First Century. Cambridge, MA: Harvard University Press (2014).

[2] Bruce Sacerdote. “Fifty Years of Growth in American Consumption, Income, and Wages.” National Bureau of Economic Research, working paper series, working paper 23292, March 2017. Accessed April 25, 2017. http://nber.org/papers/w23292, 2.

[3] Ibid.

[4] Ibid., 1-7.

[5] See Dora L. Costa. “Estimating Real Income in the United States from 1888 to 1994: Correcting CPI Bias Using Engel Curves.” Journal of Political Economy 109, no. 6 (2001): 1288-1310, and Bruce W. Hamilton. “The True Cost of Living: 1974-1991.” Working paper in Economics, The Johns Hopkins University Department of Economics, January 1998.

[6] Sacerdote. “Fifty Years of Growth in American Consumption, Income, and Wages,” 2.

[7] Ibid., 1.

[8] Ibid., 3.

[9] “2016 Distressed Community Index: An Analysis of Community Well-Being Across the United States.” Accessed April 25, 2017. http://eig.org/dci/report. See also for example Gillian B. White. “Inequality Between America’s Rich and Poor is at a 30-Year High.” The Atlantic, December 18, 2014. Accessed May 1, 2017. https://www.theatlantic.com/business/archive/2014/12/inequality-between-americas-rich-and-americas-poor-at-30-year-high/383866/.

[10] Sacerdote. “Fifty Years of Growth in American Consumption, Income, and Wages,” 2.

[11] Matthew Frankel. “Here’s the Average American Household Income: How do you Compare?” USA Today November 24, 2016. Accessed May 2, 2017. https://www.usatoday.com/story/money/personalfinance/2016/11/24/average-american-household-income/93002252/

[12] Matthew Frankel. “The Average American Household Owes $90,336 – How do you Compare?” The Motley Fool, May 8, 2016. Accessed May 10, 2017. https://www.fool.com/retirement/general/2016/05/08/the-average-american-household-owes-90336-how-do-y.aspx

[13] Ibid.

A New Take on Sovereign Debt and Gunboat Diplomacy

Going multilateral? Financial Markets’ Access and the League of Nations Loans, 1923-8

By

Juan Flores (The Paul Bairoch Institute of Economic History, University of Geneva) and
Yann Decorzant (Centre Régional d’Etudes des Populations Alpines)

Abstract: Why are international financial institutions important? This article reassesses the role of the loans issued with the support of the League of Nations. These long-term loans constituted the financial basis of the League’s strategy to restore the productive basis of countries in central and eastern Europe in the aftermath of the First World War. In this article, it is argued that the League’s loans accomplished the task for which they were conceived because they allowed countries in financial distress to access capital markets. The League adopted an innovative system of funds management and monitoring that ensured the compliance of borrowing countries with its programmes. Empirical evidence is provided to show that financial markets had a positive view of the League’s role as an external, multilateral agent, solving the credibility problem of borrowing countries and allowing them to engage in economic and institutional reforms. This success was achieved despite the League’s own lack of lending resources. It is also demonstrated that this multilateral solution performed better than the bilateral arrangements adopted by other governments in eastern Europe because of its lower borrowing and transaction costs.

Source: The Economic History Review (2016), 69:2, pp. 653–678

Review by Vincent Bignon (Banque de France, France)

Flores and Decorzant’s paper deals with the achievements of the League of Nations in helping some central and eastern European sovereign states secure market access during the interwar years. The League’s success is assessed by measuring the financial performance of those countries’ loans against that of loans issued by a control group of countries from the same region that did not receive the League’s support. Comparing yields at issue and the fees paid to issuing banks allows the authors to conclude that the League of Nations did a very good job in helping those countries, hence the suggestion in the title to go multilateral.

The authors argue that the loans sponsored by the League of Nations (‘League loans’ hereafter) solved a commitment problem for borrowing governments, which could not credibly signal their willingness to repay. The authors note that the League brought financial expertise to the planning of loan issuance and the negotiation of contract clauses, suggesting that those countries lacked the human capital in their treasuries and central banks. They also describe how the League’s support came with monitoring of the stabilization programme by a special League envoy.


The empirical results show that League loans reduced countries’ risk premia, thus relaxing the borrowing constraint, and sometimes reduced quantity rationing for countries that were unable to issue directly through prestigious private bankers. Yet the interest rates on League loans were much higher than those of comparable US bonds of the same rating, suggesting that the League did not create a free lunch.

Beyond those important points, the paper matters because it deals with a major post-war macro-financial management issue: organizing sovereign loan issuance for failed states whose administrative apparatus was too impoverished by the war to provide basic peacetime functions, such as a stable exchange rate or a fiscal policy with effective tax collection. The authors compare the League’s loans with those of the IMF, but the situation also echoes the unilateral post-WW2 US Marshall Plan. The paper does not study whether the League succeeded in channelling other private funds to those countries on top of the proceeds of the League loans, nor how the funds were used to stabilize the situation.

Punch magazine cartoon on the League of Nations, 10 December 1919

The paper belongs to the recent economic history tradition that seeks explanations for sovereign debt repayment beyond gunboat diplomacy, a tradition to which Juan Flores had previously contributed together with Marc Flandreau. It is also inspired by the literature on institutional fixes used to signal and enforce credible commitment, suggesting that multilateral foreign fixes solved this problem. This detailed study of the financial conditions of League loans adds stimulating material to our knowledge of post-WW1 stabilization plans, building on Sargent (1983) and Santaella (1993). It is also a very nice complement to the pair of papers on multilateral lending to sovereign states by Esteves and Tunçer (2016a, 2016b), which deal with 19th-century-style multilateralism, when the main European powers guaranteed loans to help a few states secure market access, but without founding an international organization.

But the main contribution of the paper, somewhat clouded by the comparison with the IMF, is to raise the question of the functions fulfilled by the League of Nations in the interwar political system. This bigger issue surfaces at two critical points. The first is the choice of the control group, which includes only central and eastern European countries, excluding Germany and France even though both received external funding to stabilize their financial situations at the exact moment of the League’s loans. This brings up a second issue, one of self-selection of countries into the League’s loan programme. Indeed, Germany and France chose not to participate in the League’s scheme despite the fact that both needed a similar type of funding to stabilize their macro situations. The fact that they did not apply for financial assistance means either that they had the qualified staff and the state apparatus to signal their commitment to repay, or that the League’s loans came with too harsh a monitoring and external constraint on financial policy. It is as if the conditions attached to League loans self-selected the good-enough failed states (new states created out of the demise of the Austro-Hungarian Empire) but discouraged more powerful states from applying for the League’s assistance.


If one recalls that the promise of the League of Nations was the preservation of peace, the success of the League’s loan issuance was meagre compared to its failure to preserve Europe from a second major war. This of course echoes the previous research of Juan Flores with Marc Flandreau on the role of financial market microstructure in keeping the world at peace during the 19th century. By that comparison, the League of Nations failed. A successful League, one that emulated Rothschild’s 19th-century role in peacekeeping, would have designed a scheme through which all states in need, France and Germany included, would have borrowed.

This leads one to wonder what function the League's political brokers assigned to its program of financial assistance. Like the IMF, the League was only able to design a scheme attractive to countries that had no allies ready or strong enough to help them secure market access. And why did the UK and the US choose to channel funds through the League rather than lend directly? Clearly they needed the League as a delegated agent. Does that mean that the League was another form of money doctoring, or that it acted as a coalition of powerful countries, made up of those too weak to lend and those rich but without enforcement power? This interpretation is consistent with the authors' view that “the League (…) provided arbitration functions in case of disputes.”

In sum, the paper opens new connections with the political science literature on important historical issues dealing with the design of international organizations able to provide public goods such as peace, and not just to help the (strategically) failed states.

References

Esteves, R. and Tunçer, A. C. (2016a) “Feeling the blues: Moral hazard and debt dilution in eurobonds before 1914”, Journal of International Money and Finance, 65, pp. 46-68.

Esteves, R. and Tunçer, A. C. (2016b) “Eurobonds past and present: A comparative review on debt mutualization in Europe”, Review of Law & Economics (forthcoming).

Flandreau, M. and Flores, J. (2012) “The peaceful conspiracy: Bond markets and international relations during the Pax Britannica”, International Organization, 66, pp. 211-41.

Santaella, J. A. (1993) “Stabilization programs and external enforcement: Experience from the 1920s”, IMF Staff Papers, 40, pp. 584-621.

Sargent, T. J. (1982) “The ends of four big inflations”, in R. E. Hall, ed., Inflation: Causes and Effects (Chicago: University of Chicago Press), pp. 41-97.

Keynes and Actual Investment Decisions in Practice

Keynes and Wall Street

By David Chambers (Judge Business School, Cambridge University) and Ali Kabiri (University of Buckingham)

Abstract: This article examines in detail how John Maynard Keynes approached investing in the U.S. stock market on behalf of his Cambridge College after the 1929 Wall Street Crash. We exploit the considerable archival material documenting his portfolio holdings, his correspondence with investment advisors, and his two visits to the United States in the 1930s. While he displayed an enthusiasm for investing in common stocks, he was equally attracted to preferred stocks. His U.S. stock picks reflected his detailed analysis of company fundamentals and a pronounced value approach. Already in this period, therefore, it is possible to see the origins of some of the investment techniques adopted by professional investors in the latter half of the twentieth century.

Source: Business History Review (2016), 90(2,Summer), pp. 301-328 (Free access from October 4 to 18, 2016).

Reviewed by Janette Rutterford (Open University)

This short article looks at Keynes’ purchases of US securities in the period from after the Wall Street Crash until World War II. The investments the authors discuss are not Keynes’ personal investments but are those relating to the discretionary fund (the ‘Fund’) which formed part of the King’s College, Cambridge endowment fund and which was managed by Keynes. The authors rely for their analysis on previously unused archival material: the annual portfolio holdings of the endowment fund; the annual report on discretionary fund performance provided by Keynes to the endowment fund trustees; correspondence between Keynes and investment experts; and details of two visits by Keynes to the US in 1931 and 1934.


The authors look at various aspects of the investments in US securities made by Keynes. They first note the high proportion of equities in the endowment fund as a whole. They then focus in detail on the US holdings, which averaged 33% by value of the Fund during the 1930s. They find that Keynes invested heavily in preferred stock, which he believed had suffered relatively more than ordinary shares in the Wall Street Crash, particularly where the preference dividends were in arrears. He concentrated on particular sectors – investment trusts, utilities and gold mining – which were all trading at discounts to underlying value, related either to the amount of leverage or to the price of gold. He also made some attempts at timing the market with purchases and sales, though the available archival data on this is limited. The remainder of the paper explores the type of investment advice Keynes sought from brokers, and from the finance specialists and politicians he met on his US visits. The authors conclude that he used outside advice to supplement his own views and that, as far as the Fund's investment in US securities was concerned, he acted as a long-term investor, making targeted, value investments rather than ‘following the herd’.

This paper adds a small element to an area of research which is as yet in its infancy: the analysis of actual investment decision-making in practice, and the evolution of investment strategies over time. In terms of strategies, Keynes used both value investing and, to a lesser extent, market timing for the Fund. Keynes was influenced by Edgar Lawrence Smith's 1924 book Common Stocks as Long Term Investments, which recommended equity investment over bond investment on the basis of total returns (dividends plus retained earnings) rather than just dividend yield, the then-common equity valuation method. Keynes appears not to have known Benjamin Graham but came to the same conclusion – namely that, post Wall Street Crash, value investing would lead to outperformance. He experimented with market timing in his own personal portfolio but only to a limited extent in the Fund. He was thus an active investor, tilting his portfolio away from the market by ignoring both US and UK railway and bank securities. Another fascinating aspect which is only touched on in this paper is the quality of investment advice at the time. How does it stack up against current broker research?


The paper highlights the fact that issues which are still not settled today were already a concern before WWII. Should you buy the market or try to outperform? What is the appropriate benchmark portfolio against which to judge an active strategy? How should performance be reported to the client (in this case the trustees) and how often? How can one decide how much outperformance comes from the asset allocation choice of shares over bonds, from the choice of a particular sector, at a particular time, whilst making allowance for forced cash outflows or sales such as occurred during WWII? More research on how these issues were addressed in the past will better inform the current debate.

Lessons from ‘Too Big to Fail’ in the 1980s

Can a bank run be stopped? Government guarantees and the run on Continental Illinois

Mark A Carlson (Bank for International Settlements) and Jonathan Rose (Board of Governors of the Federal Reserve)

Abstract: This paper analyzes the run on Continental Illinois in 1984. We find that the run slowed but did not stop following an extraordinary government intervention, which included the guarantee of all liabilities of the bank and a commitment to provide ongoing liquidity support. Continental’s outflows were driven by a broad set of US and foreign financial institutions. These were large, sophisticated creditors with holdings far in excess of the insurance limit. During the initial run, creditors with relatively liquid balance sheets nevertheless withdrew more than other creditors, likely reflecting low tolerance to hold illiquid assets. In addition, smaller and more distant creditors were more likely to withdraw. In the second and more drawn out phase of the run, institutions with relative large exposures to Continental were more likely to withdraw, reflecting a general unwillingness to have an outsized exposure to a troubled institution even in the absence of credit risk. Finally, we show that the concentration of holdings of Continental’s liabilities was a key dynamic in the run and was importantly linked to Continental’s systemic importance.

URL: http://EconPapers.repec.org/RePEc:bis:biswps:554

Distributed on NEP-HIS 2016-4-16

Review by Anthony Gandy (ifs University College)

I have to thank Bernardo Batiz-Lazo for spotting this paper and circulating it through NEP-HIS; my interest in it is less research-focused than teaching-focused. Having the honour of teaching bankers about banking, I am sometimes asked questions which I find difficult to answer. One such question has been: why are inter-bank flows seen as less volatile than consumer deposits? In this very accessible paper, Carlson and Rose answer this question by analysing the reality of a bank run, looking at the raw data from the treasury department of a bank which did indeed suffer one: Continental Illinois, which became the biggest banking failure in US history when it collapsed in 1984.


For the business historian, the paper may lack a little character, as it rather skimps over the causes of Continental's demise, though these have been covered by many others, including the Federal Deposit Insurance Corporation (1997). The paper briefly explains the problems Continental faced in building a large portfolio of assets in both the oil and gas sector and in developing nations in Latin America. A key factor in the failure of Continental in 1984 was the 1982 failure of the small Penn Square Bank of Oklahoma. Cushing, Oklahoma is, quite literally, the hub (and one-time bottleneck) of the US oil and gas sector; the massive storage facility there became the settlement point for the pricing of West Texas Intermediate (WTI), also known as Texas light sweet, oil. Penn Square focused on the oil sector and sold assets to Continental, according to the FDIC (1997), to the tune of $1bn. Confidence in Continental was further eroded by the default of Mexico in 1982, which undermined the perceived quality of its emerging market assets.

Depositors queuing outside the insolvent Penn Square Bank (1982)

In 1984 the failure of Penn Square translated into the failure of the seventh-largest bank in the US, Continental Illinois. This was a great illustration of contagion, but contagion which was contained by the central authorities and, earlier, by a panel of supporting banks. Many popular articles on Continental do an excellent job of explaining why its assets deteriorated and then vaguely discuss the concept of contagion. The real value of the paper by Carlson and Rose comes from their analysis of the liability side of the balance sheet (sections 3 to 6 of the paper). Carlson and Rose take great care in detailing the make-up of those liabilities and the behaviour of different groups of liability holders. For instance, initially during the crisis 16 banks announced that they were advancing $4.5bn in short-term credit. But as the crisis wore on, the regulators (the Federal Deposit Insurance Corporation, the Federal Reserve and the Office of the Comptroller of the Currency) were required to step in and provide a wide-ranging guarantee. This was essential, as the bank had few small depositors who could rely on the then $100,000 depositor guarantee scheme.


It would be very easy to pause and take in the implications of table 1 in the paper. It shows that on 31 March 1984 Continental had a most remarkable liability structure. With $10.0bn of domestic deposits, it funded most of its book through $18.5bn of foreign deposits, together with smaller amounts of other wholesale funding. However, the research conducted by Carlson and Rose shows that the intolerance of international lenders did become a factor, but it was only one of a number of effects. In section 6 of the paper they look at the impact of funding concentration. The largest ten depositors funded Continental to the tune of $3.4bn and the largest 25 to $6bn, or 16% of deposits. Half of these were foreign banks and the rest were split between domestic banks, money market funds and foreign governments.

Initially, `run off’, from the largest creditors was an important challenge. But this was related to liquidity preference. Those institutions which needed to retain a highly liquid position were quick to move their deposits out of Continental. One could only speculate that these withdrawals would probably have been made by money market funds. Only later, in a more protracted run off, which took place even after interventions, does the size of the exposure and distance play a disproportionate role. What is clear is the unwillingness of distant banks to retain exposure to a failing institution. After the initial banking sector intervention and then the US central authority intervention, foreign deposits rapidly decline.

It is a detailed study, one which can be used to illustrate to students both issues of liquidity preference and the rationale for the structure of the new prudential liquidity ratios, especially the Net Stable Funding Ratio. It can also be used to illustrate the problems of concentration risk – though I would enliven the discussion with the more colourful experience of Penn Square Bank, a bank famed for drinking beer out of cowboy boots!

References

Federal Deposit Insurance Corporation (1997) “Continental Illinois and ‘Too Big to Fail’”, chapter 7 in History of the Eighties: Lessons for the Future, Volume 1. Available online at: https://www.fdic.gov/bank/historical/history/vol1.html

More general reads on Continental and Penn Square:

Huber, R. L. (1993) “How Continental Bank outsourced its ‘crown jewels’”, Harvard Business Review, 71(1), pp. 121-129.

Aharony, J. and Swary, I. (1996) “Additional evidence on the information-based contagion effects of bank failures”, Journal of Banking & Finance, 20(1), pp. 57-69.

Society? Economics? Politics? Personality? What causes inequality?

What Drives Inequality?

by Jon D. Wisman (American University)

Abstract: Over the past 40 years, inequality has exploded in the U.S. and significantly increased in virtually all nations. Why? The current debate typically identifies the causes as economic, due to some combination of technological change, globalization, inadequate education, demographics, and most recently, Piketty’s claim that it is the rate of return on capital exceeding the growth rate. But to the extent true, these are proximate causes. They all take place within a political framework in which they could in principle be neutralized. Indeed, this mistake is itself political. It masks the true cause of inequality and presents it as if natural, due to the forces of progress, just as in pre-modern times it was the will of gods. By examining three broad distributional changes in modern times, this article demonstrates the dynamics by which inequality is a political phenomenon through and through. It places special emphasis on the role played by ideology – politics’ most powerful instrument – in making inequality appear as necessary.

Source: http://EconPapers.repec.org/RePEc:amu:wpaper:2015-09

Distributed by NEP-HIS on 2015-10-04

Reviewed by Mark J Crowley

This paper explores an issue that is topical in current political discourse, in which the debate has largely fallen into two major camps. First, the Conservative argument, stretching back to Margaret Thatcher in Britain (and simultaneously championed by Ronald Reagan and Charles Murray in the USA), held that inequality was good and accepted by the populace as a way of categorising and organising the nation: those at the lower end of society would be inspired to work harder as a means of lessening their inequality. The second argument, which has experienced a resurgence in the UK following the election of the left-wing veteran Jeremy Corbyn to the leadership of the opposition Labour Party, is that inequality is an evil in society that punishes the poor for their poverty. On this view the rich, who have the broadest shoulders, should bear the heaviest burden in times of hardship; austerity should not hit the poorest in society hardest, and a political solution should be sought to ensure a fairer distribution of wealth in favour of the poorest. Similar arguments have been made in the US by proponents of increased state welfare. It is in this context that the debates highlighted in this paper should be seen.

Thatcher and Reagan were the major architects of a change in economic policy away from state welfare.

This meticulously researched article demonstrates that inequality as a phenomenon has long roots. Noting that inequality has been virtually omnipresent since the dawn of civilisation, Wisman couches the argument within the wider organisation and economic hierarchy of society. Building on Simon Kuznets's argument that inequality widens in the early stages of economic development but subsequently stabilises, he looks at factors beyond economics that contribute to growing inequality in society. The heavy focus on political literature examining the impact of politics on rising inequality is especially interesting, and takes this paper beyond the traditional Marxist arguments that have often been proposed about the failures and flaws of capitalism. Other arguments, such as the impact of the industrial revolution, are explored in detail and shown to be significant factors in defining inequality. This runs as a counter-exploration to the work of Nick Crafts, who has explored the extent to which the industrial revolution, especially in Britain, was ‘successful’.

Despite the arguments and debates about why inequality exists, there still appears to be no conclusive answer about its cause.

Ideology is also explored in detail. Explanations for inequality have often carried ideological labels, with some offering proposals for eradicating inequality, while others propose that individuals, and not society, should change in order to reverse the trend. The latter was forcefully proposed by Margaret Thatcher and Milton Friedman, whereas the former was commonly the battle-cry of post-war socialist-leaning parties (most notably Britain's Labour Party, largely out of power in the post-war period with the exception of 1945-51 and brief spells in the 1970s).

The religious argument about helping people who are less fortunate than yourself has now become more tenuous in favour of using religion as a form of legitimizing inequality.

The exploration of religion as a factor is also particularly interesting. Wisman argues that providing state institutions with religious foundations legitimises their status, and thereby ensures that inequality has a stronger place in society. This point, while contentious, has been alluded to in previous literature, but has not been explored in great depth. The section on religion in this paper is also small, although such is its significance that I am sure the author would seek to expand on it in a later draft.

Critique

This paper is wide-ranging, and shows the large number of factors that have contributed to inequality in the western world, especially the USA. It highlights the fact that the arguments concerning inequality are more complex than has perhaps previously been assumed. Arguing that politics and economics are intertwined, it makes an effective case that a synthesis of the two disciplines is required in order to address the issue of inequality and reduce the gap between rich and poor in society.

I found this article absolutely fascinating, and I can offer very little in terms of suggestions for improvement. One aspect did come to mind, however: the impact of inequality on individual and collective advancement. Perhaps this would take the research off on a tangent too far from the author's original focus, but the issue that sprang to mind for me was the impact of the inequality mentioned by the author on aspects such as educational attainment and future employment opportunities. For example, in the UK the major debate for decades has been the apparent disparity between the numbers of state-school and privately-educated students attending the nation's elite universities, namely Oxbridge. Arguments have often centred on the assumption that private, fee-paying schools are perceived to be better in terms of educational quality, and thus admissions officers disproportionately favour these students when they apply to university. While official figures show that Oxbridge admits a higher proportion of state-school students than their privately-educated counterparts, this ignores the fact that over 90% of British students are educated in the state system. Furthermore, so the argument goes, those with an elite education then attain the highest-paying jobs and occupy the highest positions in society, generating the argument that positions in the judiciary and politics are not representative of the composition of society. These are complex arguments, and this paper alludes to many of the points concerning the origins of inequality that underpin them. Perhaps a future direction for this research would be to take the models highlighted and apply them to particular examples in society to test their validity.
