Category Archives: USA

Technology and Financial Inclusion in North America

Did Railroads Make Antebellum U.S. Banks More Sound?

By Jeremy Atack (Vanderbilt), Matthew Steven Jaremski (Colgate), and Peter Rousseau (Vanderbilt).

Abstract: We investigate the relationships of bank failures and balance sheet conditions with measures of proximity to different forms of transportation in the United States over the period from 1830-1860. A series of hazard models and bank-level regressions indicate a systematic relationship between proximity to railroads (but not to other means of transportation) and “good” banking outcomes. Although railroads improved economic conditions along their routes, we offer evidence of another channel. Specifically, railroads facilitated better information flows about banks that led to modifications in bank asset composition consistent with reductions in the incidence of moral hazard.


Review by Bernardo Bátiz-Lazo

Executive briefing

This paper was distributed by NEP-HIS on 2014-04-18. Atack, Jaremski and Rousseau (henceforward AJR) deal with the otherwise thorny issue of causation in the relationship between financial intermediation and economic growth. They focus on bank-issued notes rather than deposits, and argue for and provide empirical evidence of bi-directional causation based on empirical estimates that combine geographic (i.e. GIS) and financial data. The nature of their reported causation emerges from their approach to railroads as a transport technology that shapes markets while also being shaped by its users.


In this paper AJR study the effect of improved means of communication on market integration and, particularly, whether banks in previously remote areas of pre-Civil War USA had an incentive to overextend their liabilities. AJR’s paper is an important contribution: first, because they focus on bank-issued notes and bills rather than deposits to understand how banks financed themselves. Second, because of the dearth of systematic empirical testing of whether improvements in the means of communication affected the operation of banks.


In 19th century North America, and in the absence of a central bank, notes from local banks were substitutes for one another and for payment in specie. Banks in the most remote communities (i.e. with little or no oversight) had an opportunity to misbehave “in ways that compromised the positions of their liability holders” (behaviour which AJR label “quasi-wildcatting”). Railroads, canals and boats connected communities and enabled better trading opportunities. But ease of communication also meant greater potential for oversight.


AJR test bank failure rates (banks that did not redeem notes at full value), bank closures (banks that ceased operation but redeemed at full value), new bank formation and balance sheet management for 1,818 banks in existence in the US, in 5-year increments between 1830 and 1862. Measures of distance between bank locations and forms of communication (i.e. railroads, canals, steam-navigable rivers, navigable lakes and maritime trade) emerged from overlaying contemporary maps with GIS data. Financial data were collected from annual editions of the “Merchants and Bankers’ Almanac”. They distinguish between states that passed “free banking laws” (from 1837 to the early 1850s) and those that did not. They also considered changes in failure rates and balance sheet variance (applying the so-called CAMEL model, to the extent data availability allowed) for locations that had issuing banks before new transport infrastructure arrived and those where banks appeared only after new means of communication were deployed:

Improvements in finance over the period also provided a means of payment that promoted increasingly impersonal trade. To the extent that the railroads drew new banks closer to the centers of economic activity and allowed existing banks to participate in the growth opportunities afforded by efficient connections. (p. 2)


Railroads were the only transport technology that returned statistically significant effects, suggesting that the advent of railroads did indeed push bankers to reduce the risk in their portfolios. But regardless of transport variables, “[l]arger banks with more reserves, loans, and deposits and fewer bank notes were less likely to fail.” (p. 20). It is thus likely that railroads impacted banks’ operations as they brought about greater economic diversity, urbanisation and other measures of economic development, which translated into a larger volume of deposits but also greater scrutiny and oversight. In this sense railroads (as an exogenous variable) made banks less likely to fail.
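The hazard-model machinery behind these results can be illustrated with a discrete-time (logit) hazard of bank failure on covariates such as distance to the nearest railroad. This is a minimal sketch under assumed variable names, not AJR’s actual specification:

```python
import numpy as np
from scipy.optimize import minimize

def discrete_hazard_fit(X, failed):
    """Fit a discrete-time hazard (logit) model: the probability that a bank
    fails in a given period as a function of covariates, e.g. distance to
    the nearest railroad. X is an (n, k) array of bank-period covariates,
    'failed' a 0/1 array. Illustrative stand-in for the paper's hazard models;
    the covariate names and functional form are assumptions, not AJR's."""
    X1 = np.column_stack([np.ones(len(X)), X])  # add an intercept column

    def negloglik(beta):
        z = X1 @ beta
        # negative log-likelihood of the logit model, written stably
        return np.sum(np.logaddexp(0.0, z)) - failed @ z

    res = minimize(negloglik, np.zeros(X1.shape[1]), method="BFGS")
    return res.x  # [intercept, coefficient on each covariate]
```

In AJR’s setting each observation would be a bank-period; a positive coefficient on railroad distance would then mean that banks farther from a railroad were more likely to fail.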

But AJR note that means of transportation were not necessarily exogenous to banks. Reasons for the endogeneity of transport infrastructure included bankers promoting and investing in railroads to bring them to their communities. Railways could also find it advantageous to expand into economically vibrant locations (where new banks could establish themselves to capture a growing volume of deposits and serve a growing demand for loans).

Other empirical results show that banks decreased their excess reserves, notes in circulation and bond holdings, while increasing the volume of loans, after the arrival of a railroad. In short, treating railroads as an endogenous variable also results in transport technologies lowering bank failure rates by encouraging banks to operate more safely.


The work of AJR is part of a growing and increasingly fruitful trend which combines GIS data with other, more “traditional” sources. But for me the paper could also inform contemporary debates on payments. Specifically, their focus is on banks of issue, in itself a novelty in the history of payment systems. For AJR, technological change improves the means of payment when it reduces transaction costs by increasing trust in the issuer. But as noted above, there are a number of alternative technologies which have, in principle, an equal opportunity to succeed. In this regard AJR state:

Here, we describe a mechanism by which railroads not only affected finance on the extensive margin, but also led to efficiency changes that enhanced the intensity of financial intermediation. And, of course, it is the interaction of the intensity of intermediation along with its quantity that seems most important for long-run growth (Rousseau and Wachtel 1998, 2011). This relationship proves to be one that does not generalize to all types of transportation; rather, railroads seem to have been the only transportation methods that affected banks in this way. (p. 4)

In other words, financial inclusion and improvements in the payment system interact and enhance economic growth when the former takes place through specific forms of technological change. It is the interaction with users that helps railroads to dominate and effectively change the payments system. Moreover, this process involves changes in the portfolio (and overall level of risk) of individual banks.

The idea that users shape technology is not new to those well versed in the social studies of technology. However, AJR’s argument is novel not only for the study of the economic history of Antebellum America but also when considering that in today’s complex payments ecosystem there are a number of alternatives for digital payments, many of which are based on mobile phones. Yet it would seem that there is greater competition between mobile phone apps than between mobile and other payment solutions (cash and coins, Visa/Mastercard credit cards, PayPal, Bitcoin and other digital currencies, etc.). AJR’s results would then suggest that, ceteris paribus, the technology with the greatest chance to succeed is the one with strong bi-directional causality (i.e. significant exogenous and endogenous features). People’s love for smartphones would thus suggest that mobile payments have a greater chance of changing the payment ecosystem than digital currencies (such as Bitcoin), but it is early days to decide which of the different mobile apps has the greatest chance of actually doing so.

Wall Street (1867)


Another aspect in which AJR’s paper has a contemporary slant refers to security and trust. These are key issues in today’s digital payments debate, yet the possibility of fraud is absent from AJR’s narrative. By this I mean not “wildcatting” but ascertaining whether notes of a trustworthy bank could have been forged. I am not clear how to capture this phenomenon empirically. It is also unlikely that the volume of forged notes of any one trusted issuer was significant. But the point, as Patrice Baubeau (IDHES-Nanterre) has noted, is that in the 19th century the technological effort required for fraud was rather simple: a small furnace or a printing press. Today that effort is n-times more complex.

AJR also make the point that changes in the payments ecosystem are linked to bank stability and the fragility of the financial system. This is an argument that often escapes those discussing the digital payments debate.

Overall, it is a short but well put together paper. It does what it says on the tin, and is thus highly recommended reading.

Does Bank Competition Lead to Higher Growth?

Bank Deregulation, Competition and Economic Growth: The US Free Banking Experience

By Philipp Ager (University of Southern Denmark) and Fabrizio Spargoli (Erasmus University Rotterdam)


We exploit the introduction of free banking laws in US states during the 1837-1863 period to examine the impact of removing barriers to bank entry on bank competition and economic growth. As governments were not concerned about systemic stability in this period, we are able to isolate the effects of bank competition from those of state implicit guarantees. We find that the introduction of free banking laws stimulated the creation of new banks and led to more bank failures. Our empirical evidence indicates that states adopting free banking laws experienced an increase in output per capita compared to the states that retained state bank chartering policies. We argue that the fiercer bank competition following the introduction of free banking laws might have spurred economic growth by (1) increasing the money stock and the availability of credit; (2) leading to efficiency gains in the banking market. Our findings suggest that the more frequent bank failures occurring in a competitive banking market do not harm long-run economic growth in a system without public safety nets.


Circulated by NEP-HIS on: 2013-12-29

Review by Natacha Postel-Vinay

In this paper, Philipp Ager (University of Southern Denmark) and Fabrizio Spargoli (Erasmus University Rotterdam) ask two very topical questions. Does increased bank competition lead to higher economic growth? And, if so, how? Following the recent crisis, many have wondered whether the alternative to “too-big-to-fail” — having many smaller banks competing with each other — would necessarily be a better one. Clouding the debate has been the difficulty of finding appropriate historical settings in which to test the hypothesis that more competition leads to greater growth. In their paper, Ager and Spargoli focus on what they consider the best instance of intense bank competition without any implicit government bail-out guarantee: the American free banking era.

Between 1837 and 1863 new laws were passed in a number of states allowing just about anyone to set up a bank, with very few requirements to fulfill. Until then, banks wanting to open needed a charter from their state, for which they had to meet relatively stringent criteria. As the authors show using a new quantitative analytical framework, the new laws greatly increased the creation of new banks in the states which passed them. As competition increased, however, a higher proportion of banks ended up failing. Could it still be the case that the introduction of free banking laws led to greater growth in those states?

A satire on Andrew Jackson’s campaign to destroy the Bank of the United States and its support among state banks, 1836. It was partly to fill this gap that some states allowed free banking.

The paper’s most important finding is that increasing competition among banks did lead to higher economic growth. Jaremski and Rousseau’s 2012 paper (previously reviewed in NEP-HIS here) found that a new “free” bank, as opposed to a charter bank, did not have a positive effect on the local economy. While this is an important finding in itself, it is also important to look at the effect of the introduction of free banking laws on aggregate bank behaviour, if only because the new entry of free banks may alter the willingness of charter banks to enter the market and their behaviour once in the market. Charter banks’ behaviour may in turn alter free banks’ behaviour, and so on. Interestingly, Ager and Spargoli’s study finds that in the aggregate, the acceleration in bank entry and resulting greater competition among all types of banks had a positive effect on economic growth.

To arrive at this conclusion, the authors are careful to include a number of controls. First, there is the possibility that growth opportunities led some states to adopt free banking laws, in which case the authors would face a reverse-causality problem. Hence they conduct a county-level analysis in which they include time-invariant county characteristics and state-specific linear output trends (although perhaps it would be nice to see these output trends going further back in time than 1830). Second, they also control for other laws that states might have introduced at the same time as the free banking ones, which could potentially bias the results. Finally, they control for unobserved heterogeneity between states by examining contiguous counties lying on the border of states that introduced free banking. Their results are robust to these different specifications.

Private Bank Note, Drover’s Bank, Salt Lake City, Utah, $3, 1856


Ager and Spargoli are of course also interested in where this growth came from. They find a positive relationship between the introduction of free banking laws and lending, and conclude that one of the main channels through which this increase in growth occurred was the increase in the availability of credit that greater competition fostered. This story is consistent with the finance-growth nexus literature, which argues that greater (and safer) access to credit is conducive to economic development.

Although this seems perfectly reasonable, it would perhaps have been interesting to see when most of the failures occurred. If, for instance, they mainly occurred towards the end of the period under study, say around the 1857 panic, then it is possible that the negative effects of such failures on subsequent growth would not have been picked up by this study, since it ends in 1860. This leaves open the possibility that the positive relationship between free banking and increased access to credit was not a beneficial one for the economy in the long run. Loan growth (and asset growth more generally) is not always a good thing, as the recent crisis has tended to show.

Private Bank Note, Mechanics Bank, Tennessee, $10, 1854


Overall however, Ager and Spargoli’s paper asks a very important question and offers a solid analysis. A natural next step would be to include output data on the periods preceding and following the free banking era, although the occurrence of the Civil War is an obvious obstacle to this study.



Jaremski, M., and P. L. Rousseau (2012): “Banks, Free Banks, and U.S. Economic Growth,” Economic Inquiry, 51(2), 1603–21.

“A fluid, ever-evolving, and organic process of improvement, misstep and improvement”: The Long Road to Monetary Union in the USA

Politics on the Road to the U. S. Monetary Union

By Peter L. Rousseau (Vanderbilt University)


Abstract: Is political unity a necessary condition for a successful monetary union? The early United States seems a leading example of this principle. But the view is misleadingly simple. I review the historical record and uncover signs that the United States did not achieve a stable monetary union, at least if measured by a uniform currency and adequate safeguards against systemic risk, until well after the Civil War and probably not until the founding of the Federal Reserve. Political change and shifting policy positions end up as key factors in shaping the monetary union that did ultimately emerge.

Review by Manuel Bautista Gonzalez

Peter L. Rousseau


In this piece published in NEP-HIS 2013-04-13, Peter Rousseau argues for the need to complicate the widely-held, simplistic view that political union is a necessary condition for a successful monetary union. By studying the intersection of politics, money and finance in the United States from the American Revolution to the Great Depression, Rousseau posits that there is no automatic mechanism to ensure the concurrence of political and monetary union.

Although Rousseau begins his paper with a rather narrow definition of monetary union as a “system with a uniform currency and adequate safeguards against systemic risk” (Rousseau 2013: 1), he expands it throughout the paper to take into account other characteristics and consequences of processes of monetary unification.

The most obvious element of monetary union is the adoption of a single unit of account and a uniform currency throughout a territory. To function, a monetary union requires credible authorities with effective powers to control the supply of money while properly backing liabilities to minimize uncertainty. To reduce transaction costs, monetary union also demands institutional arrangements between the government and the banking system as a private supplier of means of payment. With monetary union, short-term and long-term capital markets become part of a payments system whereby, due to network effects, the reduction of borrowing costs in normal conditions can give way to rapidly-spreading liquidity squeezes in times of financial distress. To recapitulate, monetary union has a dual, difficult nature, for it requires the virtuous alignment of public and private interests; hence politics will mold, for better or for worse, the actual operation of any monetary union.


Patents, Super Patents and Innovation at Regional Level

Related Variety, Unrelated Variety and Technological Breakthroughs: An analysis of U.S. state-level patenting

By Carolina Castaldi (School of Innovation Sciences, Eindhoven University of Technology), Koen Frenken (School of Innovation Sciences, Eindhoven University of Technology) and Bart Los (Groningen Growth and Development Centre)



We investigate how variety affects the innovation output of a region. Borrowing arguments from theories of recombinant innovation, we expect that related variety will enhance innovation as related technologies are more easily recombined into a new technology. However, we also expect that unrelated variety enhances technological breakthroughs, since radical innovation often stems from connecting previously unrelated technologies opening up whole new functionalities and applications. Using patent data for US states in the period 1977-1999 and associated citation data, we find evidence for both hypotheses. Our study thus sheds a new and critical light on the related-variety hypothesis in economic geography.

Review by Anna Missiaia

This paper by Carolina Castaldi, Koen Frenken and Bart Los was distributed by NEP-HIS on 30-03-2013. The paper is not, strictly speaking, an economic or business history paper. However, it provides some very interesting insights into how technological innovation and technological breakthroughs happen. This is a large and expanding field in economic history, and on-going research on the economics of innovation can, I believe, be of interest to many of our readers.

Professor Butts and the Self-Operating Napkin: The “Self-Operating Napkin” is activated when the soup spoon (A) is raised to mouth, pulling string (B) and thereby jerking ladle (C) which throws cracker (D) past parrot (E). Parrot jumps after cracker and perch (F) tilts, upsetting seeds (G) into pail (H). Extra weight in pail pulls cord (I), which opens and lights automatic cigar lighter (J), setting off skyrocket (K) which causes sickle (L) to cut string (M) and allow pendulum with attached napkin to swing back and forth, thereby wiping chin. (Rube Goldberg, 1918).

The paper is concerned with how innovation in a region is affected by the connections among its sectors in terms of shared technological competences. The term “variety” conveys this concept. The authors distinguish two types of variety: related and unrelated. The former describes the connection among sectors that are complementary in terms of competences and can easily exchange technological knowledge. Unrelated variety, on the other hand, stems from sectors that do not appear to have complementary technologies.

It is useful to keep these two types of variety separate because of their different effects on innovation. Related variety supports productivity and employment growth at the regional level. Unrelated variety, however, is what causes technological breakthroughs, as it brings a completely new type of technology into a sector. In a subsequent stage, unrelated variety becomes related, as it is absorbed by the new sector.

The paper keeps these two types of variety separate and tests for their effects. The authors use patent data for US states in the period 1977-1999. The methodology involves regressing the number of patents, as a proxy for innovation, on measures of related variety, unrelated variety, research and development investment, a time trend and state fixed effects. Variety is measured by looking at the dispersion of patents within and between technological classes. The paper also proposes two different regressions, one using the total number of patents as the dependent variable and one using the share of superstar patents, which represent patents that lead to breakthrough technologies. Superstar patents are distinguished from “regular” patents according to the distribution of their citations: superstar patents have a fat tail, meaning that they are cited more in later stages of their development than regular patents.
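The within/between dispersion measure can be illustrated with a standard entropy decomposition, the usual workhorse in this literature: unrelated variety as the entropy between broad technological classes, related variety as the weighted entropy of subclasses within them. This is a minimal sketch, not the authors’ actual code; the two-level (major class, subclass) grouping stands in for the patent classification they use.

```python
import math
from collections import Counter

def variety_measures(patent_classes):
    """Decompose the entropy of a region's patent-class distribution into a
    between-group component (unrelated variety) and a within-group component
    (related variety). Each patent is a (major_class, subclass) pair."""
    n = len(patent_classes)
    # share of patents in each (major, sub) cell, and in each major class
    sub_shares = {k: v / n for k, v in Counter(patent_classes).items()}
    major_shares = Counter()
    for (major, _), s in sub_shares.items():
        major_shares[major] += s
    # unrelated variety: entropy across major technological classes
    unrelated = sum(-p * math.log2(p) for p in major_shares.values())
    # related variety: share-weighted entropy of subclasses within each class
    related = 0.0
    for major, pg in major_shares.items():
        within = [s / pg for (m, _), s in sub_shares.items() if m == major]
        related += pg * sum(-q * math.log2(q) for q in within if q > 0)
    return related, unrelated
```

For example, a state whose patents are spread evenly across two unconnected major classes scores high on unrelated variety, while one whose patents are spread across many subclasses of a single major class scores high on related variety.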

A nice contribution of this paper is to identify superstar patents through the statistical distribution of their citations instead of relying on superimposed criteria such as being in the top 1% or 5% of citations. The idea here is to distinguish between general innovation (regular patents) and breakthrough innovation (superstar patents). Theory predicts that regular patents will be positively affected by related variety, producing general innovation, while superstar patents will be positively correlated with unrelated variety, producing breakthrough innovation. The empirical analysis nicely confirms the theory.

Technological progress is said to resemble a flight of stairs

The possible shortcomings of the paper are related to the role of geography in the analysis. The sample is at the US state level and the underlying implication is that variety in a state affects the number of patents registered in it. There could be, under this assumption, some issues of spatial dependence. The authors touch upon this point in two parts of the paper: in the methodology section they explain that superstar patents tend to cluster in fewer states than general patents, and that this pattern requires a different approach for the two types of patents. It would be useful if the authors could elaborate on this issue further in a future version of the paper.

As for the possible spatial dependence effect among explanatory variables, the authors try to control for the fact that R&D in one state could affect the patent output of neighboring states as well. They construct an adjacency matrix to capture the effect of the R&D effort of neighboring states.

The conclusion is that the analysis is robust to spatial dependence. In spite of this robustness check, some concerns remain. Restricting the R&D effect to neighboring states could be a limitation, as the effect might travel not only through physical proximity but also through other types of connections: for example, the same firm could have branches in different non-adjacent states, leading to an influence not captured by the adjacency matrix.

In short, this paper provides very interesting insights into how two types of innovation can arise, as measured by patent citations at the regional level. The results are consistent with the theory and could be useful to future research in historical perspective. A further improvement would be to conduct more robustness checks on the geographical aspects of these results, especially expanding them to non-adjacent states.

Images of the future technology – The Jetsons, 1962

Fiscal Policy during high unemployment periods: still a bad idea?

Are Government Spending Multipliers Greater During Periods of Slack? Evidence from 20th Century Historical Data

Michael T. Owyang, Valerie A. Ramey, Sarah Zubairy


A key question that has arisen during recent debates is whether government spending multipliers are larger during times when resources are idle. This paper seeks to shed light on this question by analyzing new quarterly historical data covering multiple large wars and depressions in the U.S. and Canada. Using an extension of Ramey’s (2011) military news series and Jordà’s (2005) method for estimating impulse responses, we find no evidence that multipliers are greater during periods of high unemployment in the U.S. In every case, the estimated multipliers are below unity. We do find some evidence of higher multipliers during periods of slack in Canada, with some multipliers above unity.


Review by Sebastian Fleitas

For a very long time the size of expenditure multipliers has been the subject of one of the liveliest debates in economics. As recently as 2009, when the Obama administration proposed a fiscal stimulus package, there was a heated discussion regarding the relative size of the expenditure and tax multipliers. The reason fuelling this debate is perhaps clear: ascertaining the potential impact of a proposed measure is key when designing fiscal policy.

The paper by Owyang, Ramey and Zubairy, which was distributed by NEP-HIS on 2013-02-08, tries to answer this question: are government spending multipliers greater during periods of slack for the US and Canada when we look at the historical data? The underlying argument is that expenditure multipliers will be greater in times of crisis, that is, during periods without full employment of labor and capital in the economy. This follows the idea that to awaken animal spirits, there needs to be something left in the forest when the hunters go out.

Image from Barro’s comment on the expenditure multipliers debate, 2009, on the Hoover Institution (Stanford University) blog

The answer that the authors offer is counterintuitive, which makes the paper very interesting. They find that expenditure multipliers were higher in periods of high unemployment in Canada, but were the same across both kinds of periods in the US. To arrive at this conclusion the authors first construct high-frequency (quarterly) historical data for the US and Canada. The procedure they follow to build the database is documented in an annex of the paper available online. After this process they have data on GDP, the GDP deflator, government spending and the unemployment rate for the period 1890q1 to 2010q4 for the US and 1921q1 to 2011q4 for Canada. The other key variable is the “news” variable, which reflects changes in the expected present value of government spending in response to military events, as in Ramey (2011), which in turn draws on Ramey (2009).


Regarding the econometric approach, the authors use Jordà’s (2005) local projection technique to calculate impulse responses. The idea in Jordà (2005) is that, in contrast to VAR approaches, which linearly approximate the data generating process to produce optimal one-period-ahead forecasts, impulse response analysis should care about estimation at longer horizons. In this context, it is better to estimate the impulse responses consistently by a sequence of projections of the endogenous variables, shifted forward in time, onto their lags, using ordinary least squares (OLS) with standard errors robust to heteroskedasticity and serial correlation. The authors estimate a set of OLS regressions of different leads of the log of per capita government expenditure and GDP on their own lags, the “news” variable, and a quadratic trend, separately for periods of high and low unemployment. The coefficient on the “news” variable is the impulse response at that horizon.
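The mechanics of a local projection can be sketched in a few lines: for each horizon h, regress the outcome h periods ahead on the shock today plus lags, and read the impulse response off the shock’s coefficient. This is a deliberately stripped-down illustration (one shock, lags of a single outcome, no trend, controls or state-dependence), not the paper’s actual specification.

```python
import numpy as np

def local_projection_irf(y, news, horizons=8, lags=2):
    """Jordà (2005)-style local projections: for each horizon h, regress
    y[t+h] on the shock news[t] and lags of y via OLS. Returns the
    coefficient on the shock at each horizon, i.e. the impulse response."""
    T = len(y)
    irf = []
    for h in range(horizons):
        rows, lhs = [], []
        for t in range(lags, T - h):
            # regressors: intercept, shock at t, and lagged outcomes
            rows.append([1.0, news[t]] + [y[t - l] for l in range(1, lags + 1)])
            lhs.append(y[t + h])
        X, Y = np.array(rows), np.array(lhs)
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        irf.append(beta[1])  # coefficient on the news shock at horizon h
    return np.array(irf)
```

For an AR(1) outcome with persistence 0.5 and a unit shock, this recovers an impulse response close to 1 on impact and about 0.5 one period out, as the method intends.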

Finally, the paper prompts three comments from me. First, the paper shows a very interesting and creative way to proceed when the data needed for a study are not available for the historical period. Besides combining sources of information, the authors constructed quarterly series of the variables. Since the paper was prepared for the American Economic Review Papers and Proceedings, it is very short and the procedure to construct the variables is explained not in the paper but in the annex. Given the lack of data, assumptions about the data generating process must be made. However, even allowing for the obvious limitation of space, the reader misses an explanation of the assumptions underlying the methods used and of their implications for the results, in particular about the source of variation that allows identification of the coefficients. A section in the paper or in the appendix discussing these issues could shed light on the potential problems of different assumptions.

The last two comments are related to the exact interpretation of the results. The first follows directly from the last sentence of the paper. The authors state that they do not adjust for the fact that taxes often rise at the same time as government spending, which means these multipliers are not pure deficit-financed multipliers. It seems plausible that the effect of the multiplier on GDP depends on whether the increase in government spending was financed by taxes or by debt. If that is the case, and if episodes of tax- and debt-financing are mixed in a non-random way between periods of high and low unemployment, then the coefficients may reflect not only the effect of the exogenous shock but also the effect of the different ways of financing it.

A joke?


The last comment relates to the consistent estimation of the parameters of the model. In the paper, “news” about military expenditure is taken as the only source of exogenous shocks to expenditure over horizons of two years, four years and the peak of each response. This “news” variable reflects exogenous innovations to expenditure from a military source. However, it would be relevant for the paper to discuss the existence of other (non-military) sources of exogenous shocks to expenditure. This matters because, given that the parameters of interest are estimated by OLS, consistency requires zero covariance between the “news” variable and the error term of the equation, an assumption that can be violated if such non-military shocks exist and are correlated with military “news”.

Overall I think this is a very interesting paper, both for the results it reports and for its construction of historical data. I found the results very puzzling, and therefore a strong motivation to keep trying to understand the relationship between GDP and public expenditure.

Sunbeam gets toasted

Accounting fraud, business failure and creative auditing: A micro-analysis of the strange case of Sunbeam Corp.

Marisa Agostini and Giovanni Favero
(Both at Department of Management, Università Ca’ Foscari Venezia, Italy)

This paper puts under the magnifying glass the path to failure of Sunbeam Corp. and emphasizes the reasons for its singularity and exceptionality. The corporate case emerges as an outlier from the analysis of the US fraud cases listed in WebBRD: consideration of the time between fraud disclosure and final bankruptcy reveals an exceptional sampled case. In fact, the maximum value of this temporal variable is estimated at 840 days: this is far outside the range estimated by the survival function for the entire sample, and it refers to Sunbeam Corp. Different hypotheses are evaluated in the paper, starting from the peculiarities of Sunbeam’s history: fraud duration, scapegoating and creative auditing represent the three main points of analysis. Starting from a micro-analysis of this case, which the SEC investigated in depth and this work describes in detail, inputs for future research are then provided on more general problems concerning auditing and accounting fraud.


Review by Masayoshi Noguchi

This paper was distributed by NEP-HIS on 30 September 2012. It was also distributed by other NEP reports, namely Accounting (nep-acc), Heterodox Microeconomics (nep-hme) and Informal & Underground Economics (nep-iue).

Agostini and Favero use the case study method to raise questions and considerations concerning accounting fraud. Their analytical focus is the company now named Sunbeam Products Inc. It was established in 1897 as the Chicago Flexible Shaft Company by John K. Stewart and Thomas Clark. Its first ‘Sunbeam’ branded household appliance, the Princess Electric Iron, was launched in 1910, and following the success of this line of products the company officially changed its name to ‘Sunbeam’ in 1946.

Wikipedia informs us that ‘in 1996, Albert J. Dunlap was recruited to be CEO and Chairman of what was then called Sunbeam-Oster. In 1997, Sunbeam reported massive increases in sales for its various backyard and kitchen items. Dunlap purchased controlling interest in Coleman and Signature Brands (acquiring Mr. Coffee and First Alert) during this time. Stock soared to $52 a share. However, industry insiders were suspicious. The sudden surge in demand for barbecues did not hold up under scrutiny. An internal investigation revealed that Sunbeam was in severe crisis, and that Dunlap had encouraged violations of accepted accounting rules. Dunlap was fired, and under a new CEO, Jerry W. Levin, the company filed for Chapter 11 bankruptcy protection in 2001. In 2002, Sunbeam emerged from bankruptcy as American Household, Inc. (AHI), a privately held company. Its former household products division became the subsidiary Sunbeam Products, Inc. Then AHI was purchased in September 2004 by the Jarden Corporation, of which it is now a subsidiary.’

Al ‘Chainsaw’ Dunlap

Agostini and Favero look at this situation in detail while aiming to show ‘how the specific fraudulent strategy of performance overstatement adopted in the Sunbeam case can be connected to the peculiar modality of its disclosure, allowing to scapegoat the CEO, to (temporarily) discharge the board and the company of any responsibility, and to pursue a business recovery’ (p. 4).

By examining what they consider an exceptional case, Agostini and Favero aim to avoid over simplification and ‘not to sacrifice knowledge of individual elements to wider generalization’, but to be coupled with the informed use of ‘all forms of abstraction since minimal facts and individual cases can serve to reveal more general phenomena’ (p.4). The reason for examining this single outlier case is that, in their view, ‘“deviant cases” follow a peculiar path-dependent logic where early contingent events set cases on an historical trajectory of change that diverges from theoretical expectations’ (p. 2). By so doing, Agostini and Favero aim to ‘enlighten causal mechanisms which are too complex to emerge from standard empirical studies based on statistical approaches’ (p. 4).

The case documents Dunlap’s very aggressive management strategies. As mentioned, these led to fraudulent financial reporting through the misstatement of significant amounts in the financial accounts. In other words, Dunlap was found to have manipulated accounting numbers in numerous ways, skilfully covering these up through the acquisition of new subsidiaries. Measures were also taken to ensure the survival of the company after revelations of the fraud emerged. But in spite of the scapegoating, rather tyrannical management and the extremely long duration of the fraud, the company finally reached bankruptcy.

Normally auditors are integral (either by action or omission) to the process leading to accounting fraud (see for instance my work with Bernardo on the auditing of building societies here). But the case of Sunbeam was exceptional in the sense that its auditor, Arthur Andersen, avoided being implicated in the crisis (though shortly afterwards it was intimately involved in the infamous Enron case). Agostini and Favero point out that ‘[t]his represents another item of exceptionality in Sunbeam Corp. case where there is a shift from the auditors to the CEO of the scapegoat function’ (p. 9). They further add that it was indeed the ‘auditors’ peculiar behaviour’ that led to Dunlap becoming ‘the scapegoat’ (p. 9).

From the late 1940’s to 1997, the upscale toaster market was dominated by the ‘Radiant Control Toaster’ from Sunbeam.

To explore the point above, the authors propose the concept of ‘creative auditing’ as a counterpart to ‘creative accounting’ or ‘earnings management’. According to Agostini and Favero, ‘auditors (agents) may use their professional knowledge, the asymmetrical information and the flexibility inside auditing rules to distract the principals’ attention (owners, shareholders, investors, etc.) from news which will not be welcome’ (p. 14). Agostini and Favero argue that ‘auditors working with management of the company are privy to essential information that can be used in a legal, but not proper way, to maximize their own interests at the expense of the principal’ (p. 14), quoting Brown (2007, p. 178): ‘Prior to scandal, many assumed that either legal liability or reputational concerns would prevent the large audit firms from engaging in collusion with their clients. Enron and the many frauds that followed have undermined these assumptions’ (p. 14).

In spite of having effectively discovered the accounting fraud at Sunbeam, the partner in charge at Arthur Andersen, Phillip E. Harlow, signed a clean audit report on the grounds that ‘the part, which was not presented fairly, was not material, so it did not matter’ (p. 22). Agostini and Favero further claim:

After Sunbeam fraud disclosure, Mr. Harlow was supported by its partners at Arthur Andersen, which stated that this case involved not fraud, but “professional disagreements about the application of sophisticated accounting standards.” As emphasized by The New York Times (May 18, 2001), “in the typical accounting fraud case, the auditors say they were fooled. Here, at least according to the S.E.C., the auditors discovered a substantial part of what the commission calls sham profits”. Moreover, stating the immateriality of a part of improper profits, they used their professional knowledge, the asymmetrical information and the flexibility inside auditing rules to distract other stakeholders’ attention from news which will not be welcome.

However, the above indication refers only to the technical nature of the accounting fraud committed and to the professional judgment exercised over the degree of materiality. In order to consider the case of Sunbeam as an incident of creative auditing (as Agostini and Favero claim it is), the elucidation of Arthur Andersen’s supposed motives for participating in the fraudulent scheme is insufficient. An improvement on this point would be desirable. That said, one can fully agree with their view that the role of auditors in the financial reporting of business enterprises should be re-examined. The paper is thought-provoking in this sense.

Must we question corporate rule?

Financialization of the U.S. corporation: what has been lost, and how it can be regained

William Lazonick (University of Massachusetts-Lowell)

The employment problems that the United States now faces are largely structural. The structural problem is not, however, as many economists have argued, a labor-market mismatch between the skills that prospective employers want and the skills that potential workers have. Rather the employment problem is rooted in changes in the ways that U.S. corporations employ workers as a result of “rationalization”, “marketization”, and “globalization”. From the early 1980s rationalization, characterized by plant closings, eliminated the jobs of unionized blue-collar workers. From the early 1990s marketization, characterized by the end of a career with one company as an employment norm, placed the job security of middle-aged and older white-collar workers in jeopardy. From the early 2000s globalization, characterized by the movement of employment offshore, left all members of the U.S. labor force, even those with advanced educational credentials and substantial work experience, vulnerable to displacement. Nevertheless, the disappearance of these existing middle-class jobs does not explain why, in a world of technological change, U.S. business corporations have failed to use their substantial profits to invest in new rounds of innovation that can create enough new high value-added jobs to replace those that have been lost. I attribute that organizational failure to the financialization of the U.S. corporation. The most obvious manifestation of financialization is the phenomenon of the stock buyback, with which major U.S. corporations seek to manipulate the market prices of their own shares. For the decade 2001-2010 the companies in the S&P 500 Index expended about $3 trillion on stock repurchases. The prime motivation for stock buybacks is the stock-based pay of the corporate executives who make these allocation decisions. 
The justification for stock buybacks is the erroneous ideology, inherited from the conventional theory of the market economy, that, for superior economic performance, companies should be run to “maximize shareholder value”. In this essay I summarize the damage that this ideology is doing to the U.S. economy, and I lay out a policy agenda for restoring equitable and stable economic growth.


Review by Bernardo Bátiz-Lazo

As I have noted before (see Bátiz-Lazo and Reese, 2010), the term ‘financialisation’ has been coined to encompass the greater involvement of countries, businesses and people with financial markets, and in particular increasing levels of debt (i.e. leverage). For instance, Manning (2000) has used the term to describe micro-phenomena such as the growth of personal leverage amongst US consumers.

In their path-breaking study, Froud et al. (2006) use the term to describe how large, non-financial, multinational organisations come to rely on financial services rather than their core business for sustained profitability. They document a pattern of accumulation in which profit making occurs increasingly through financial channels rather than through trade and commodity production.

In the preface to his edited book, by contrast, Epstein (2005) notes the use of the term to denote the ascendancy of “shareholder value” as a mode of corporate governance, or the growing dominance of capital-market financial systems over bank-based financial systems.

Yet another view is offered by the American writer and commentator Kevin Phillips, who coined a sociological and political interpretation of financialisation as “a process whereby financial services, broadly construed, take over the dominant economic, cultural, and political role in a national economy” (Phillips 2006, 268). The rather narrow point I am making here, and which I cannot elaborate for reasons of space, is that the essential nature of financialisation is highly contested and in need of attention.

Sidestepping conceptual issues (and indeed ignoring a large number of contributors to the area), in this paper William Lazonick adopts a view of financialization cum corporate governance and offers broad-based arguments (many based on his own previous research) to explore a relatively recent phenomenon: the demise of the middle class in the US in the late 20th century. In this sense, the abstract is spot on and the paper “does what it says on the can”. Yet purists would consider this too recent to be history. Indeed, the paper was distributed by nep-hme (heterodox microeconomics) on 2012-11-11 rather than NEP-HIS. This was out of neglect rather than design, but it goes to show that the keywords and abstract were initially not on my radar.

William Lazonick

Others may find it easy to poke holes in the broad-stroke arguments that support Lazonick’s thesis. Yet the article was honoured with the 2010 Henrietta Larson Article Award for the best paper in the Business History Review and was part of a conference organised by Lazonick at the Ford Foundation in New York City on December 6-7, 2012 (see the program at the Financial Institutions for Innovation and Development website).

Lazonick points to the erosion of middle-class jobs in a period of rapid technological change. This at a time when others question whether the rate of innovation can continue (see for instance The great innovation debate). Lazonick implicitly considers our age the most innovative ever. His argument, however, is that the way in which the latest wave of innovation was financed is at the heart of the accompanying ever-growing economic inequality.

So for all its shortcomings, Lazonick offers a thought-provoking paper. One that challenges business historians to link with discussions elsewhere, in particular corporate governance, political economy and the sociology of finance. It can, potentially, launch a more critical stream of literature in business history.


Bátiz-Lazo, B. and Reese, C. (2010) ‘Is the future of the ATM past?’ in Alexandros-Andreas Kyrtsis (ed.) Financial Markets and Organizational Technologies: System Architectures, Practices and Risks in the Era of Deregulation, Basingstoke: Palgrave Macmillan, pp. 137-65.

Epstein, G. A. (2005). Financialization and The World Economy. Cheltenham, Edward Elgar Publishing.

Froud, J., S. Johal, A. Leaver and K. Williams (2006). Financialization and Strategy: Narrative and Numbers. London, Routledge.

Manning, R. D. (2000). Credit Card Nation. New York, Basic Books.

Phillips, K. (2006). American Theocracy: The Peril and Politics of Radical Religion, Oil, and Borrowed Money in the 21st Century. London, Penguin.