Does Bank Competition Lead to Higher Growth?

Bank Deregulation, Competition and Economic Growth: The US Free Banking Experience

By Philipp Ager (University of Southern Denmark) and Fabrizio Spargoli (Erasmus University Rotterdam)

Abstract

We exploit the introduction of free banking laws in US states during the 1837-1863 period to examine the impact of removing barriers to bank entry on bank competition and economic
growth. As governments were not concerned about systemic stability in this period, we are
able to isolate the effects of bank competition from those of state implicit guarantees. We find
that the introduction of free banking laws stimulated the creation of new banks and led to
more bank failures. Our empirical evidence indicates that states adopting free banking laws
experienced an increase in output per capita compared to the states that retained state bank
chartering policies. We argue that the fiercer bank competition following the introduction of
free banking laws might have spurred economic growth by (1) increasing the money stock
and the availability of credit; (2) leading to efficiency gains in the banking market. Our
findings suggest that the more frequent bank failures occurring in a competitive banking
market do not harm long-run economic growth in a system without public safety nets.

URL: http://d.repec.org/n?u=RePEc:hes:wpaper:0050&r=his

Circulated by NEP-HIS on: 2013-12-29

Review by Natacha Postel-Vinay

In this paper, Philipp Ager (University of Southern Denmark) and Fabrizio Spargoli (Erasmus University Rotterdam) ask two very topical questions. Does increased bank competition lead to higher economic growth? And, if so, how? Following the recent crisis, many have wondered whether the alternative to “too-big-to-fail” — having many smaller banks competing with each other — would necessarily be a better one. Clouding the debate has been the difficulty of finding appropriate historical settings in which to test the hypothesis that more competition leads to greater growth. In their paper, Ager and Spargoli focus on what they consider the best instance of intense bank competition without any implicit government bail-out guarantee: the American free banking era.

Between 1837 and 1863 new laws were passed in a number of states allowing just about anyone to set up a bank, with very few requirements to fulfill. Until then, banks wanting to open needed a charter from their state, for which they had to meet relatively stringent criteria. As the authors show using a new quantitative analytical framework, the new laws greatly increased the creation of new banks in the states which passed them. As competition increased, however, a higher proportion of banks ended up failing. Could it still be the case that the introduction of free banking laws led to greater growth in those states?

A satire on Andrew Jackson’s campaign to destroy the Bank of the United States and its support among state banks, 1836. It was partly to fill this gap that some states allowed free banking.

The paper’s most important finding is that increased competition among banks did lead to higher economic growth. Jaremski and Rousseau’s 2012 paper (previously reviewed in NEP-HIS here) found that a new “free” bank, as opposed to a charter bank, did not have a positive effect on the local economy. While this is a significant finding in itself, it is also worth looking at the effect of the introduction of free banking laws on aggregate bank behaviour, if only because the entry of new free banks may alter the willingness of charter banks to enter the market, as well as their behaviour once in the market. Charter banks’ behaviour may in turn alter free banks’ behaviour, and so on. Interestingly, Ager and Spargoli’s study finds that in the aggregate, the acceleration in bank entry and the resulting greater competition among all types of banks had a positive effect on economic growth.

To arrive at this conclusion, the authors are careful to include a number of controls. First, there is the possibility that growth opportunities led some states to adopt free banking laws, in which case the authors would face a reverse-causality problem. Hence they conduct a county-level analysis in which they include time-invariant county characteristics and state-specific linear output trends (although perhaps it would be nice to see these output trends going further back in time than 1830). Second, they also control for other laws that states might have introduced at the same time as the free banking ones, which could potentially bias the results. Finally, they control for unobserved heterogeneity between states by examining contiguous counties lying on the border of states that introduced free banking. Their results are robust to these different specifications.

Private Bank Note, Drover’s Bank, Salt Lake City, Utah, $3, 1856

Ager and Spargoli are of course also interested in where this growth came from. They find a positive relationship between the introduction of free banking laws and lending, and conclude that one of the main channels through which this increase in growth occurred was the increase in the availability of credit that greater competition fostered. This story is consistent with the finance-growth nexus literature, which argues that greater (and safer) access to credit is conducive to economic development.

Although this seems perfectly reasonable, it would perhaps have been interesting to see when most of the failures occurred. If, for instance, they mainly occurred towards the end of the period under study, say around the 1857 panic, then it is possible that the negative effects of such failures on subsequent growth would not have been picked up by this study, since it ends in 1860. This leaves open the possibility that the positive relationship between free banking and increased access to credit was not a beneficial one for the economy in the long run. Loan growth (and asset growth more generally) is not always a good thing, as the recent crisis has tended to show.

Private Bank Note, Mechanics Bank, Tennessee, $10, 1854

Overall however, Ager and Spargoli’s paper asks a very important question and offers a solid analysis. A natural next step would be to include output data on the periods preceding and following the free banking era, although the occurrence of the Civil War is an obvious obstacle to this study.

 

References

Jaremski, M., and P. L. Rousseau (2012): “Banks, Free Banks, and U.S. Economic Growth,” Economic Inquiry, 51(2), 1603–21.

“A fluid, ever-evolving, and organic process of improvement, misstep and improvement”: The Long Road to Monetary Union in the USA

Politics on the Road to the U. S. Monetary Union

Peter L. Rousseau (peter.l.rousseau@vanderbilt.edu), Vanderbilt University

URL: http://econpapers.repec.org/paper/vanwpaper/vuecon-sub-13-00006.htm

Abstract: Is political unity a necessary condition for a successful monetary union? The early United States seems a leading example of this principle. But the view is misleadingly simple. I review the historical record and uncover signs that the United States did not achieve a stable monetary union, at least if measured by a uniform currency and adequate safeguards against systemic risk, until well after the Civil War and probably not until the founding of the Federal Reserve. Political change and shifting policy positions end up as key factors in shaping the monetary union that did ultimately emerge.

Review by Manuel Bautista Gonzalez

Peter L. Rousseau

In this piece, distributed by NEP-HIS on 2013-04-13, Peter Rousseau argues for the need to complicate the widely held but simplistic view that political union is a necessary condition for a successful monetary union. By studying the intersection of politics, money and finance in the United States from the American Revolution to the Great Depression, Rousseau posits that there is no automatic mechanism ensuring the concurrence of political and monetary union.

Although Rousseau begins his paper with a rather narrow definition of monetary union as a “system with a uniform currency and adequate safeguards against systemic risk” (Rousseau 2013: 1), he expands it throughout the paper to take into account other characteristics and consequences of processes of monetary unification.

The most obvious element of monetary union is the adoption of a single unit of account and a uniform currency throughout a territory. To function, a monetary union requires credible authorities with effective powers to control the supply of money while properly backing liabilities to minimize uncertainty. To reduce transaction costs, monetary union also demands institutional arrangements between the government and the banking system as a private supplier of means of payment. With monetary union, short-term and long-term capital markets become part of a payments system in which, owing to network effects, the lower borrowing costs of normal times can give way to rapidly spreading liquidity squeezes in times of financial distress. To recapitulate, monetary union has a dual, difficult nature, for it requires the virtuous alignment of public and private interests; hence, politics will mold, for better or for worse, the actual operation of any monetary union.

Patents, Super Patents and Innovation at Regional Level

Related Variety, Unrelated Variety and Technological Breakthroughs: An analysis of U.S. state-level patenting

By Carolina Castaldi  (c.castaldi@tue.nl), School of Innovation Sciences, Eindhoven University of Technology

Koen Frenken, (k.frenken@tue.nl) School of Innovation Sciences, Eindhoven University of Technology

Bart Los, (b.los@rug.nl), Groningen Growth and Development Centre

URL: http://econpapers.repec.org/paper/eguwpaper/1302.htm

Abstract

We investigate how variety affects the innovation output of a region. Borrowing arguments from theories of recombinant innovation, we expect that related variety will enhance innovation as related technologies are more easily recombined into a new technology. However, we also expect that unrelated variety enhances technological breakthroughs, since radical innovation often stems from connecting previously unrelated technologies opening up whole new functionalities and applications. Using patent data for US states in the period 1977-1999 and associated citation data, we find evidence for both hypotheses. Our study thus sheds a new and critical light on the related-variety hypothesis in economic geography.

Review by Anna Missiaia

This paper by Carolina Castaldi, Koen Frenken and Bart Los was distributed by NEP-HIS on 30-03-2013. The paper is not, strictly speaking, an economic or business history paper. However, it provides some very interesting insights into how technological innovation and technological breakthroughs happen. This is a large and expanding field in economic history, and ongoing research on the economics of innovation can, I believe, be of interest to many of our readers.

Professor Butts and the Self-Operating Napkin: The “Self-Operating Napkin” is activated when the soup spoon (A) is raised to mouth, pulling string (B) and thereby jerking ladle (C) which throws cracker (D) past parrot (E). Parrot jumps after cracker and perch (F) tilts, upsetting seeds (G) into pail (H). Extra weight in pail pulls cord (I), which opens and lights automatic cigar lighter (J), setting off skyrocket (K) which causes sickle (L) to cut string (M) and allow pendulum with attached napkin to swing back and forth, thereby wiping chin. (Rube Goldberg, 1918).

The paper is concerned with how innovation in a region is affected by the connections among its sectors in terms of shared technological competences. The term “variety” conveys this concept. The authors distinguish between two types of variety: related and unrelated. The former describes connections among sectors that are complementary in terms of competences and can easily exchange technological knowledge. Unrelated variety, on the other hand, stems from sectors that do not appear to have complementary technologies.

It is useful to keep these two types of variety distinct because they affect innovation differently. Related variety supports productivity and employment growth at the regional level. Unrelated variety, however, is what generates technological breakthroughs, as it brings a completely new type of technology into a sector. In a subsequent stage, unrelated variety becomes related, as it is absorbed by the new sector.

The paper keeps these two types of variety separate and tests for their effects. The authors use patent data for US states over the period 1977-1999. The methodology involves regressing the number of patents, as a proxy for innovation, on measures of related variety, unrelated variety, research and development investment, a time trend and state fixed effects. Variety is measured by looking at the dispersion of patents within and between technological classes. The paper also proposes two different regressions, one using the total number of patents as the dependent variable and one using the share of superstar patents, that is, patents that lead to breakthrough technologies. Superstar patents are distinguished from “regular” patents according to the distribution of their citations: superstar patents have a fat tail, meaning that they are cited more in the later stages of their life compared to regular patents.
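The within/between dispersion measure described above is commonly built from entropy at two levels of the technology classification. A toy sketch, with hypothetical class codes, of how unrelated variety (between broad fields) and related variety (within fields) can be computed so that the two components sum to total entropy:

```python
import numpy as np

# Sketch of an entropy-based variety decomposition in the spirit of the
# paper's measures (class codes are hypothetical). The first character
# of each code is the broad technology field; the full code is the
# detailed class.
patent_classes = ["A01", "A01", "A02", "B01", "B01", "B02", "B03", "C01"]

codes, counts = np.unique(patent_classes, return_counts=True)
shares = counts / counts.sum()
fields = np.array([c[0] for c in codes])
field_names = np.unique(fields)

# Unrelated variety: entropy *between* broad fields.
field_shares = np.array([shares[fields == f].sum() for f in field_names])
unrelated = -(field_shares * np.log2(field_shares)).sum()

# Related variety: share-weighted entropy *within* each broad field.
related = 0.0
for f, pf in zip(field_names, field_shares):
    within = shares[fields == f] / pf
    related += pf * -(within * np.log2(within)).sum()

print(round(unrelated, 3), round(related, 3))  # the two sum to total entropy
```

The decomposition property (between-field entropy plus weighted within-field entropy equals total entropy) is what lets the two measures enter the same regression without double-counting dispersion.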

A nice contribution of this paper is to identify superstar patents through the statistical distribution of their citations instead of relying on superimposed criteria such as being in the top 1% or 5% of citations. The idea here is to distinguish between general innovation (regular patents) and breakthrough innovation (superstar patents). Theory predicts that regular patents will be positively affected by related variety, producing general innovation, while superstar patents will be positively correlated with unrelated variety, producing breakthrough innovation. The empirical analysis nicely confirms the theory.
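As a rough illustration of the citation-distribution criterion (hypothetical citation lags and a made-up tail-share cutoff, not the authors' statistical test):

```python
import numpy as np

# Rough illustration of the citation-distribution idea (hypothetical
# citation lags and a made-up tail-share cutoff, not the authors' test):
# a superstar patent keeps collecting citations long after grant, so its
# citation-lag distribution has a fat right tail.
def late_citation_share(citation_lags, cutoff_years=10):
    """Share of citations arriving more than `cutoff_years` after grant."""
    lags = np.asarray(citation_lags)
    return (lags > cutoff_years).mean()

regular = [1, 2, 2, 3, 4, 5, 6, 7]        # citations cluster early
superstar = [1, 3, 5, 8, 11, 13, 15, 20]  # citations keep arriving late

threshold = 0.25  # hypothetical cutoff on the tail share
for name, lags in [("regular", regular), ("superstar", superstar)]:
    print(name, late_citation_share(lags) > threshold)
```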

Technological progress is said to resemble a flight of stairs

The possible shortcomings of the paper relate to the role of geography in the analysis. The sample is at the US state level, and the underlying implication is that variety in a state affects the number of patents registered in it. There could be, under this assumption, some issues of spatial dependence. The authors touch upon this point in two parts of the paper: in the methodology section they explain that superstar patents tend to cluster in fewer states than general patents, and that this pattern requires a different approach for the two types of patents. It would be useful if the authors could elaborate further on this issue in a future version of the paper.

As for possible spatial dependence among the explanatory variables, the authors try to control for the fact that R&D in one state could affect the patent output of neighboring states as well. They construct an adjacency matrix to capture the effect of the R&D effort of neighboring states.
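A minimal sketch of how such an adjacency matrix yields a neighboring-states R&D regressor (state labels and values are invented):

```python
import numpy as np

# Sketch of the neighboring-states R&D control (state labels and values
# are invented). A row-normalized adjacency matrix W turns neighbors'
# R&D into a spatial lag, W @ rd, used as an extra regressor.
states = ["A", "B", "C", "D"]
# adjacency: A-B, B-C and C-D share a border; A and D do not touch
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)  # row-normalize: average over neighbors

rd = np.array([1.0, 2.0, 4.0, 8.0])   # own-state R&D spending
neighbor_rd = W @ rd                  # average R&D of adjacent states
print(dict(zip(states, neighbor_rd)))
```

This also makes the reviewer's concern concrete: any channel not encoded in W, such as a firm with branches in non-adjacent states, leaves the spatial lag unchanged.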

The conclusion is that the analysis is robust to spatial dependence. In spite of this robustness check, some concerns remain. Restricting the R&D effect to neighboring states only could be a limitation, as the effect might operate not only through physical proximity but also through other types of connections: for example, the same firm could have branches in different non-adjacent states, leading to an influence not captured by the adjacency matrix.

In short, this paper provides very interesting insights into how two types of innovation, as measured by patent citations, can arise at the regional level. The results are consistent with the theory and could be useful for future research in historical perspective. A further improvement would be to conduct more robustness checks on the geographical aspects of these results, especially extending them to non-adjacent states.

Images of the future technology – The Jetsons, 1962

Fiscal Policy during high unemployment periods: still a bad idea?

Are Government Spending Multipliers Greater During Periods of Slack? Evidence from 20th Century Historical Data

Michael T. Owyang, Valerie A. Ramey, Sarah Zubairy

Abstract

A key question that has arisen during recent debates is whether government spending multipliers are larger during times when resources are idle. This paper seeks to shed light on this question by analyzing new quarterly historical data covering multiple large wars and depressions in the U.S. and Canada. Using an extension of Ramey’s (2011) military news series and Jordà’s (2005) method for estimating impulse responses, we find no evidence that multipliers are greater during periods of high unemployment in the U.S. In every case, the estimated multipliers are below unity. We do find some evidence of higher multipliers during periods of slack in Canada, with some multipliers above unity.

URL: http://econpapers.repec.org/paper/fipfedlwp/2013-004.htm

Review by Sebastian Fleitas

For a very long time the size of expenditure multipliers has been one of the most vivid debates in economics. As recently as 2009, when the Obama administration proposed a fiscal stimulus package, there was a heated discussion regarding the relative size of the expenditure and tax multipliers. The reason is perhaps clear: ascertaining the potential impact of a proposed measure is key when designing fiscal policy.

The paper by Owyang, Ramey and Zubairy, which was distributed by NEP-HIS on 2013-02-08, tries to answer this question: are government spending multipliers greater during periods of slack in the US and Canada when we look at the historical data? The underlying argument is that expenditure multipliers should be greater in times of crisis, that is, in periods when the economy's labor and capital are not fully employed. This follows the intuition that to wake up animal spirits, there needs to be something in the forest when the hunters go out.

Image in Barro’s comment on expenditure multipliers debate in 2009 in Hoover Institution Stanford University’s blog (http://www.hoover.org/publications/hoover-digest/article/5401)

The answer that the authors offer is counterintuitive, which makes the paper very interesting. They find that expenditure multipliers were higher in periods of high unemployment in Canada, but were the same across both kinds of periods in the US. To arrive at this conclusion the authors first construct high-frequency (quarterly) historical data for the US and Canada. The procedure they follow to build the database is documented in an annex of the paper available online (here). After this process they have data on GDP, the GDP deflator, government spending and the unemployment rate for 1890q1 to 2010q4 for the US and for 1921q1 to 2011q4 for Canada. The other key variable is the “news” variable, which reflects changes in the expected present value of government spending in response to military events, as in Ramey (2011), which in turn draws on Ramey (2009).

Regarding the econometric approach, the authors use Jordà's (2005) local projection technique to calculate impulse responses. The idea in Jordà (2005) is that, in contrast to VAR approaches, which linearly approximate the data-generating process to produce optimal one-period-ahead forecasts, impulse response analysis should care about estimation at longer horizons. In this context, it is a better approach to estimate the impulse responses consistently by a sequence of projections of the endogenous variables, shifted forward in time, onto their lags, using ordinary least squares (OLS) with standard errors robust to heteroskedasticity and serial correlation. The authors estimate a set of OLS regressions of different leads of the log of per capita government expenditure and GDP on their lags, the news variable and a quadratic trend, separately for periods of high and low unemployment. The coefficient on the “news” variable is the impulse response at that horizon.
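A stripped-down sketch of the local projection idea on simulated data: the horizon-h impulse response is simply the OLS coefficient on the shock in a regression of the outcome shifted h periods forward. The lag controls, trend and state dependence of the actual paper are omitted here, and all parameter values are invented:

```python
import numpy as np

# Stripped-down local projection on simulated data: the horizon-h
# impulse response is the OLS coefficient on the shock ("news") in a
# regression of the outcome shifted h periods forward. The lag
# controls, trend and state dependence of the actual paper are omitted.
rng = np.random.default_rng(1)
T = 2000
news = rng.normal(size=T)             # exogenous shock series
y = np.zeros(T)
for t in range(1, T):                 # y responds with a decaying pattern
    y[t] = 0.5 * news[t] + 0.6 * y[t - 1] + 0.1 * rng.normal()

def lp_irf(y, shock, h):
    """OLS of y_{t+h} on shock_t plus a constant."""
    yh, s = y[h:], shock[:len(y) - h]
    X = np.column_stack([np.ones_like(s), s])
    return np.linalg.lstsq(X, yh, rcond=None)[0][1]

irf = [lp_irf(y, news, h) for h in range(4)]
print([round(b, 2) for b in irf])     # should decay from about 0.5
```

One regression per horizon, rather than iterating a one-step VAR forward, is what makes the estimates robust to misspecification of the short-run dynamics.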

Finally, the paper prompted three comments. First of all, the paper shows a very interesting and creative way to proceed when the data needed for a study are not available for the historical period in question. Besides combining sources of information, the authors constructed quarterly series of the variables. Since the paper was prepared for the American Economic Review Papers and Proceedings, it is very short, and the procedure to construct the variables is explained not in the paper but in the annex. Given the lack of data, assumptions about the data-generating process must be made. However, and beyond the obvious limitation of space, the reader may miss an explanation of the assumptions behind the methods used and of the implications these assumptions have for the results, in particular regarding the source of variation that allows the identification of the coefficients. A section in the paper or in the appendix discussing these issues could shed light on the potential problems of different assumptions.

The last two comments concern the exact interpretation of the results. The first follows directly from the last sentence of the paper. The authors state that they do not adjust for the fact that taxes often rise at the same time as government spending, which means these multipliers are not pure deficit-financed multipliers. It seems plausible that the effect of the multiplier on GDP depends on whether the increase in government spending was financed by taxes or by debt. If that is the case, and if episodes of tax and debt finance are distributed non-randomly between periods of high and low unemployment, then the coefficients may reflect not only the effect of the exogenous shock but also the effect of the different ways of financing it.

A joke?

A joke?

The last comment relates to the consistent estimation of the parameters of the model. In the paper the “news” about military expenditure is taken as the only source of exogenous shocks to spending over horizons of two years, four years and the time to the peak of each response. This “news” variable reflects exogenous innovations to expenditure from a military source. However, it would be relevant for the paper to discuss the existence of other (non-military) sources of exogenous shocks to expenditure. The issue matters because, given that the parameters of interest are estimated by OLS, consistency requires zero covariance between the “news” variable and the error term of the equation, and this assumption can be violated if such non-military shocks exist and are correlated with the military “news”.
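A toy simulation of this consistency concern: when an omitted spending shock is correlated with the "news" series, the OLS coefficient on "news" is pulled away from the true effect. All parameter values are illustrative:

```python
import numpy as np

# Toy simulation of the consistency concern (all parameters are
# illustrative): if an omitted non-military spending shock z is
# correlated with the military "news" series, the OLS coefficient on
# news absorbs part of z's effect and is inconsistent.
rng = np.random.default_rng(0)
T = 100_000
news = rng.normal(size=T)
z = 0.5 * news + rng.normal(size=T)   # omitted shock, correlated with news
y = 1.0 * news + 0.8 * z + rng.normal(size=T)

X = np.column_stack([np.ones(T), news])
beta_biased = np.linalg.lstsq(X, y, rcond=None)[0][1]   # z omitted
Xz = np.column_stack([np.ones(T), news, z])
beta_full = np.linalg.lstsq(Xz, y, rcond=None)[0][1]    # z controlled for

print(round(beta_biased, 2), round(beta_full, 2))
```

Here the omitted-variable formula gives a probability limit of 1 + 0.8 × 0.5 = 1.4 for the short regression versus the true coefficient of 1.0 in the full one.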

Overall I think this is a very interesting paper, both for the results it finds and for the construction of the historical data. I found the results very puzzling and therefore a strong motivation to continue trying to understand the relationship between GDP and public expenditure.

Sunbeam gets toasted

Accounting fraud, business failure and creative auditing: A micro-analysis of the strange case of Sunbeam Corp.

Marisa Agostini (marisa.agostini@unive.it) and Giovanni Favero (gfavero@unive.it)
(Both at Department of Management, Università Ca’ Foscari Venezia, Italy)

Abstract
This paper puts under the magnifying glass the path to failure of Sunbeam Corp. and emphasizes the reasons of its singularity and exceptionality. This corporate case emerges as an outlier from the analysis of the US fraud cases mentioned by WebBRD: the consideration of the time between fraud disclosure and the final bankruptcy reveals the presence of an exceptional sampled case. In fact, the maximum value of this temporal variable is estimated equal to 840 days: it is really far from the range estimated by the survival function for the entire sample and it refers to Sunbeam Corp. Different hypotheses are evaluated in the paper, starting from the consideration of Sunbeam’s history peculiarities: fraud duration, scapegoating and creative auditing represent the three main points of analysis. Starting from a micro-analysis of this case that the SEC investigated in depth and this work describes in detail, inputs for future research are then provided about more general problems concerning auditing and accounting fraud.

URL http://econpapers.repec.org/paper/vnmwpdman/25.htm

Review by Masayoshi Noguchi

This paper was distributed by NEP-HIS on 30 September 2012. It was also distributed by other NEP reports, namely Accounting (nep-acc), Heterodox Microeconomics (nep-hme) and Informal & Underground Economics (nep-iue).

Agostini and Favero use the case study method to raise questions and considerations concerning accounting fraud. Their analytical focus is the company now named Sunbeam Products Inc. It was established in 1897 as the Chicago Flexible Shaft Company by John K. Stewart and Thomas Clark. Its first ‘Sunbeam’ branded household appliance, the Princess Electric Iron, was launched in 1910, and following the success of this line of products the company officially changed its name to ‘Sunbeam’ in 1946.

Wikipedia informs us that ‘in 1996, Albert J. Dunlap was recruited to be CEO and Chairman of what was then called Sunbeam-Oster. In 1997, Sunbeam reported massive increases in sales for its various backyard and kitchen items. Dunlap purchased controlling interest in Coleman and Signature Brands (acquiring Mr. Coffee and First Alert) during this time. Stock soared to $52 a share. However, industry insiders were suspicious. The sudden surge in demand for barbecues did not hold up under scrutiny. An internal investigation revealed that Sunbeam was in severe crisis, and that Dunlap had encouraged violations of accepted accounting rules. Dunlap was fired, and under a new CEO, Jerry W. Levin, the company filed for Chapter 11 bankruptcy protection in 2001. In 2002, Sunbeam emerged from bankruptcy as American Household, Inc. (AHI), a privately held company. Its former household products division became the subsidiary Sunbeam Products, Inc. Then AHI was purchased in September 2004 by the Jarden Corporation, of which it is now a subsidiary.’

Al ‘Chainsaw’ Dunlap

Agostini and Favero look at this situation in detail while aiming to show ‘how the specific fraudulent strategy of performance overstatement adopted in the Sunbeam case can be connected to the peculiar modality of its disclosure, allowing to scapegoat the CEO, to (temporarily) discharge the board and the company of any responsibility, and to pursue a business recovery’ (p. 4).

By examining what they consider an exceptional case, Agostini and Favero aim to avoid over simplification and ‘not to sacrifice knowledge of individual elements to wider generalization’, but to be coupled with the informed use of ‘all forms of abstraction since minimal facts and individual cases can serve to reveal more general phenomena’ (p.4). The reason for examining this single outlier case is that, in their view, ‘“deviant cases” follow a peculiar path-dependent logic where early contingent events set cases on an historical trajectory of change that diverges from theoretical expectations’ (p. 2). By so doing, Agostini and Favero aim to ‘enlighten causal mechanisms which are too complex to emerge from standard empirical studies based on statistical approaches’ (p. 4).

The case documents Dunlap’s very aggressive management strategies. As mentioned, these led to fraudulent financial reporting through the misstatement of significant amounts in the financial accounts. In other words, Dunlap was found to have manipulated accounting numbers in numerous ways, skilfully covering these up through the acquisition of new subsidiaries. Measures were also taken to ensure the survival of the company after revelations of the fraud emerged. But in spite of the scapegoating, the rather tyrannical management and the extremely long duration of the fraud, the company finally reached bankruptcy.

Normally auditors are integral (either by action or omission) to the process leading to accounting fraud (see for instance my work with Bernardo on the auditing of building societies here). But the case of Sunbeam was exceptional in the sense that its auditor, Arthur Andersen, avoided being implicated in the crisis (though shortly afterwards it was intimately involved in the infamous Enron case). Agostini and Favero point out that ‘[t]his represents another item of exceptionality in Sunbeam Corp. case where there is a shift from the auditors to the CEO of the scapegoat function’ (p. 9). They further add that it was indeed the auditors’ peculiar behaviour that led to Dunlap becoming ‘the scapegoat’ (p. 9).

From the late 1940s to 1997, the upscale toaster market was dominated by the ‘Radiant Control Toaster’ from Sunbeam.

To explore the point above, the authors propose the concept of ‘creative auditing’ as a counterpart to ‘creative accounting’ or ‘earnings management’. According to Agostini and Favero, ‘auditors (agents) may use their professional knowledge, the asymmetrical information and the flexibility inside auditing rules to distract the principals’ attention (owners, shareholders, investors, etc.) from news which will not be welcome’ (p. 14). They argue that ‘auditors working with management of the company are privy to essential information that can be used in a legal, but not proper way, to maximize their own interests at the expense of the principal’ (p. 14), citing Brown (2007, p. 178): ‘Prior to scandal, many assumed that either legal liability or reputational concerns would prevent the large audit firms from engaging in collusion with their clients. Enron and the many frauds that followed have undermined these assumptions.’

In spite of having effectively discovered the accounting fraud at Sunbeam, the partner in charge at Arthur Andersen, Phillip E. Harlow, signed a clean audit report on the grounds that ‘the part, which was not presented fairly, was not material, so it did not matter’ (p. 22). Agostini and Favero further claim:

After Sunbeam fraud disclosure, Mr. Harlow was supported by its partners at Arthur Andersen, which stated that this case involved not fraud, but “professional disagreements about the application of sophisticated accounting standards.” As emphasized by The New York Times (May 18, 2001), “in the typical accounting fraud case, the auditors say they were fooled. Here, at least according to the S.E.C., the auditors discovered a substantial part of what the commission calls sham profits”. Moreover, stating the immateriality of a part of improper profits, they used their professional knowledge, the asymmetrical information and the flexibility inside auditing rules to distract other stakeholders’ attention from news which will not be welcome.

However, the above only refers to the technical nature of the accounting fraud committed and the professional judgment exercised regarding materiality. In order to consider the case of Sunbeam an incident of creative auditing, as Agostini and Favero claim it is, the paper’s elucidation of Arthur Andersen’s supposed reasons for participating in the fraudulent scheme is insufficient. An improvement on this point would be desirable. One can nonetheless fully agree with their view that the role of auditors in the financial reporting of business enterprises should be reexamined; the paper is thought-provoking in this sense.