Patents, Super Patents and Innovation at Regional Level

Related Variety, Unrelated Variety and Technological Breakthroughs: An analysis of U.S. state-level patenting

By Carolina Castaldi (c.castaldi@tue.nl), School of Innovation Sciences, Eindhoven University of Technology

Koen Frenken (k.frenken@tue.nl), School of Innovation Sciences, Eindhoven University of Technology

Bart Los (b.los@rug.nl), Groningen Growth and Development Centre

URL: http://econpapers.repec.org/paper/eguwpaper/1302.htm

Abstract

We investigate how variety affects the innovation output of a region. Borrowing arguments from theories of recombinant innovation, we expect that related variety will enhance innovation as related technologies are more easily recombined into a new technology. However, we also expect that unrelated variety enhances technological breakthroughs, since radical innovation often stems from connecting previously unrelated technologies opening up whole new functionalities and applications. Using patent data for US states in the period 1977-1999 and associated citation data, we find evidence for both hypotheses. Our study thus sheds a new and critical light on the related-variety hypothesis in economic geography.

Review by Anna Missiaia

This paper by Carolina Castaldi, Koen Frenken and Bart Los was distributed by NEP-HIS on 30-03-2013. The paper is not, strictly speaking, an economic or business history paper. However, it provides some very interesting insights into how technological innovation and technological breakthroughs happen. This is a large and expanding field in economic history, and ongoing research on the economics of innovation can, I believe, be of interest to many of our readers.

Professor Butts and the Self-Operating Napkin: The “Self-Operating Napkin” is activated when the soup spoon (A) is raised to mouth, pulling string (B) and thereby jerking ladle (C) which throws cracker (D) past parrot (E). Parrot jumps after cracker and perch (F) tilts, upsetting seeds (G) into pail (H). Extra weight in pail pulls cord (I), which opens and lights automatic cigar lighter (J), setting off skyrocket (K) which causes sickle (L) to cut string (M) and allow pendulum with attached napkin to swing back and forth, thereby wiping chin. (Rube Goldberg, 1918).

The paper is concerned with the study of how innovation in a region is affected by the connections among its sectors in terms of shared technological competences. The term “variety” conveys this concept. The authors distinguish between two types of variety: related and unrelated. The former describes the connection among sectors that are complementary in terms of competences and can easily exchange technological knowledge. Unrelated variety, on the other hand, stems from sectors that do not appear to have complementary technologies.

These two types of variety are worth distinguishing because of their different effects on innovation. Related variety supports productivity and employment growth at the regional level. Unrelated variety, however, is the one that produces technological breakthroughs, as it brings a completely new type of technology into a sector. In a subsequent stage, unrelated variety becomes related, being absorbed by the new sector.

The paper keeps these two types of variety separate and tests for their effects. The authors use patent data for US states in the period 1977-1999. The methodology involves regressing the number of patents, as a proxy for innovation, on measures of related variety, unrelated variety, research and development investment, a time trend and state fixed effects. Variety is measured through the dispersion of patents within and between technological classes (see the sketch below). The paper also proposes two different regressions, one using the total number of patents as dependent variable and one using the share of superstar patents, that is, patents that lead to breakthrough technologies. Superstar patents are distinguished from “regular” patents according to the distribution of their citations: the citation distribution of superstar patents has a fat tail, meaning that they continue to be cited in later stages of their development compared to regular patents.
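
To make the variety measures concrete, here is a minimal Python sketch of the kind of entropy decomposition used in the related/unrelated-variety literature, in which between-class entropy proxies unrelated variety and the weighted within-class entropy proxies related variety. The class scheme (first character of a hypothetical class code) and the variable names are assumptions for illustration, not the authors' exact implementation:

```python
import numpy as np
import pandas as pd

def variety(counts: pd.Series):
    """Entropy decomposition of a state's patent portfolio.

    `counts` is indexed by fine-grained technology class codes (all counts
    positive); for illustration, the first character of each code serves
    as the coarse class. Returns (unrelated_variety, related_variety).
    """
    shares = counts / counts.sum()                      # p_i: fine-class shares
    coarse = shares.groupby(shares.index.str[0]).sum()  # P_g: coarse-class shares
    # Unrelated variety: entropy *between* coarse technology classes
    uv = -(coarse * np.log2(coarse)).sum()
    # Related variety: P_g-weighted entropy *within* each coarse class
    rv = 0.0
    for g, P_g in coarse.items():
        within = shares[shares.index.str[0] == g] / P_g
        rv += P_g * -(within * np.log2(within)).sum()
    return uv, rv

# Toy portfolio: four fine classes spread over three coarse classes
uv, rv = variety(pd.Series({"A01": 10, "A02": 5, "B01": 8, "C03": 2}))
```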

A nice contribution of this paper is to identify superstar patents through the statistical distribution of their citations, instead of relying on imposed criteria such as being in the top 1% or 5% of citations. The idea here is to distinguish between general innovation (regular patents) and breakthrough innovation (superstar patents). Theory predicts that regular patents will be positively affected by related variety, producing general innovation, while superstar patents will be positively correlated with unrelated variety, producing breakthrough innovation. The empirical analysis nicely confirms the theory.
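
The tail-based identification can be illustrated with a hedged sketch: fit a distribution to the body of the citation counts and flag as “superstar” the patents whose citations lie beyond the fitted tail. The lognormal body, the tail probability and the function name are assumptions chosen for illustration, not the authors’ actual procedure:

```python
import numpy as np
from scipy import stats

def flag_superstars(citations, tail_p=0.999):
    """Flag patents whose citation counts lie beyond the tail of a
    lognormal fitted to the distribution of positive counts
    (illustrative only; not the paper's estimator)."""
    cites = np.asarray(citations, dtype=float)
    positive = cites[cites > 0]
    # Fit a lognormal "body" to the positive citation counts
    shape, loc, scale = stats.lognorm.fit(positive, floc=0)
    # Counts beyond the fitted tail quantile are candidate breakthroughs:
    # a fat empirical tail puts excess mass past this point
    threshold = stats.lognorm.ppf(tail_p, shape, loc=loc, scale=scale)
    return cites > threshold
```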

Technological progress is said to resemble a flight of stairs

The possible shortcomings of the paper are related to the role of geography in the analysis. The sample is at the US state level and the underlying implication is that variety in a state affects the number of patents registered in it. There could be, under this assumption, some issues of spatial dependence. The authors touch upon this point in two parts of the paper: in the methodology section they explain that superstar patents tend to cluster in fewer states than general patents, a pattern that requires a different approach for the two types of patents. It would be useful if the authors could elaborate further on this issue in a future version of the paper.

As for the possible spatial dependence effect among explanatory variables, the authors try to control for the fact that R&D in one state could affect the patent output of neighboring states as well. They construct an adjacency matrix to capture the effect of the R&D effort of neighboring states.

The conclusion is that the analysis is robust to spatial dependence. In spite of this robustness check, some concerns remain. Restricting the R&D effect to neighboring states could be a limitation, as the effect may operate not only through physical proximity but also through other types of connections: for example, the same firm could have branches in different non-adjacent states, leading to an influence not captured by the adjacency matrix.

In short, this paper provides very interesting insights into how two types of innovation, as measured by patent citations, can arise at the regional level. The results are consistent with the theory and could be useful to future research in historical perspective. A further improvement of the paper would be to conduct more robustness checks on the geographical aspects of these results, especially extending them to non-adjacent states.

Images of future technology – The Jetsons, 1962

Fiscal Policy during high unemployment periods: still a bad idea?

Are Government Spending Multipliers Greater During Periods of Slack? Evidence from 20th Century Historical Data

Michael T. Owyang, Valerie A. Ramey, Sarah Zubairy

Abstract

A key question that has arisen during recent debates is whether government spending multipliers are larger during times when resources are idle. This paper seeks to shed light on this question by analyzing new quarterly historical data covering multiple large wars and depressions in the U.S. and Canada. Using an extension of Ramey’s (2011) military news series and Jordà’s (2005) method for estimating impulse responses, we find no evidence that multipliers are greater during periods of high unemployment in the U.S. In every case, the estimated multipliers are below unity. We do find some evidence of higher multipliers during periods of slack in Canada, with some multipliers above unity.

URL: http://econpapers.repec.org/paper/fipfedlwp/2013-004.htm

Review by Sebastian Fleitas

For a very long time the size of expenditure multipliers has been one of the liveliest debates in economics. For instance, as recently as 2009, when the Obama administration proposed a fiscal stimulus package, there was a heated discussion regarding the relative size of the expenditure and tax multipliers. The reason fuelling this debate is perhaps clear: ascertaining the potential impact of a particular proposed measure is key when designing fiscal policy.

The paper by Owyang, Ramey and Zubairy, which was distributed by NEP-HIS on 2013-02-08, tries to answer this question: Are government spending multipliers greater during periods of slack for the US and Canada when we look at the historical data? The argument behind it is that expenditure multipliers will be greater in times of crisis, that is, during periods without full employment of labor and capital in the economy. This argument follows the idea that to wake up animal spirits, you need to have something in the forest when guys go out to hunt.

Image from Barro’s comment on the 2009 expenditure multipliers debate on the Hoover Institution (Stanford University) blog (http://www.hoover.org/publications/hoover-digest/article/5401)

The answer that the authors offer is counterintuitive, which makes the paper very interesting. They find that expenditure multipliers were higher in periods of high unemployment in Canada but were the same across both kinds of periods in the US. To arrive at this conclusion the authors first construct high-frequency (quarterly) historical data for the US and Canada. The procedure they follow to build the database is documented in an annex available online (here). After this process they have data on GDP, the GDP deflator, government spending and the unemployment rate for the period 1890q1 to 2010q4 for the US and 1921q1 to 2011q4 for Canada. The other key variable is the “news” variable, which reflects changes in the expected present value of government spending in response to military events, as in Ramey (2011), which in turn draws on Ramey (2009).


Regarding the econometric approach, the authors use Jordà’s (2005) local projection technique to calculate impulse responses. The idea in Jordà (2005) is that, in contrast to VAR approaches, which linearly approximate the data generating process to produce optimal one-period forecasts, impulse response analysis should care about estimation at longer horizons. In this context, a better approach is to estimate the impulse responses consistently by a sequence of projections of the endogenous variables, shifted forward in time, onto their lags, using ordinary least squares (OLS) with standard errors that address heteroskedasticity and serial correlation. The authors estimate a set of OLS regressions of different leads of the log of per capita government expenditure and GDP on their lags, the “news” variable and a quadratic trend, separately for periods of high and low unemployment. The coefficient on the variable “news” is the impulse response at that horizon (see the sketch below).
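
A minimal Python sketch may help fix the local-projection mechanics. The column names ('gdp', 'gov', 'news'), the lag length and the pooled specification are assumptions for illustration; the authors estimate the projections separately for high- and low-unemployment states of the economy:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def local_projection_irf(df: pd.DataFrame, horizons: int = 12, lags: int = 4):
    """Jordà (2005) local projections: for each horizon h, regress the
    h-step-ahead outcome on the shock and controls by OLS; the coefficient
    on the shock is the impulse response at h."""
    t = np.arange(len(df), dtype=float)
    irf = []
    for h in range(horizons + 1):
        y = df["gdp"].shift(-h).rename("y")        # outcome led h quarters ahead
        X = pd.DataFrame({"news": df["news"],      # military-news shock
                          "trend": t, "trend2": t ** 2})  # quadratic trend
        for l in range(1, lags + 1):               # lagged endogenous controls
            X[f"gdp_l{l}"] = df["gdp"].shift(l)
            X[f"gov_l{l}"] = df["gov"].shift(l)
        data = pd.concat([y, X], axis=1).dropna()
        # HAC (Newey-West) errors: h-step-ahead residuals overlap across t,
        # so they are serially correlated by construction
        res = sm.OLS(data["y"], sm.add_constant(data.drop(columns="y"))).fit(
            cov_type="HAC", cov_kwds={"maxlags": h + lags})
        irf.append(res.params["news"])
    return np.array(irf)  # response of gdp to news at horizons 0..H
```

Running the same loop separately on high- and low-unemployment subsamples and comparing the two resulting arrays is the flavor of state dependence the paper tests.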

Finally, the paper prompted three comments from me. First, the paper shows a very interesting and creative way to proceed when the data needed for the study are not available for the historical period. Besides combining sources of information, the authors constructed quarterly series of the variables. Since the paper was prepared for the American Economic Review Papers and Proceedings, it is very short and the procedure to construct the variables is explained not in the paper but in the annex. Given the lack of data, assumptions about the data generating process must be made. However, space limitations aside, the reader misses an explanation of the assumptions embedded in the methods used and of their implications for the results, in particular regarding the source of variation that allows the identification of the coefficients. A section in the paper or in the appendix discussing these issues could shed light on the potential problems posed by different assumptions.

The last two comments are related to the exact interpretation of the results. The first directly follows from the last sentence of the paper. The authors state that they do not adjust for the fact that taxes often rise at the same time as government spending, which means these multipliers are not pure deficit-financed multipliers. It seems plausible that the effect on GDP depends on whether the increase in government spending was financed by taxes or by debt. If that is the case, and if the episodes of tax and debt financing are mixed in a non-random way between periods of high and low unemployment, then the coefficients may reflect not only the effect of the exogenous shock but also the effect of the different ways of financing it.

A joke?

The last comment relates to the consistent estimation of the parameters of the model. In the paper the “news” about military expenditure is taken as the only source of exogenous shocks to the economy, with responses evaluated at two years, four years and the peak of each response. This “news” variable reflects exogenous innovations to expenditure from a military source. However, it would be relevant for the paper to discuss the existence of other (non-military) sources of exogenous shocks to expenditure. The issue matters because, given that the parameters of interest are estimated by OLS, consistency requires zero covariance between the “news” variable and the error term of the equation, and this assumption can be violated if such non-military shocks exist and are correlated with the military “news” (see the formulation below).
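
Written out, with $x_t$ collecting the controls, the horizon-$h$ projection and the omitted-variable concern are:

\[
y_{t+h} = \alpha_h + \beta_h\,\mathrm{news}_t + \gamma_h' x_t + \varepsilon_{t+h},
\qquad
\operatorname{plim}\hat{\beta}_h = \beta_h + \frac{\operatorname{Cov}(\mathrm{news}_t,\varepsilon_{t+h})}{\operatorname{Var}(\mathrm{news}_t)},
\]

where the bias expression holds after partialling out $x_t$. Any non-military spending shock omitted from the regression and correlated with the military news series pushes the covariance term away from zero and biases the estimated multiplier.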

Overall I think this is a very interesting paper, both for the results it finds and for the construction of the historical data. I found the results very puzzling and therefore a strong motivation to keep trying to understand the relationship between GDP and public expenditure.

Sunbeam gets toasted

Accounting fraud, business failure and creative auditing: A micro-analysis of the strange case of Sunbeam Corp.

Marisa Agostini (marisa.agostini@unive.it) and Giovanni Favero (gfavero@unive.it)
(Both at Department of Management, Università Ca’ Foscari Venezia, Italy)

Abstract
This paper puts under the magnifying glass the path to failure of Sunbeam Corp. and emphasizes the reasons of its singularity and exceptionality. This corporate case emerges as an outlier from the analysis of the US fraud cases mentioned by WebBRD: the consideration of the time between fraud disclosure and the final bankruptcy reveals the presence of an exceptional sampled case. In fact, the maximum value of this temporal variable is estimated equal to 840 days: it is really far from the range estimated by the survival function for the entire sample and it refers to Sunbeam Corp. Different hypotheses are evaluated in the paper, starting from the consideration of Sunbeam’s history peculiarities: fraud duration, scapegoating and creative auditing represent the three main points of analysis. Starting from a micro-analysis of this case that the SEC investigated in depth and this work describes in detail, inputs for future research are then provided about more general problems concerning auditing and accounting fraud.

URL: http://econpapers.repec.org/paper/vnmwpdman/25.htm

Review by Masayoshi Noguchi

This paper was distributed by NEP-HIS on 30 September 2012. It was also distributed by other NEP reports, namely Accounting (nep-acc), Heterodox Microeconomics (nep-hme) and Informal & Underground Economics (nep-iue).

Agostini and Favero use the case study method to raise questions and considerations concerning accounting fraud. Their analytical focus is the company now named Sunbeam Products Inc. It was established in 1897 as the Chicago Flexible Shaft Company by John K. Stewart and Thomas Clark. Its first ‘Sunbeam’ branded household appliance, the Princess Electric Iron, was launched in 1910, and following the success of this line of products the company officially changed its name to ‘Sunbeam’ in 1946.

Wikipedia informs us that ‘in 1996, Albert J. Dunlap was recruited to be CEO and Chairman of what was then called Sunbeam-Oster. In 1997, Sunbeam reported massive increases in sales for its various backyard and kitchen items. Dunlap purchased controlling interest in Coleman and Signature Brands (acquiring Mr. Coffee and First Alert) during this time. Stock soared to $52 a share. However, industry insiders were suspicious. The sudden surge in demand for barbecues did not hold up under scrutiny. An internal investigation revealed that Sunbeam was in severe crisis, and that Dunlap had encouraged violations of accepted accounting rules. Dunlap was fired, and under a new CEO, Jerry W. Levin, the company filed for Chapter 11 bankruptcy protection in 2001. In 2002, Sunbeam emerged from bankruptcy as American Household, Inc. (AHI), a privately held company. Its former household products division became the subsidiary Sunbeam Products, Inc. Then AHI was purchased in September 2004 by the Jarden Corporation, of which it is now a subsidiary.’

Al ‘Chainsaw’ Dunlap

Agostini and Favero look at this situation in detail while aiming to show ‘how the specific fraudulent strategy of performance overstatement adopted in the Sunbeam case can be connected to the peculiar modality of its disclosure, allowing to scapegoat the CEO, to (temporarily) discharge the board and the company of any responsibility, and to pursue a business recovery’ (p. 4).

By examining what they consider an exceptional case, Agostini and Favero aim to avoid oversimplification and ‘not to sacrifice knowledge of individual elements to wider generalization’, coupling this with the informed use of ‘all forms of abstraction since minimal facts and individual cases can serve to reveal more general phenomena’ (p. 4). The reason for examining this single outlier case is that, in their view, ‘“deviant cases” follow a peculiar path-dependent logic where early contingent events set cases on an historical trajectory of change that diverges from theoretical expectations’ (p. 2). In so doing, Agostini and Favero aim to ‘enlighten causal mechanisms which are too complex to emerge from standard empirical studies based on statistical approaches’ (p. 4).

The case documents the very aggressive management strategies of Dunlap. As mentioned, these led to fraudulent financial reporting through the misstatement of significant amounts in the financial accounts. In other words, Dunlap was found to have manipulated accounting numbers in numerous ways, skilfully covering these up through the acquisition of new subsidiaries. Measures were also taken to assure the survival of the company after revelations of the fraud emerged. But in spite of scapegoating, rather tyrannical management and the extremely long duration of the fraud, the company finally reached bankruptcy.

Normally auditors are integral (either by action or omission) to the process leading to accounting fraud (see for instance my work with Bernardo on the auditing of building societies here). But the case of Sunbeam was exceptional in the sense that its auditor, Arthur Andersen, avoided being involved in the crisis (though shortly afterwards it was intimately involved in the infamous Enron case). Agostini and Favero point out that ‘[t]his represents another item of exceptionality in Sunbeam Corp. case where there is a shift from the auditors to the CEO of the scapegoat function’ (p. 9). They further add that it was indeed the auditors’ peculiar behaviour which led to Dunlap being ‘the scapegoat’ (p. 9).

From the late 1940s to 1997, the upscale toaster market was dominated by the ‘Radiant Control Toaster’ from Sunbeam.

To explore the point above the authors propose the concept of ‘creative auditing’ as a counterpart to ‘creative accounting’ or ‘earnings management’. According to Agostini and Favero, ‘auditors (agents) may use their professional knowledge, the asymmetrical information and the flexibility inside auditing rules to distract the principals’ attention (owners, shareholders, investors, etc.) from news which will not be welcome’ (p. 14). They argue that ‘auditors working with management of the company are privy to essential information that can be used in a legal, but not proper way, to maximize their own interests at the expense of the principal’ (p. 14), citing Brown (2007, p. 178): ‘Prior to scandal, many assumed that either legal liability or reputational concerns would prevent the large audit firms from engaging in collusion with their clients. Enron and the many frauds that followed have undermined these assumptions’.

In spite of having effectively discovered the accounting fraud at Sunbeam, the partner in charge at Arthur Andersen, Phillip E. Harlow, signed a clean audit report on the grounds that ‘the part, which was not presented fairly, was not material, so it did not matter’ (p. 22). Agostini and Favero further claim:

After Sunbeam fraud disclosure, Mr. Harlow was supported by its partners at Arthur Andersen, which stated that this case involved not fraud, but “professional disagreements about the application of sophisticated accounting standards.” As emphasized by The New York Times (May 18, 2001), “in the typical accounting fraud case, the auditors say they were fooled. Here, at least according to the S.E.C., the auditors discovered a substantial part of what the commission calls sham profits”. Moreover, stating the immateriality of a part of improper profits, they used their professional knowledge, the asymmetrical information and the flexibility inside auditing rules to distract other stakeholders’ attention from news which will not be welcome.

However, the above indication only refers to the technical nature of the accounting fraud committed and the professional judgment exercised as to the degree of materiality. In order to consider the case of Sunbeam an incident of creative auditing (as Agostini and Favero claim it is), the elucidation of the supposed motives for Arthur Andersen’s participation in the fraudulent scheme is insufficient. An improvement on this point would be desirable. Nevertheless, one can fully agree with their view that the role of auditors in the financial reporting of business enterprises should be reexamined. The paper is thought-provoking in this sense.

Must we question corporate rule?

Financialization of the U.S. corporation: what has been lost, and how it can be regained

William Lazonick (University of Massachusetts-Lowell)

Abstract
The employment problems that the United States now faces are largely structural. The structural problem is not, however, as many economists have argued, a labor-market mismatch between the skills that prospective employers want and the skills that potential workers have. Rather the employment problem is rooted in changes in the ways that U.S. corporations employ workers as a result of “rationalization”, “marketization”, and “globalization”. From the early 1980s rationalization, characterized by plant closings, eliminated the jobs of unionized blue-collar workers. From the early 1990s marketization, characterized by the end of a career with one company as an employment norm, placed the job security of middle-aged and older white-collar workers in jeopardy. From the early 2000s globalization, characterized by the movement of employment offshore, left all members of the U.S. labor force, even those with advanced educational credentials and substantial work experience, vulnerable to displacement. Nevertheless, the disappearance of these existing middle-class jobs does not explain why, in a world of technological change, U.S. business corporations have failed to use their substantial profits to invest in new rounds of innovation that can create enough new high value-added jobs to replace those that have been lost. I attribute that organizational failure to the financialization of the U.S. corporation. The most obvious manifestation of financialization is the phenomenon of the stock buyback, with which major U.S. corporations seek to manipulate the market prices of their own shares. For the decade 2001-2010 the companies in the S&P 500 Index expended about $3 trillion on stock repurchases. The prime motivation for stock buybacks is the stock-based pay of the corporate executives who make these allocation decisions. The justification for stock buybacks is the erroneous ideology, inherited from the conventional theory of the market economy, that, for superior economic performance, companies should be run to “maximize shareholder value”. In this essay I summarize the damage that this ideology is doing to the U.S. economy, and I lay out a policy agenda for restoring equitable and stable economic growth.

URL: http://econpapers.repec.org/paper/pramprapa/42307.htm

Review by Bernardo Bátiz-Lazo

As I have noted before (see Bátiz-Lazo and Reese, 2010), the term financialisation has been coined to encompass the greater involvement of countries, businesses and people with financial markets and, in particular, increasing levels of debt (i.e. leverage). For instance, Manning (2000) has used the term to describe micro-phenomena such as the growth of personal leverage amongst US consumers.

In their path-breaking study, Froud et al. (2006) use the term to describe how large, non-financial, multinational organisations come to rely on financial services rather than their core business for sustained profitability. They document a pattern of accumulation in which profit making occurs increasingly through financial channels rather than through trade and commodity production.

In turn, in the preface to his edited book, Epstein (2005) notes the use of the term to denote the ascendancy of “shareholder value” as a mode of corporate governance, or the growing dominance of capital market financial systems over bank-based financial systems.

An alternative view is offered by American writer and commentator Kevin Phillips, who coined a sociological and political interpretation of financialisation as “a process whereby financial services, broadly construed, take over the dominant economic, cultural, and political role in a national economy” (Phillips 2006, 268). The rather narrow point I am making here, and which I will not elaborate on for reasons of space, is that the essential nature of financialisation is highly contested and in need of attention.

Sidestepping conceptual issues (and indeed ignoring a large number of contributors to the area), in this paper William Lazonick adopts a view of financialization cum corporate governance and offers broad-based arguments (many based on his own previous research) to explore a relatively recent phenomenon: the demise of the middle class in the US in the late 20th century. In this sense, the abstract is spot on and the paper “does what it says on the can”. Yet purists would consider this too recent to be history. Indeed, the paper was distributed by nep-hme (heterodox microeconomics) on 2012-11-11 rather than by NEP-HIS. This was out of neglect rather than design, but it goes to show that the keywords and abstract were initially not on my radar.

William Lazonick

Others may find it easy to poke holes in the broad-stroke arguments that support Lazonick’s thesis. Yet the article was honoured with the 2010 Henrietta Larson Article Award for the best paper in the Business History Review and was part of a conference organised by Lazonick at the Ford Foundation in New York City on December 6-7, 2012 (see the program at the Financial Institutions for Innovation and Development website).

Lazonick points to the erosion of middle-class jobs in a period of rapid technological change, at a time when others question whether the rate of innovation can continue (see for instance The great innovation debate). Lazonick implicitly considers our age the most innovative ever. But his argument is that the way in which the latest wave of innovation was financed is at the heart of the accompanying ever-growing economic inequality.

So for all its shortcomings, Lazonick offers a thought-provoking paper. One that challenges business historians to link with discussions elsewhere, in particular in corporate governance, political economy and the sociology of finance. It can, potentially, launch a more critical stream of literature in business history.

References

Bátiz-Lazo, B. and Reese, C. (2010) ‘Is the future of the ATM past?’ in Alexandros-Andreas Kyrtsis (ed.) Financial Markets and Organizational Technologies: System Architectures, Practices and Risks in the Era of Deregulation, Basingstoke: Palgrave-Macmillan, pp. 137-65.

Epstein, G. A. (2005). Financialization and The World Economy. Cheltenham, Edward Elgar Publishing.

Froud, J., S. Johal, A. Leaver and K. Williams (2006). Financialization and Strategy: Narrative and Numbers. London, Routledge.

Manning, R. D. (2000). Credit Card Nation. New York, Basic Books.

Phillips, K. (2006). American Theocracy: The Peril and Politics of Radical Religion, Oil, and Borrowed Money in the 21st Century. London, Penguin.

Money for Nothing? Banking Failure and Public Funds in Michigan in the early 1930s

The Effects of Reconstruction Finance Corporation Assistance on Michigan’s Banks’ Survival in the 1930s

Charles W. Calomiris (cc374@columbia.edu), Joseph R. Mason (joseph.r.mason@gmail.com), Marc Weidenmier (marc_weidenmier@claremontmckenna.edu), Katherine Bobroff (klbobroff@gmail.com)

URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:18427&r=his

Abstract

This paper examines the effects of the Reconstruction Finance Corporation’s (RFC) loan and preferred stock programs on bank failure rates in Michigan during the period 1932-1934, which includes the important Michigan banking crisis of early 1933 and its aftermath. Using a new database on Michigan banks, we employ probit and survival duration analysis to examine the effectiveness of the RFC’s loan program (the policy tool employed before March 1933) and the RFC’s preferred stock purchases (the policy tool employed after March 1933) on bank failure rates. Our estimates treat the receipt of RFC assistance as an endogenous variable. We are able to identify apparently valid and powerful instruments (predictors of RFC assistance that are not directly related to failure risk) for analyzing the effects of RFC assistance on bank survival. We find that the loan program had no statistically significant effect on the failure rates of banks during the crisis; point estimates are sometimes positive, sometimes negative, and never estimated precisely. This finding is consistent with the view that the effectiveness of debt assistance was undermined by some combination of increasing the indebtedness of financial institutions and subordinating bank depositors. We find that RFC’s purchases of preferred stock – which did not increase indebtedness or subordinate depositors – increased the chances that a bank would survive the financial crisis. We also perform a parallel analysis of the effects of RFC preferred stock assistance on the loan supply of surviving banks. We find that RFC assistance not only contributed to loan supply by reducing failure risk; conditional on bank survival, RFC assistance is associated with significantly higher lending by recipient banks from 1931 to 1935.

Review by Sebastian Fleitas

The systemic risk of bank failures, and its macroeconomic consequences, led the Fed to take action when some banks started to fail in 2008. How much money did the Fed give to the banks in 2008? And, even more important, did this money help to avoid bank failures? The latter seems to be the key question every time a government implements a program to try to stem bank failures and to reduce the economic cost of financial disintermediation.

Detroit skyline, circa 1930

The paper by Calomiris, Mason, Weidenmier and Bobroff, distributed by NEP-HIS on October 6th, 2012, assesses the success of a public support program aimed at banks in financial distress: the assistance provided by the Reconstruction Finance Corporation (RFC), a government-sponsored enterprise, to Michigan’s banks in the 1930s.

Calomiris and friends offer a very interesting description of the timing of the crisis and a regression analysis of the impact of RFC assistance. The period of analysis, from January 1932 through December 1934, covers two sub-periods: a first, in which bank failures occurred sporadically, and a second, in which failures were concentrated and coincided with regional and national panics.

The banking crisis of 1933 in Michigan sits in the middle of the period of analysis. This is a very important episode, both because it can be seen as a prelude to the national banking disaster and because Michigan hosted the automobile industry, an industry on the rise and of future importance for the national economy.

The role of the RFC changed between the two sub-periods. During the first, the RFC’s main tool was to advance money to banks on loan. The risk involved in these loans was mitigated through their short duration, strict collateralization rules and high interest rates. Although these rules protected the RFC from losses, they also limited the effectiveness of its lending policy. However, on March 9, 1933 Congress passed an act altering the original mandate, allowing the RFC to purchase preferred stock in financial institutions that were considered likely to survive. This opened the possibility for RFC assistance to be more effective in the second sub-period than in the first.

1940 Reconstruction Finance Corporation RFC Cartoon

Econometric estimates then try to identify the effect of RFC assistance. The concern is that, in light of an increasing rate of bank failures, the federal government may have decided to offer support precisely to banks at greater risk of failure. In this sense, the dummy variable for RFC assistance is endogenous, and this problem has to be addressed in order to estimate the effect consistently. To deal with it, the authors use two different estimation techniques and two sets of instruments. First, they use instruments that capture the correspondent relationships of each bank: the extent to which the bank was important within the national banking network and its correspondent relationships with Chicago and New York. Second, they include county-specific characteristics that might have affected RFC assistance without affecting bank failure risk (a sketch of this two-stage logic follows below).
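
As an illustration of the two-stage logic referred to above, here is a minimal control-function sketch in Python. All variable names are hypothetical, and the paper’s actual estimators — probit and survival-duration models with these instruments, in their exact specifications — are not reproduced here:

```python
import numpy as np
import statsmodels.api as sm

def control_function_probit(failed, rfc, instruments, controls):
    """Two-stage 'control function' treatment of an endogenous 0/1
    assistance variable (illustrative sketch, hypothetical names):
    failed      -- 0/1 bank failure indicator
    rfc         -- 0/1 receipt of RFC assistance (endogenous)
    instruments -- e.g. correspondent-network measures, county traits
    controls    -- bank-level covariates assumed exogenous
    """
    # First stage: project RFC assistance on instruments and controls
    X1 = sm.add_constant(np.column_stack([instruments, controls]))
    first = sm.OLS(rfc, X1).fit()
    resid = rfc - first.fittedvalues  # variation the instruments don't explain
    # Second stage: probit of failure on assistance, controls and the
    # first-stage residual; the residual absorbs the endogenous component,
    # and its significance is itself a test of endogeneity
    X2 = sm.add_constant(np.column_stack([rfc, controls, resid]))
    second = sm.Probit(failed, X2).fit(disp=0)
    return first, second
```

The identification claim is then that the instruments move the assistance variable (a strong first stage) without entering the failure equation directly.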

The authors conclude that the loans from the RFC did not mitigate the risk of bank failure, but that recapitalization (in the form of purchases of preferred stock) did increase the likelihood of bank survival. The reasons why preferred stock assistance was more effective include: a) unlike loans, it increased neither the bank’s indebtedness nor its liquidity risk and collateral requirements; b) the RFC was selective when choosing who was included in the program; and c) the RFC was able to prevent abuse by assisted banks. In general, the authors conclude that these results suggest that, during a banking crisis, effective assistance requires the government to take a significant part of the risk of bank failure.

January 1931, Chester Garde

The empirical estimates in this paper concur with previous results in the literature. But by incorporating Michigan this paper offers added granularity, and it also improves on the econometric techniques used to address the effect of the RFC on bank failure. However, I think the paper could be improved by a more thorough discussion of the instruments used, in terms of why they can be assumed to be related to RFC assistance and not directly related to bank failure. This is especially important because the results of the first-stage estimations cast some doubt on the suitability of some of the instruments selected. Regarding the first set of instruments, one variable indicates the connections of a bank within the national banking network and another its relationships with Chicago and New York. However, in the first stage the effects of these two variables on RFC assistance have different signs, and their statistical significance depends on the period and specification of the model. A second concern is the variable “net due to banks over total assets”, which is not significant in any first-stage estimate. Banks with more creditors or debtors could be more important to save, but it could also be the case that such banks are more indebted to other banks because they are facing problems, and thus at greater risk of failure. Regarding the second set of instruments, these variables generally fail to be consistently significant, and the mechanisms through which they affect the decisions of the RFC without affecting the hazard of failure are not completely clear. Was the main proportion of the banks’ business concentrated at the county level at the time? Did the political importance of the county matter for allocating assistance, even when the authors say that manipulation of the RFC by Congress or the Administration was mitigated? Is the unemployment rate in the county in 1930 unrelated to the risk of bank failure during the crisis? A deeper consideration of these issues would help to explain why these variables are good instruments and why the results of the first-stage estimations look the way they do.

To sum up, this paper provides new evidence about the role of the RFC during the important period of 1932-1934. Furthermore, it addresses an issue that is relevant today: the efficiency of public funds in avoiding bank failures. The general conclusion the authors reach is that effective assistance requires the government to assume a significant share of the risk of bank failure. As in the thirties, governments today have spent a great deal of money trying to avoid the systemic risks associated with the failure of some banks. This and other related papers in the literature can help us to understand the effects of a banking crisis on the real sector and the efficiency of public policies that try to reduce its negative impacts. This particular historical experience can not only shed light on what happened on that occasion but also give us insights for approaching such situations when they appear again, in particular to design better economic policies.