Monthly Archives: January 2009

Economic Teaching at the Universities

Daily Article | Posted on 1/30/2009

From Planning for Freedom. Originally published in The Freeman, April 7, 1952.

A few years ago, a House of Representatives Subcommittee on Publicity and Propaganda in the Executive Departments, under the chairmanship of Representative Forest A. Harness, investigated federal propaganda operations. On one occasion the committee had as a witness a government-employed doctor. When asked if his public speeches throughout the country presented both sides of the discussion touching compulsory national health insurance, this witness answered, “I don’t know what you mean by both sides.”

This naive answer throws light on the state of mind of people who proudly call themselves progressive intellectuals. They simply do not imagine that any argument could be advanced against the various schemes they are suggesting. As they see it, everybody, without asking questions, must support every project aiming at more and more government control of all aspects of the citizen’s life and conduct. They never try to refute the objections raised against their doctrines. They prefer, as Mrs. Eleanor Roosevelt recently did in her column, to call dishonest those with whom they do not agree.

Many eminent citizens hold educational institutions responsible for the spread of this bigotry. They sharply criticize the way in which economics, philosophy, sociology, history, and political science are taught at most American universities and colleges. They blame many teachers for indoctrinating their students with the ideas of all-around planning, socialism, and communism. Some of those attacked try to deny any responsibility. Others, realizing the futility of this mode of defense, cry out about “persecution” and infringement of “academic freedom.”

Yet what is unsatisfactory with present-day academic conditions — not only in this country but in most foreign nations — is not the fact that many teachers are blindly committed to Veblenian, Marxian, and Keynesian fallacies, and try to convince their students that no tenable objections can be raised against what they call progressive policies; the mischief is rather to be seen in the fact that the statements of these teachers are not challenged by any criticism in the academic sphere. The pseudoliberals monopolize the teaching jobs at many universities. Only men who agree with them are appointed as teachers and instructors of the social sciences, and only textbooks supporting their ideas are used. The essential question is not how to get rid of inept teachers and poor textbooks. It is how to give the students an opportunity to hear something about the ideas of economists rejecting the tenets of the interventionists, inflationists, socialists, and communists.

1. Methods of the “Progressive” Teachers

Let us illustrate the matter by reviewing a recently published book. A professor of Harvard University edits, with the support of an advisory committee whose members are all, like himself, professors of economics at Harvard University, a series of textbooks, the “Economics Handbook Series.” In this series there was published a volume on socialism. Its author, Paul M. Sweezy, opens his preface with the declaration that the book “is written from the standpoint of a Socialist.” The editor of the series, Professor Seymour E. Harris, in his introduction, goes a step further in stating that the author’s “viewpoint is nearer that of the group which determines Soviet policy than the one which now [1949] holds the reins of government in Britain.” This is a mild description of the fact that the volume is from the first to the last page an uncritical eulogy of the Soviet system.

Now it is perfectly legitimate for Dr. Sweezy to write such a book and for professors to edit and to publish it. The United States is a free country — one of the few free countries left in the world — and the Constitution and its amendments grant to everybody the right to think as he likes and to have published in print what he thinks. Sweezy has, in fact, unwittingly rendered a great service to the discerning public. For his volume clearly shows to every judicious reader conversant with economics that the most eminent advocates of socialism are at their wits’ end, do not know how to advance any plausible argument in favor of their creed, and are utterly at a loss to refute any of the serious objections raised against it.

But the book is not designed for perspicacious scholars well acquainted with the social sciences. It is, as the editors’ introduction emphasizes, written for the general reader in order to popularize ideas and especially also for use in the classroom. Laymen and students who know nothing or very little about the problems involved will draw all their knowledge about socialism from it. They lack the familiarity with theories and facts which would enable them to form an independent opinion about the various doctrines expounded by the author. They will accept all his theses and descriptions as incontestable science and wisdom. How could they be so presumptuous as to doubt the reliability of a book, written, as the introduction says, by an “authority” in the field and sponsored by a committee of professors of venerable Harvard!

The shortcoming of the committee is not to be seen in the fact that they have published such a book, but in the fact that their series contains only this book about socialism. If they had, together with Dr. Sweezy’s book, published another volume critically analyzing communist ideas and the achievements of socialist governments, nobody could blame them for disseminating communism. Decency should have impelled them to give the critics of socialism and communism the same chance to represent their views to the students of universities and colleges as they gave to Dr. Sweezy.

On every page of Dr. Sweezy’s book, one finds really amazing statements. Thus, in dealing with the problem of civil rights under a socialist regime, he simply equates the Soviet constitution with the American constitution. Both, he declares, are

generally accepted as the statement of the ideals which ought to guide the actions of both the state and the individual citizen. That these ideals are not always lived up to — either in the Soviet Union or in the United States — is certainly both true and important; but it does not mean that they do not exist or that they can be ignored, still less that they can be transformed into their opposite.

Leaving aside most of what could be advanced to explode this reasoning, there is need to realize that the American constitution is not merely an ideal but the valid law of the country. To prevent it from becoming a dead letter there is an independent judiciary culminating in the Supreme Court. Without such a guardian of law and legality, any law can be and is ignored and transformed into its opposite. Did Dr. Sweezy never become aware of this nuance? Does he really believe that the millions languishing in Soviet prisons and labor camps can invoke habeas corpus?

To say it again, Dr. Sweezy has the right — precisely because the American Bill of Rights is not merely an ideal, but an enforced law — to transform every fact into its opposite. But professors who hand out such praise of the Soviets to their students without informing them about the opinions of the opponents of socialism must not raise the cry of witch-hunt if they are criticized.

Professor Harris, in his introduction, contends that “those who fear undue influence of the present volume may be cheered by a forthcoming companion volume on capitalism in this series written by one as devoted to private enterprise as Dr. Sweezy is to socialism.” This volume, written by Professor David McCord Wright of the University of Virginia, has been published in the meantime. It deals incidentally also with socialism and tries to explode some minor socialist fallacies, such as the doctrine of the withering away of the state, a doctrine which even the most fanatical Soviet authors relegate today to an insignificant position. But it certainly cannot be considered a satisfactory substitute, or a substitute at all, for a thoroughly critical examination of the whole body of socialist and communist ideas, and the lamentable failure of all socialist experiments.

Some of the teachers try to refute the accusations of ideological intolerance leveled against their universities and to demonstrate their own impartiality by occasionally inviting a dissenting outsider to address their students. This is mere eyewash. One hour of sound economics against several years of indoctrination of errors! The present writer may quote from a letter in which he declined such an invitation:

What makes it impossible for me to present the operation of the market economy in a short lecture — whether fifty minutes or twice fifty minutes — is the fact that people, influenced by the prevailing ideas on economic problems, are full of erroneous opinions concerning this system. They are convinced that economic depressions, mass unemployment, monopoly, aggressive imperialism and wars, and the poverty of the greater part of mankind, are caused by the unhampered operation of the capitalist mode of production.

If a lecturer does not dispel each of these dogmas, the impression left with the audience is unsatisfactory. Now, exploding any one of them requires much more time than that assigned to me in your program. The hearers will think: “He did not refer at all to this” or “He made only a few casual remarks about that.” My lecture would rather confirm them in their misunderstanding of the system…. If it were possible to expound the operation of capitalism in one or two short addresses, it would be a waste of time to keep the students of economics for several years at the universities. It would be difficult to explain why voluminous textbooks have to be written about this subject…. It is these reasons that impel me reluctantly to decline your kind invitation.

2. The Alleged Impartiality of the Universities

The pseudoprogressive teachers excuse their policy of barring all those whom they smear as old-fashioned reactionaries from access to teaching positions by calling these men biased.

The reference to bias is quite out of place if the accuser is not in a position to demonstrate clearly in what the deficiency of the smeared author’s doctrine consists. The only thing that matters is whether a doctrine is sound or unsound. This is to be established by facts and deductive reasoning. If no tenable arguments can be advanced to invalidate a theory, it does not in the least detract from its correctness if the author is called names. If, on the other hand, the falsity of a doctrine has already been clearly demonstrated by an irrefutable chain of reasoning, there is no need to call its author biased.

A biographer may try to explain the manifestly exploded errors of the person whose life he is writing about by tracing them back to bias. But such psychological interpretation is immaterial in discussions concerning the correctness or falsity of a theory. Professors who call those with whom they disagree biased merely confess their inability to discover any fault in their adversaries’ theories.

Many “progressive” professors have for some time served in one of the various alphabetical government agencies. The tasks entrusted to them in the bureaus were, as a rule, ancillary only. They compiled statistics and wrote memoranda which their superiors, either politicians or former managers of corporations, filed without reading. The professors did not instill a scientific spirit into the bureaus. But the bureaus gave them the mentality of authoritarianism. They distrust the populace and consider the State (with a capital S) as the God-sent guardian of the wretched underlings. Only the government is impartial and unbiased. Whoever opposes any expansion of governmental powers is, by this token, unmasked as an enemy of the commonweal. It is manifest that he “hates” the state.

Now if an economist is opposed to the socialization of industries, he does not “hate” the state. He simply declares that the commonwealth is better served by private ownership of the means of production than by public ownership. Nobody could pretend that experience with nationalized enterprises contradicts this opinion.

Another typically bureaucratic prejudice which the professors acquired in Washington is to call the attitudes of those opposing government controls and the establishment of new offices “negativism.” In the light of this terminology all that has been achieved by the American individual enterprise system is only “negative”; the bureaus alone are “positive.”

There is, furthermore, the spurious antithesis “plan or no plan.” Only totalitarian government planning that reduces the citizens to mere pawns in the designs of the bureaucracy is called planning. The plans of the individual citizens are simply “no plans.” What semantics!

3. How Modern History Is Taught

The progressive intellectual looks upon capitalism as the most ghastly of all evils. Mankind, he contends, lived rather happily in the good old days. But then, as a British historian said, the Industrial Revolution “fell like a war or a plague” on the peoples. The “bourgeoisie” converted plenty into scarcity. A few tycoons enjoy all luxuries. But, as Marx himself observed, the worker “sinks deeper and deeper” because the bourgeoisie “is incompetent to assure an existence to its slave within his slavery.”

Still worse are the intellectual and moral effects of the capitalist mode of production. There is but one means, the progressive believes, to free mankind from the misery and degradation produced by laissez-faire and rugged individualism, viz., to adopt central planning, the system with which the Russians are successfully experimenting. It is true that the results obtained by the Soviets are not yet fully satisfactory. But these shortcomings were caused only by the peculiar conditions of Russia. The West will avoid the pitfalls of the Russians and will realize the welfare state without the merely accidental features that disfigured it in Russia and in Hitler’s Germany.

Such is the philosophy taught at most present-day schools and propagated by novels and plays. It is this doctrine that guides the actions of almost all contemporary governments. The American “progressive” feels ashamed of what he calls the social backwardness of his country. He considers it a duty of the United States to subsidize foreign socialist governments lavishly in order to enable them to go on with their ruinous socialist ventures. In his eyes, the real enemy of the American people is big business, that is, the enterprises which provide the American common man with the highest standard of living ever reached in history. He hails every step forward on the road toward all-around control of business as progress. He smears all those who hint at the pernicious effects of waste, deficit spending, and capital decumulation as reactionaries, economic royalists, and Fascists. He never mentions the new or improved products which business almost every year makes accessible to the masses. But he goes into raptures about the rather questionable achievements of the Tennessee Valley Authority, the deficit of which is made good out of taxes collected from big business.

The most infatuated expositors of this ideology are to be found in the university departments of history, political science, sociology, and literature. The professors of these departments enjoy the advantage, in referring to economic issues, that they are talking about a subject with which they are not familiar at all. This is especially flagrant in the case of historians. The way in which the history of the last 200 years has been treated is really a scandal. Only recently, eminent scholars have begun to unmask the crude fallacies of Lujo Brentano, the Webbs, the Hammonds, Tawney, Arnold Toynbee, Elie Halevy, the Beards, and other authors. At the last meeting of the Mont Pelerin Society, the occupant of the chair of economic history at the London School of Economics, Professor T.S. Ashton, presented a paper in which he pointed out that the commonly accepted views of the economic developments of the 19th century “are not informed by any glimmering of economic sense.” The historians tortured the facts when they concocted the legend that “the dominant form of organization under industrial capitalism, the factory, arose out of the demands, not of ordinary people, but of the rich and the rulers.”

The truth is that the characteristic feature of capitalism was and is mass production for the needs of the masses. Whenever the factory, with its methods of mass production by means of power-driven machines, invaded a new branch of production, it started with cheap goods for the broad masses. The factories turned to the production of more refined and therefore more expensive merchandise only at a later stage, when the unprecedented improvement which they had caused in the masses’ standard of living made it reasonable to apply the methods of mass production to better articles as well. Big business caters to the needs of the many; it depends exclusively upon mass consumption. In his capacity as consumer, the common man is the sovereign whose buying or abstention from buying decides the fate of entrepreneurial activities. The “proletarian” is the much-talked-about customer who is always right.

The most popular method of deprecating capitalism is to make it responsible for every condition which is considered unsatisfactory. Tuberculosis and, until a few years ago, syphilis, were called diseases of capitalism. The destitution of scores of millions in countries like India, which did not adopt capitalism, is blamed on capitalism. It is a sad fact that people become debilitated in old age and finally die. But this happens not only to salesmen but also to employers, and it was no less tragic in the precapitalistic ages than it is under capitalism. Prostitution, dipsomania, and drug addiction are all called capitalist vices.

Whenever people discuss the alleged misdeeds of the capitalists, a learned professor or a sophisticated artist refers to the high income of movie stars, boxers, and wrestlers. But who contribute more to these incomes, the millionaires or the “proletarians”?

It must be admitted that the worst excesses in this propaganda are not committed by professors of economics but by the teachers of the other social sciences, by journalists, writers, and sometimes even by ministers. But the source from which all the slogans of this hectic fanaticism spring is the teachings handed down by the “institutionalist” school of economic policies. All these dogmas and fallacies can be ultimately traced back to allegedly economic doctrines.

4. The Proscription of Sound Economics

The Marxians, Keynesians, Veblenians, and other “progressives” know very well that their doctrines cannot stand any critical analysis. They are fully aware of the fact that one representative of sound economics in their department would nullify all their teachings. This is why they are so anxious to bar every “orthodox” from access to the strongholds of their “un-orthodoxy.”

The worst consequence of this proscription of sound economics is the fact that gifted young graduates shun the career of an academic economist. They do not want to be boycotted by universities, book reviewers, and publishing firms. They prefer to go into business or the practice of law, where their talents will be fairly appreciated. It is mainly compromisers, who are not eager to find out the shortcomings of the official doctrine, who aspire to the teaching positions. There are few competent men left to take the place of the eminent scholars who die or reach the retirement age. Among the rising generation of instructors are hardly any worthy successors of such economists as Frank A. Fetter and Edwin W. Kemmerer of Princeton, Irving Fisher of Yale, and Benjamin M. Anderson of California.

There is but one way to remedy this situation. True economists must be given the same opportunity in our faculties which only the advocates of socialism and interventionism enjoy today. This is surely not too much to ask as long as this country has not yet gone totalitarian.



Stimulus package passes

January 29, 2009 WORLD

ECONOMY: The House passed an $819 billion economic stimulus package with all Republicans voting no, despite President Obama’s courtship | Emily Belz

WASHINGTON—The U.S. House of Representatives passed a massive economic stimulus package Wednesday, 244 to 188, at a cost of $819 billion, while the U.S. Senate hammered out its own version totaling $900 billion. All 177 House Republicans voted no.

House Republicans balked at supporting a bill they said had too much wasteful spending and too little input from the minority, while House Speaker Nancy Pelosi refused to allow any delays for Republican changes to the legislation, arguing that the economy’s wounds must be immediately staunched. Addressing accusations that she was ramrodding the legislation through, Pelosi said, “This legislation is long overdue.”

“The underlying bill, while it has some good provisions, has a lot of wasteful spending,” responded House Republican Leader John Boehner.

The bill contains tax cuts and a small portion of infrastructure spending, but it consists mostly of major government spending on items like climate-change research.

Several conservative Democrats defected in the vote as well, arguing that the spending would not stimulate the economy, but instead would raise the national deficit. Fiscally conservative Democrats who did vote for the bill considered it a necessary expenditure to jump-start the country out of a recession. The Obama administration sent a letter Tuesday addressing the concerns of these Blue Dog Democrats, promising to reinstate “pay-as-you-go” rules, which would prohibit spending that increases the deficit.

Obama had spent the last several days trying, unsuccessfully, to woo Republicans into supporting the bill, hoping the new Congress’ first piece of legislation would have bipartisan support—but he faced GOP legislators alienated by the maneuvering of the House Democratic leadership.

“It’s one thing to seek constructive input, but that clearly has not happened, judging by the legislation that has been written,” said Rep. Jerry Lewis, R-Calif., the ranking Republican on the House Appropriations Committee, which was largely responsible for the legislation. “This bill was largely written by two people . . . the speaker and my chairman [Rep. David Obey, D-Wis.]. That is a travesty, a mockery, a sham.”

Conservatives counted one victory in the bill: Democrats removed $200 million in funding for family planning services. Pelosi told ABC’s George Stephanopoulos on Sunday that the family planning spending would stimulate the economy because contraceptive services “reduce cost” for state governments. On Monday, President Obama called Democrat Henry Waxman, chair of the committee that inserted the provision in the House bill, and told him to strip it out. The next day, the Democrats removed the funding—a move that angered those on the left.

Cecile Richards, head of Planned Parenthood, has been issuing statements since Obama’s election, lauding the nation’s new leader as a defender of women’s rights. But Wednesday she wrote, “I’m stunned,” and called the move to remove family planning funding “a betrayal of millions of low-income women.”

The Senate’s version of the stimulus plan includes more tax cuts and provisions sponsored by Republicans—the president has said he will lobby to include Republican ideas in the final version of the bill.



Risky Business: Keynes, Moral Hazard, and the Economic Crisis

by Samuel Gregg

January 13, 2009   
If governments do not take moral hazard seriously, their response to the present recession may sow the seeds of a future economic crisis.


As the world’s financial markets continue to limp along under the burden of insufficient liquidity and amid ongoing doubts about many financial institutions’ basic solvency, governments are focusing on how to jumpstart their economies out of recession. In some quarters, tax cuts have been mentioned as part of a possible range of options. Far more governments, however, are opting for the type of interventionist policies traditionally associated with the economist John Maynard Keynes, whose writings in the 1930s revolutionized the way that most economists understood the very nature of economic science. Though many—if not most—of his ideas were discredited by the stagflation that crippled Western economies in the 1970s, Lord Keynes seems to be back in fashion today.

A little-discussed question, however, is whether Keynesian-inspired policies are more likely in the long run to actually foster one of the major causes of the current financial crisis. Among other things, Keynes is famous for his remark that “in the long run, we are all dead.” To be fair to Keynes, this comment from his Tract on Monetary Reform (1923) is invariably cited out of its original context. But there is no escaping the fact that Keynesian policies ignore a major factor underlying our present economic problems: moral hazard.

Moral hazard is a term commonly used to describe situations in which a person or institution is effectively insulated from the possible negative consequences of their choices. This insulation makes them more likely to take risks that they would not otherwise take, most notably with assets and capital entrusted to them by others. The greater the extent of the guarantee, the greater the risk of moral hazard.
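The incentive shift described here can be sketched as a toy expected-value calculation. The numbers and the `expected_return` helper below are hypothetical, chosen purely for illustration and not drawn from the article:

```python
# Toy sketch (hypothetical figures): how a loss guarantee can flip a
# rational actor's choice toward the riskier bet.

def expected_return(outcomes, loss_floor=None):
    """Probability-weighted return; loss_floor caps each loss (a bailout)."""
    total = 0.0
    for prob, ret in outcomes:
        if loss_floor is not None:
            ret = max(ret, loss_floor)  # guarantor absorbs losses below the floor
        total += prob * ret
    return total

safe  = [(1.0, 0.03)]                # a certain 3% return
risky = [(0.5, 0.20), (0.5, -0.30)]  # coin flip: +20% or -30%

# Without a guarantee, the risky bet has negative expected value (-5% vs +3%),
# so the safe choice wins ...
assert expected_return(risky) < expected_return(safe)

# ... but a bailout capping losses at -5% raises the risky bet's expected
# return to +7.5%, making excessive risk the "rational" choice.
assert expected_return(risky, loss_floor=-0.05) > expected_return(safe)
```

The stronger the floor under losses, the more the calculation tilts toward risk, which is the pattern the examples that follow illustrate in the cases of Fannie Mae, Freddie Mac, and limited liability.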

The mortgage lenders Fannie Mae and Freddie Mac are prominent examples of this problem. Implicit in their lending policies was the assumption that, as government-sponsored enterprises with lower capital requirements than private institutions, they could always look to the Federal government for assistance if an unusually high number of their clients defaulted. In a 2007 Wall Street Journal article, the Nobel Prize-winning economist Vernon Smith noted that both Fannie Mae and Freddie Mac were always understood as “implicitly taxpayer-backed agencies.” Hence they continued what are now recognized as their politically-driven lending policies until both suffered the ignominy of being placed in Federal conservatorship last September.

Consider, too, the legal protection of limited liability. On one level, this arguably encourages managers and investors to take full advantage of the massive wealth-creating potential often associated with high-risk endeavors. But at the same time, limited liability also tends to shelter these individuals from a major incentive not to take excessive risk: the prospect of personal bankruptcy. As observed in 1971 by another Nobel Laureate in economics, Kenneth Arrow, limited liability regulation creates incentives for people to do things that they might not do if they were subject to the provisions of unlimited liability. “The law,” Arrow wrote, “steps in and forces a risk shifting not created in the market-place.”

At the level of government policy, a prominent instance of moral hazard was what some call the “Greenspan doctrine” of 2002. This involved the U.S. Federal Reserve stating that, while it was powerless to prevent the emergence of asset bubbles (such as the dot-com and housing booms), the Federal Reserve would do everything that it could to soften the effects of an imploding bubble. This included providing investors with the option of selling their depreciated assets to the Federal Reserve at a time of crisis. Not surprisingly, the result was a surge in excessive risk-taking by investors confident that, if everything did not proceed as planned, they could recoup their losses at someone else’s expense. In his recent book, Fixing Global Finance (2008), the financial journalist Martin Wolf underlines “the distortions introduced by government guarantees to risk-taking.” These, he writes, “create an overwhelming incentive to privatize gains and socialize losses.”

In many respects, the “Big Three” American car companies are (barely) living examples of what sometimes happens to specific industries when the danger of moral hazard is underplayed. When the Carter Administration chose to rescue Chrysler in 1980, this action conveyed a message to the three Detroit-based car manufacturers: they could take the risk of producing cars that fewer and fewer consumers apparently wanted to buy; they could also risk refraining from confronting the serious inefficiencies introduced into their companies’ operations by years of automobile executives acquiescing in outlandish demands from the United Automobile Workers union. Why? Because if the car companies subsequently found themselves facing economic Armageddon, they had high expectations—based on an established policy precedent—that the Federal government would bail them out.

Thus, no one should have been surprised to find the chief executive officers of the now not-so-Big Three—accompanied by a legion of lobbyists and the ever-present UAW—appearing before the United States Congress in late 2008 requesting assistance in order to avert bankruptcy. Does anyone doubt that the next time the Big Three skirt the edge of insolvency, they will once again request protection from the consequences of bad decisions? Such is the logic of disdaining moral hazard.

While there is a great deal of literature on the economics of moral hazard, the same material contains curiously little reflection on why the adjective “moral” is attached to the word “hazard.” Indeed, when economists started studying the subject of moral hazard in the 1960s, their analysis rarely included an explicitly ethical dimension. For the most part, this remains true today. So why do we not simply describe these situations as instances of “risk hazard”?

It may be that the word “moral” reflects some innate, albeit largely unexpressed, awareness that there is something ethically questionable about creating situations in which people are severely tempted to make imprudent choices. To employ a loose analogy from the realm of moral theology, the one who creates “an occasion of sin” bears some indirect responsibility for the choices of the person tempted by this situation to do something very imprudent or just plain wrong.

If governments and businesses took moral hazard seriously, they would make an effort to identify those state and non-state structures, policies, and practices that create incentives for people to take excessive risks with their own and other people’s assets. They would then do what they could to minimize these instances of moral hazard. The economic price might be fewer booms. But economic growth over time would likely be steadier. The chances of mild or severe recessions would also be reduced.

This brings us back to the Keynesian policies that most governments are adopting to address the current crisis. In a 2007 Financial Times column, Larry Summers, a veteran of the Clinton administration and a prominent member of the Obama administration’s economic team, argued that we should beware of what he called “moral hazard fundamentalism.” This was, he said, “as dangerous as moral hazard itself.” By this, Professor Summers meant that ruling out significant government economic intervention on the grounds that it might encourage moral hazard would itself be irresponsible.

The problem is that the Keynesian-interventionist outlook involves, by necessity, a degree of systematic denial of the reality of moral hazard. In an attempt to maintain full employment in perpetuity, Keynesian policies embrace measures ranging from keeping interest rates artificially low and partially nationalizing industries to engineering large public works programs. An unfortunate effect is that many businesses as well as ordinary consumers become somewhat insulated from many of the negative consequences of poor decisions and bad investments. As a result, some will become complacent, which is the road to economic stagnation. Others, however, are likely to take risks that become increasingly irresponsible over time until we find ourselves in situations similar to our current predicaments.

Of course, as long as human beings are fallible creatures, many will take excessive risks at different points in their lives. For some people, it will be with their marriage. Others will behave in an excessively risky manner with their own and others’ financial resources. As a consequence, some people will suffer losses. In a society where right reason and the ethic of loving one’s neighbor reign, individuals and communities should be ready to help those in genuine need. Law also has a potentially important role to play. As the legal philosopher John Finnis observes in Natural Law and Natural Rights (1980), a sound bankruptcy law can meet all the demands of justice—legal, commutative, and distributive—while respecting the dignity of all those affected, especially the dispossessed, but also those at fault. But neither we nor governments do anyone any favors by creating circumstances or incentives that encourage people to behave imprudently and recklessly in the worlds of finance and industry.

In his many works, the German economist Wilhelm Röpke noted that we should never forget the economic implications of one of the famous pensées of the seventeenth-century French mathematician, philosopher, and physicist Blaise Pascal: “L’homme n’est ni ange ni bête, et le malheur veut que qui veut faire l’ange fait la bête” [Man is neither an angel nor a brute, and the misfortune is that he who wants to make the angel makes the brute]. Röpke’s point was that basing economic policies on the pretense that humans are angels is likely to encourage some rather un-angelic behavior.

This is advice that governments should keep in mind if they do not want their response to the present recession to sow the seeds of a future economic crisis. Taking moral hazard seriously would be a welcome first step.

Samuel Gregg is Research Director at the Acton Institute. He has authored several books including On Ordered Liberty and his prize-winning The Commercial Society.



Hamilton’s Counterfeit Capitalism

Daily Article by |

As we await Bush’s replacement to straighten our wayward lives, it’s crucial to understand how we got here and why policy makers are so determined to do the wrong thing. Austrian economics explains why their policies are flawed, but no one with a voice seems to care. When history confirms that hands-off is the only effective and humane approach to a bust, and to prosperity generally, while hands-on brings ruination, why do governments today consider every option but free markets?

You could blame it on the heavy influence of Keynesianism, but we could ask why Keynes is so popular. He got away with blaming the market for the Depression of the 1930s. How can his followers do the same today after 70 more years of intense interventionism? To read today’s mainstream commentaries, you would think the free market slipped in the back door when no one was looking.

We know governments have always meddled in their economies, but the United States was supposed to be appreciably different. Did we begin with unhampered markets, witness their failure, then switch to a more “progressive” approach? At what point in our history did we begin promoting interventionism as an ideal?

Review the country’s founding, and it isn’t immediately obvious where the state’s heavy hand first made its mark. Nowhere in the Declaration, for example, do we find a footnote calling for high taxes and a central bank to support our inalienable rights. It’s hard to imagine that the patriots who fought at Breed’s Hill or Yorktown were inspired by visions of a massive redistribution of their wealth to special interests. But when we consider the Constitution’s “general welfare” clause, we start to wonder. Was it colonial shorthand for anything goes, provided sufficient political support?

Thomas Jefferson said no; Congress did not have unlimited powers to provide for the general welfare, “but were restrained to those specifically enumerated.” His political rival Alexander Hamilton, on the other hand, had two answers. As the author of Federalist #84, in which he referred to constitutions “as limitations of the power of government itself,” he might agree with Jefferson, at least publicly. But later, as Treasury secretary under Washington, he dropped the façade of government restraint. As long as any proposed legislation was “in the public good,” he considered it lawful under the Constitution.

As Thomas J. DiLorenzo tells us in his engaging new book, Hamilton’s Curse: How Jefferson’s Arch Enemy Betrayed the American Revolution — and What It Means for Americans Today,

Hamilton dismissed Jefferson’s strict constructionism and viewed the Constitution as a grant of powers rather than as a set of limitations. With clever manipulation of words, he believed, the Constitution could be used to approve virtually all government actions without involving the citizens at all.

In a recent article, DiLorenzo says that Hamilton “fought fiercely for his program of corporate welfare, protectionist tariffs, public debt, pervasive taxation, and a central bank run by politicians and their appointees out of the nation’s capital.”

Regarding the stipulation that policies must promote “the public good” or serve “the public interest” — phrases that Hamilton used countless times — DiLorenzo reminds us that “no government policy can be said to be in ‘the public interest’ unless it benefits every member of the public.” And how often does that happen? The “public interest” turns out to mean favored special interests.

A Revolutionary War hero and aide to General Washington, Hamilton began pushing for “a government of more power” in 1780; and in 1787, with the help of a gross distortion of Shays’s Rebellion, he brought state delegates together for the Constitutional Convention, the proceedings of which were closed to the public. According to an 1823 book by John Taylor of Caroline, which relied heavily on notes taken by Convention delegate Robert Yates, Hamilton moved quickly to consolidate all power in the hands of the executive branch, proposing a permanent president and senate.

Governors of the states would be appointed by the national government, and any state law that conflicted with the federal constitution would be considered void. What Hamilton wanted was a “great” national government much like the one from which Americans had recently seceded. Not surprisingly, the convention attendees rejected his proposal, establishing instead a confederation of free and independent states that delegated a few specific powers to the central government.

In 1802, Hamilton privately denounced the Constitution as “a frail and worthless fabric,” but by then he had already established the methodology for rendering it irrelevant, as DiLorenzo puts it, through the “lawyerly manipulation of its words.”

Hamilton’s Agenda

In his 1791 Report on Manufactures, Hamilton urged Congress to authorize the payment of “pecuniary bounties” (subsidies) to the manufacturers of certain items, on the basis of the general-welfare clause. The clause was “doubtless” intended to mean more than what it expressed, Hamilton argued, so it was up to Congress to decide what it meant and how to fund it. As DiLorenzo points out, generations of nationalist judges have used Hamilton’s argument to expand the government far beyond its constitutional limits.

In addition, the nation, not the states, had “full power of sovereignty,” Hamilton insisted. The states were “artificial beings” and thus it would make no sense to talk of their right of secession — though somehow those same artificial states had united to secede from England. Furthermore, Hamilton argued, the Constitution grants the government “implied powers,” one of which was to establish a national bank to promote a “paper circulation” and thereby extend loans in excess of its reserves of gold and silver. Hamilton said the Constitution’s commerce clause gave government the power to regulate all commerce, not just interstate commerce. A national bank, which would regulate commerce within states, was thereby authorized.

As DiLorenzo explains, Hamilton and his nationalist compatriots couldn’t make mercantilism work with a confederation of sovereign states. If northern states passed a high protectionist tariff, for example, imports would flood into the low-tariff southern states, then spread to the rest of the country. With a nationalist government, high tariffs could be imposed on all states, with some states effectively being taxed for the benefit of other states.

A Standing Army of Tax Collectors

Hamilton interpreted the Constitution’s “war powers” to mean “that unlimited resources should be given to the military, including conscription and a standing army in peacetime,” DiLorenzo writes. “He also wanted government to nationalize all industries related to the military, which in today’s world would mean virtually all industries.”

A standing army in times of peace was necessary to enforce government taxation. And what better way to make this point than to do a little enforcing? Thus, in 1794, Hamilton personally accompanied President Washington to western Pennsylvania with 13,000 conscripts and officers from the creditor aristocracy of the eastern seaboard to crush the so-called Whiskey Rebellion. After rounding up a score of tax rebels, some of them elderly veterans of the Revolutionary War, Hamilton drove them through the snow in chains all the way to Philadelphia, where he ordered local judges to issue guilty verdicts and sentence them to be hanged. Washington, who had returned home before the cross-state slog, pardoned the only two who were eventually convicted, leaving Hamilton bitterly disappointed.

Other areas of the American frontier — in Maryland, Virginia, North and South Carolina, Georgia, and the entire state of Kentucky — engaged in home whiskey production and fiercely opposed the new tax. Whiskey was not only a beloved consumable, it served as money, as a medium of exchange, and locals considered the tax as onerous as the king’s Stamp Tax of 1765. There was no rebellion in these areas because no one was willing to collect the taxes. Hamilton had picked the four counties in western Pennsylvania as his target because local officials were corrupt enough to help him.

The tax and the federal assault on the protestors put the spotlight on Hamilton’s “public interest” tactic. As Rothbard noted, “in keeping with Hamilton’s program, the tax bore more heavily on the smaller distilleries. As a result, many large distilleries supported the tax as a means of crippling their smaller and more numerous competitors.” The smaller distilleries were taxed by the gallon, while the larger ones paid a flat fee.
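The asymmetry Rothbard describes is simple arithmetic. A minimal sketch with hypothetical figures (illustrative only, not historical rates) shows why a flat fee favors high-volume distillers while a per-gallon levy weighs on small ones:

```python
# Hypothetical figures, chosen only to illustrate the incidence of the two
# tax schemes: small distillers pay by the gallon, large ones a flat fee.

def effective_rate_per_gallon(gallons, flat_fee=None, per_gallon_tax=None):
    """Tax paid per gallon produced under either scheme."""
    if flat_fee is not None:
        return flat_fee / gallons
    return per_gallon_tax

# Small producer: 1,000 gallons, taxed at 9 cents per gallon.
small = effective_rate_per_gallon(1_000, per_gallon_tax=0.09)
# Large producer: 100,000 gallons, flat fee of $2,000.
large = effective_rate_per_gallon(100_000, flat_fee=2_000)

assert large < small  # the big distiller's per-gallon burden is far lower
```

At these illustrative numbers the small producer pays 9 cents per gallon while the large one pays 2, so the tax itself handicaps the smaller, more numerous competitors, exactly the effect Rothbard notes the large distilleries welcomed.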

The hated tax also helped get Jefferson elected in 1800. The election resulted in a tie between Jefferson and Aaron Burr, and was thus thrown into the House. Selecting whom he considered the lesser of two evils, Hamilton used his influence to break the tie in favor of Jefferson, a deed that helped bring about his fatal duel with Burr in 1804.

But before Jefferson took office, DiLorenzo explains, Federalist President John Adams helped Hamilton’s cause when he appointed hundreds of “midnight judges” to the federal judiciary in the last 19 days of his administration. Though Jefferson got rid of most of them, he overlooked the appointment of Hamilton idolater John Marshall, who served as chief justice from 1801–1835.

“In Marbury v. Madison [1803] John Marshall essentially asserted that he, as chief justice, had power over all congressional legislation,” DiLorenzo writes. This was consistent with Federalist #78, where Hamilton said it belongs to the courts “to ascertain [the Constitution's] meaning as well as the meaning of any particular act proceeding from the legislative body.” Though Marbury v. Madison marks the birth of judicial review, the Hamiltonian idea that the government should be the sole judge of its own actions didn’t prevail until it was imposed by force of arms — during the War between the States.

Hamilton’s Disciples

Following Hamilton’s death, Kentucky senator Henry Clay, a wealthy slaveholder known as the “prince of hemp” for his huge hemp crops, joined Marshall and others in promoting statism and corporate privilege. As DiLorenzo tells us, Clay “spent decades, literally, advocating protectionist tariffs on foreign hemp; government-subsidized roads and canals, so that he could transport his hemp eastward; a nationalized bank that could inflate the economy.” Clay wanted to force complete self-sufficiency on the country and deprive Americans of the benefits of the international division of labor — a good deal for Kentucky hemp growers, but not for consumers.

Far from bringing about the harmonious relations Clay promised, his mercantilist agenda provoked sectional strife. The tariffs he championed “overwhelmingly favored northern states,” inasmuch as there was little manufacturing in the South even by the 1860s. “To southerners, tariffs were all cost and no benefit.” Protectionist tariffs, an essential part of Hamilton’s scheme for a mercantilist America, would be a prime mover of the forces for war.

When Lincoln became president, he moved quickly to implement Hamilton’s system of corporate welfare. Not even his bloody war deterred him. He and his majority Republicans imposed tariff rates of 50 percent, authorized enormous subsidies to railroad corporations, and created a nationalized banking system. Greenbacks issued under the new system depreciated by more than half, and consumer prices in the North more than doubled between 1860 and 1865. Because of the inflation, real wages plummeted, and the war ended up costing northern taxpayers an additional $528 million, DiLorenzo says.

The Credit Mobilier scandal of 1872 was the most notorious consequence of Hamiltonian corporate welfare, but, as DiLorenzo notes, “it was only the tip of the iceberg” of the predictable waste and corruption that results from government favors. The public was outraged over the scandal and called for more political control of business — they called, in other words, for more of what created the problem in the first place.

As Gabriel Kolko showed in his groundbreaking 1963 work, The Triumph of Conservatism, “American businesses, far from resisting political control, sought such regulation because they could use it to their advantage,” DiLorenzo explains. The railroad industry, for example, lobbied for creation of the Interstate Commerce Commission, which soon outlawed discounts to customers. Cornelius Vanderbilt had been engaging in this “ruthless” practice, but “[b]y making discounts illegal, the ICC relieved railroad companies from the pressure to compete for customers.” Other businesses such as gas and electric utilities turned to the political arena for grants of monopoly — seeking to obtain from government what they failed to achieve on the market.

The Hamiltonian Revolution of 1913

In 1913, government acquired effective control of the country’s wealth and strengthened its rule over the states through three measures: a federal income tax, the direct election of senators, and the Federal Reserve Act. The first two arrived as the Sixteenth and Seventeenth Amendments; the “currency bill” was slipped in just before Christmas. All three, per Hamilton’s rhetoric, were promoted under cover of “the public interest.” All three were cons — abuses of confidence by public officials. All three “delivered a death blow to the old Jeffersonian tradition in American politics,” and brought about “the final, decisive victory for the Hamiltonians.”

Were these laws really so bad? Judge for yourself.

Prior to the Seventeenth Amendment, US senators were “ambassadors of the states”; they were appointed by state legislatures. They would speak for their state governments, which would presumably have control over how they voted. Having senators appointed was intended as a check on the powers of the federal government. It limited “senators’ ability to sell their votes to special-interest groups nationwide,” DiLorenzo explains. Thanks to the Seventeenth Amendment, political corruption has “expanded by orders of magnitude,” he says. “U.S. senators now travel all around the country seeking special-interest campaign contributions.”

An income tax was not popular in Hamilton’s day, but he recognized the need for high taxes to fund the “energetic” government he wanted. The first federal income tax was imposed in 1862, and though it was abolished a decade later, “the experience had whetted the appetites of special-interest groups,” DiLorenzo writes. By 1913, American farmers had made a deal wherein they would support an income tax in exchange for lower tariff rates. The income tax became law in 1913, and by 1930 tariff rates had soared to their highest level ever — 59.1 percent, on average. So much for the farmers’ deal making.

After the adoption of withholding in 1943, the income tax became entrenched, as Charlotte Twight has written, “both through its administrative apparatus and through its acceptance in the minds of most taxpayers.” With its confiscation of enormous amounts of wealth and the army of bureaucrats and agents needed for collection, the income tax renders states as well as citizens hat-in-hand beggars when trying to influence the federal government. In their relationship to Washington, states have become Hamilton’s “artificial beings.”

Loathing and fearing competition, big businesses in the late 19th century tried to form voluntary cartels, but such arrangements are notoriously unstable, DiLorenzo points out, so they turned to government to make them work. What the big bankers wanted was a monopoly of the issue of bank notes so they could have a more “elastic currency.” Previously, if an individual bank issued too many notes, depositors would get nervous and demand redemption in gold. Because all banks issued more notes or deposits than they had gold in reserve, they were all one bank run away from being exposed.
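The exposure described above can be made concrete with a minimal sketch; the numbers are hypothetical, chosen only to illustrate a fractional reserve:

```python
# Hypothetical balance sheet of a note-issuing bank: claims to gold in
# circulation far exceed the gold actually held, so a large enough
# redemption demand cannot be met.

gold_reserves = 100_000   # specie actually in the vault
notes_issued = 500_000    # notes circulating, each a claim to gold on demand

reserve_ratio = gold_reserves / notes_issued  # 20 cents of gold per dollar of notes

redemption_demand = 150_000  # a run: note holders present notes for gold
can_pay = redemption_demand <= gold_reserves

assert reserve_ratio == 0.2
assert not can_pay  # the bank is "one bank run away from being exposed"
```

The point of the sketch is that solvency under fractional reserves depends entirely on redemption demands staying small; any coordinated demand above the gold on hand exposes the bank, which is why the bankers sought a lender of last resort.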

The currency act that created the Fed in 1913 was a crucial step in eliminating this problem — for the bankers. Two decades later, the government took gold out of the picture, so that covering a member shortfall was no longer a problem. Through the magic of the printing press, the Fed could also provide instant revenue to the government to pay for military adventures.

The Fed and the income tax provided the “funding mechanisms” for getting the United States into the European slaughterhouse called World War I. “Like all wars, World War I permanently ratcheted up the powers of government and fueled the urge among politicians to ‘plan’ American society in peacetime just as they had planned in war,” DiLorenzo explains.

The Fed has the power to do the one thing it shouldn’t do: regulate the money supply. By doing so it distorts price relations and guarantees a correction, which, since 1929, the government regards as a clarion call to “do something.” Ignoring economic wisdom, it does everything it can to prevent the necessary correction, thereby making the recovery longer and more painful. When the economy pulls out of the depression, government takes the credit, and the Fed begins inflating again, inaugurating another boom-bust-correction/intervention-crisis sequence that will bear heavily on almost everything we hold dear. Between 1789 and 1913, prices remained roughly stable, DiLorenzo notes, and government was little more than a footnote in people’s lives. Since 1913, prices have increased twentyfold, while today government intrusion has no limits.
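The twentyfold figure above can be sanity-checked with compound growth. Assuming the span runs from 1913 to 2008 (roughly when this article appeared), the implied average annual inflation rate works out to a little over 3 percent:

```python
# Back-of-the-envelope check of the twentyfold price increase since 1913:
# the average annual rate r satisfies (1 + r) ** years == 20.

price_multiple = 20
years = 2008 - 1913  # 95 years, assuming the endpoint is this article's date

annual_rate = price_multiple ** (1 / years) - 1

assert 0.031 < annual_rate < 0.033  # roughly 3.2% per year, compounded
```

Seemingly modest annual rates compound into large losses of purchasing power over a lifetime, which is the contrast the author draws with the roughly stable price level of 1789 to 1913.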


As with his two books on Lincoln, Thomas J. DiLorenzo has done a masterful job of exposing an American icon whose influence has been highly detrimental to the majority who live outside the rarefied reality of national politics.

Is there any escape from Hamilton’s world? It all depends on us. The book’s last chapter, “Ending the Curse,” calls for a “devolution of power.” We need to shake up the ruling caste and strip the central government of its Hamiltonian features, which means, among other things, ending judicial tyranny, repealing the Sixteenth and Seventeenth amendments, outlawing protectionist tariffs, and abolishing the general-welfare clause. We should recall that the latter two measures were achieved in the Confederate Constitution of 1861 as well as state constitutions in the antebellum period. DiLorenzo also wants to dismantle “government’s Hamiltonian monopoly on money,” which would in itself be a major setback to despotic government.

Hamilton’s Curse is a pleasure to read and a must-read for anyone who values freedom and seeks a deeper understanding of the prevailing nonsense.



Does “Depression Economics” Change the Rules?

Daily Article by | Posted on 1/12/2009

Wily competitors have known for ages that if you can’t win the game, you can simply change the rules. Now, during normal economic times, if somebody recommended that the government borrow a trillion dollars and spend it on anything that moves, most economists (as well as common sense) would say, “That’s nuts.” So one would think that especially in the middle of a severe recession, in which the American public has to recover from misguided overconsumption (fueled by Fed policies), such massive deficit spending would be all the more ludicrous.

Ah, enter the wily academics. According to our most recent Nobel laureate, Paul Krugman, we are now in a period of “depression economics,” where the standard rules don’t apply. In particular, the argument goes, when there are idle resources lying around, the traditional economic problem of scarcity disappears. The government can prime the pump by throwing borrowed money around, and this can only boost total output, because employed workers produce more than unemployed workers.

In the present article I will pick apart this reasoning and show that the standard rules still apply. It’s wasteful for the government to commandeer resources from the private sector during good times, and it’s even more harmful when the government kicks the economy during a recession.

The Argument From Idle Resources

First let’s make sure we fairly present the argument in favor of massive government “stimulus.” Although Krugman has said equivalent things over the last few months, Mark Thoma actually provides the most succinct statement I have seen of the position. I ask the reader to forgive the following lengthy quotation, but this issue is crucial and we really need to understand the Krugman/Thoma point:

Let me explain through an example why I don’t think these objections [of crowding out and job destruction from higher taxes or borrowing] apply to depression economies.

Imagine a town with a widget factory that provides employment for workers in the town. There is full employment so that everyone who wants a job at the going rate of compensation has one, save for the unavoidable frictional unemployment as people voluntarily change occupations, move, etc.

The town also has infrastructure needs, in particular there is a bridge that is essential to commerce that can no longer support the weight of loaded trucks, and this is forcing trucks headed to and from market to take a much longer, much more expensive route.

If the government tries to build a new bridge or fix the old one, and there is full employment, it will be forced to bid those resources away from other uses. There is no labor or other resources sitting around idle waiting for something to do, so if the government wants to employ the labor, raw materials, and equipment to repair the bridge, it will have to bid these resources away from other uses. A crane working on the bridge cannot be building a new factory at the same time, labor to build the bridge must be bid away from the widget factory, and so on. In such a case, we will see substantial crowding out….

It is correct to say that government spending crowds out private investment in this case, and that all government spending can do is change the mix of jobs, it can’t change the number. In the example above labor moved from widgets to bridges, but there was no change in the overall quantity of labor.

But let’s change the situation. Suppose that for some reason … a recession hits and the demand for widgets falls nationally. Because of this, a large number of workers are laid off. They would work at pretty much any wage, and they have looked and looked, but there’s nothing available for them.

In this case, government spending does not crowd out private investment, and it creates jobs, it doesn’t just change the mix. Let’s suppose, to make it easy, that … the number of laid off workers is just the number needed to build a new bridge (if not, then adjust the list of projects and add more or less until there is a match).

When the government steps in and hires workers to build the bridge, it doesn’t take the workers away from other employment. This is a recession, firms aren’t building new factories, new buildings aren’t needed, or not needed to the same degree as at full employment, and there are cranes sitting in the yard waiting for something to do. Resources, like labor, are no longer fully employed, and putting them to work does not mean having less of something else. In depression economies — when there are idle resources that are involuntarily unemployed — crowding out is not the problem.…

When we talk about crowding out, we mean that government spending, by using the crane, labor, etc., to build the bridge, displaces private investment. If we believe that private investment is more productive than government investment (which isn’t completely clear for a bridge if the bridge is essential infrastructure), then future growth will be lower because of the lower level of private sector investment.

But in depression economies, things are different. The choice is not between a new bridge and a new factory, the choice is between a bridge and no bridge (you could try to induce the private sector to build a factory through tax incentives or other means, but good luck with that in a depression). [Emphasis added]

After that lengthy quotation, we have a solid grasp of the Krugmanite point: putting unemployed resources to work can only help, since prodding workers into producing even items of dubious value is better than letting them sit around watching Let’s Make a Deal.

Unfortunately, there are several fatal flaws with this perspective, which we now explain.

Government “Smart” Stimulus Can’t Target Only Idle Resources

Even on its own terms, Thoma’s scenario fails because it is unrealistic. It is absurd to think that the government could come up with spending programs that would draw only on unemployed resources. Keynesian “macro” thinking ignores the complex capital structure of an economy. To build a bridge (as in Thoma’s example) requires a lot more than cranes and generic laborers. For example, gasoline will be burned in order to transport the newly employed workers to and from the work site. Nails, screws, steel, lumber, and other resources will be channeled into the new bridge, and at least some of these inputs will be diverted away from other private-sector uses, rather than simply leaving a state of idleness.

Within the broad category of “labor” we find a similar situation, once we actually contemplate doing this project for real. If the city of Houston wants to build a new bridge, is it really the case that every last person even remotely involved with the project will come from the ranks of the unemployed who are within commuting distance of the Houston bridge site? Surely the project will draw on engineers, construction foremen, and other skilled workers, who were still gainfully employed even amidst the recession, and who therefore will not be able to work on as many private-sector projects as they otherwise would have.

What is particularly ironic in this discussion of idle resources is that it is the pro-stimulus Keynesians who ought to be very fastidious in their recommendations for government spending projects. After all, if the whole point is to draw down resources that have been thrown out of work, then care should be taken to tailor the stimulus package for the resources in question. Is it really the case, for example, that bridges and roads require labor and other inputs in the same proportions as housing construction and finance? Does the construction of a new sewer system require the services of investment bankers and roof layers in such combinations that local government spending can perfectly offset the bursting of the housing bubble?

Even though their position would require it, in practice (of course) the Keynesians are not concerned a whit for the specific projects to be funded. To reason in this way misses the point, they say. (Notice that “the point” changes from argument to argument.) Lest the reader accuse me of unfairness, here’s Paul Krugman on the matter:

The key thing, when you’re in a situation like this, is realizing that normal rules don’t apply. Ordinarily we’d welcome an increase in private saving; right now we’re living in a world subject to the “paradox of thrift,” in which private virtue is public vice. Normally we want to be careful that public funds are spent wisely; right now the crucial thing is that they be spent fast. (John Maynard Keynes once suggested burying bottles of cash in coal mines and letting the private sector dig them up — not as a real proposal, but as a way of emphasizing the priority of supporting demand.) [Emphasis added]

Why Are Resources Idle in the First Place?

Although a serious objection, the above considerations really just argue that it would be difficult in practice for Thoma to tailor a stimulus package suiting his specifications. But even if we conceded that the government could spend money in a way that only involved unemployed resources, the measure would nevertheless be harmful and would make the country poorer.

To see why, we need to understand what is causing so many resources to be unemployed in the first place. According to the Austrian theory of the business cycle, the housing and stock market booms were fueled by Alan Greenspan’s decision to slash interest rates in an effort to provide a “soft landing” after the dot-com crash and 9/11 attacks. This artificial stimulus goaded entrepreneurs into starting numerous projects that were unsustainable.

In short, people in the private sector made decisions as if there were far more real resources at their disposal to “fund” the projects to completion. When reality set in, many of the projects had to be abandoned, meaning that the workers and other resources involved had to be laid off. (See this article for Mises’s analogy of the master homebuilder being misled by an erroneous resource inventory, and why workers would be unemployed once he discovers his error.)

Once people in the private sector realized they had made horrible decisions during the boom years, they needed to stop business as usual and figure out how to make the best of a bad situation. Homeowners who had skimped on their savings for years (relying on booming house prices) had to slash spending to compensate for years of overconsumption, while entrepreneurs needed to decide which activities were likely to be profitable going forward, in light of the new information.

What had to happen is that workers and other resources that had been misallocated into housing construction and Wall Street investment banks, needed to be moved into other sectors. To repeat, this was and is a fantastically complex reshuffling, because even something as simple as producing a pencil requires the contributions of thousands of workers all over the world.

It’s not a simple matter of moving unemployed builders and hedge-fund managers into “booming” sectors X, Y, and Z, because (as we’ve seen above) these newly employed workers will require complementary tools and resources that were not laid off to the same extent. So the issue is, what is the best new outlet for all of these laid-off workers, such that — all things considered — the final mix of output goods best satisfies consumer desires? How can we be sure that channeling them into occupation X won’t actually do more harm than good?

In practice, the people in a market economy solve this fantastically complex problem by making profit-and-loss calculations, which in turn rely on market prices. For example, it is clear that a former Wall Street quant isn’t doing anybody a service by cranking out models that give mortgage-backed securities a gold star for safety. But what should this PhD do now? Should he go into academia and teach thermodynamics (which may very well have been the subject of his dissertation)? Or is his impressive education really a complete waste, and he would — at this point, given the economic realities — provide the most service by working the register at Wal-Mart?

Nobody knows the answer to this question. What happens during the recovery process is that the unemployed whiz kid initially looks for a job paying his former salary. As the months pass, he realizes that this is unrealistic, and he begins lowering his minimum price. Eventually, he finds an employer with compatible desires, and the two agree to a mutually beneficial arrangement.
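The adjustment process in this story can be sketched as a simple declining-reservation-wage search. Every number below is an illustrative assumption, not data from the article:

```python
def months_until_hired(former_salary, best_offer, cut_per_month):
    """Months of 'idle unemployment' before the worker's asking price
    falls to the best available offer (a toy model of the search story)."""
    asking = former_salary
    months = 0
    while asking > best_offer:
        asking -= cut_per_month  # expectations ratchet down over time
        months += 1
    return months

# e.g., a quant who earned $300k, facing a best available offer of $120k,
# trimming his expectations by $20k per month:
# months_until_hired(300_000, 120_000, 20_000) -> 9
```

The months of searching are not pure waste in this sketch: they are the time it takes for the worker's price to converge with what employers, given the new economic realities, are actually willing to pay.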

As my simple story illustrates, the period of “idle unemployment” serves a real function in a market economy. It is true that such periods of massive discoordination are almost always the fault of government interference, but whatever the initial cause, there is no denying that the discoordination is real. Writers such as Krugman and Thoma act as if recessions are caused by massive bouts of irrational consumer anxiety, and as if all problems can be patched up by a simple boost of “aggregate demand.”

On the contrary, the economy’s capital structure really was thrown into an unsustainable condition during the boom years, and it takes time for the mess to be sorted out. When the government runs up a deficit to fund “stimulus” projects, all that really means is that it is forcing taxpayers to pay for projects that they wouldn’t buy with their own money. (It is true that a group of private citizens might not have the legal ability to build a new bridge, but that’s not essential to Krugman and Thoma’s argument. Imagine that Thoma had discussed government funding of a new shopping mall.)

To the extent that some of the drop in demand is due to the general “panic” and flight to liquidity, the politicians aren’t helping matters by increasing household indebtedness and throwing money at one-off projects. If a restaurant owner discontinues his expansion because demand has collapsed, how does Thoma’s bridge project change things? The restaurant owner isn’t going to make a long-term investment based on the business of bridge workers, since they will be out of work once the bridge is finished.

Private investors are fleeing to liquid assets because they are uncertain, and making trillions of dollars subject to political deals, rather than consumer choice, only increases the uncertainty over future conditions. Pro-stimulus economists can keep bringing up new aspects, but each new consideration just proves how counterproductive their proposals are.


It is difficult to think objectively about “idle resources” when they are workers with families to feed. The reader who is still on the fence should first work through the arguments, pro and con, with other resources. In the comments of a recent blog post, Mario Rizzo relates how in class Milton Friedman used the example of dress shirts on the shelves of department stores. Adopting Krugman’s viewpoint, these shirts are “idle” inventory and are clearly being wasted in the sputtering private sector. Clearly the government ought to raise the deficit and spend a few billion dollars buying up these shirts, even if just to use them as rags on construction sites. Some critics might object that this is a “waste” of precious resources, but what good is a shirt on a store shelf?

The above analogy with shirts is not as cute and flippant as it first sounds; the reader should really think through the implications of Friedman’s analogy. Every problem with the tongue-in-cheek suggestion regarding dress shirts is (more or less) applicable to unemployed workers. In particular, government shirt buying would lead to too many new shirts being produced, just as government “green jobs” programs will induce workers to quit other lines and go into solar-panel production.

Although Krugman and Thoma have made the only rhetorical move left to salvage their disastrous recommendations, their claim is wrong: the normal rules of scarcity do still apply, even in the middle of a depression. No matter the scenario, government spending channels resources away from the private sector. Even if the project employs workers who were previously unemployed, this still retards the genuine, private-sector recovery from the slump, because that is one less worker available to be hired by an entrepreneur.

If the government wants the economy to recover as quickly as possible, the solution is simple: cut spending, cut taxes, stop inflating the money supply, and stop changing the rules every three days. But this solution won’t be adopted, since it doesn’t allow the politicians to pose as generous saviors.



Lesson in Economics

by Tommy Davis

I do believe that the Republican Party will make a comeback. With the economy destined to fail under the Democratic leadership, it is our time as conservatives to educate the populace about the value of the free market.

For those who are unaware, the current stimulus-package obsession is leading our economy down the road to the inevitable: depression. Issuing free money in hopes of “stimulating” the financial system will reduce our purchasing power. In other words, newly printed money not backed by production leads to inflation over time. With higher prices, the poor become poorer.
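The purchasing-power claim amounts to simple division. The wage and price figures below are illustrative assumptions, not data:

```python
# Illustrative only: a fixed nominal wage loses real value as prices rise.
nominal_wage = 1000.0        # dollars per month (assumed)
price_index_before = 1.00
price_index_after = 1.10     # assume a 10% rise in the general price level

real_wage_before = nominal_wage / price_index_before
real_wage_after = nominal_wage / price_index_after   # roughly 9% less purchasing power
```

The point of the toy numbers: if prices rise 10% while a worker's paycheck does not, his paycheck buys roughly 9% less than before, and those on the lowest fixed incomes feel it most.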

Also, when prices rise, production costs rise with them. Higher production costs lead to a surplus of employees. This is what we call unemployment.

We Republicans would do well to truly educate the people about the perils of high taxes and stiff regulation of businesses.

An Obama presidency is already taking on the form of a disaster because his policies are based on Keynesian economics. John Maynard Keynes was a promoter of price inflation as a solution to unemployment. He championed the issuance of new money and credit. His philosophy never worked. It is a deception to raise workers’ wages while at the same time decreasing their purchasing power. High prices are a result of inflating the money supply.

When FDR implemented wage controls and other regulation, he prolonged the Great Depression. The Great Depression lasted twelve years when it could have been over in under two.

Adam Smith, in his classic “The Wealth of Nations,” understood the power of the free market to weed out unproductive businesses. It is the consumer who determines who survives, not government intervention. Rather than have a surplus of employees, those workers can be shifted into more productive employment at the “request” of consumer demand.

Observing this trend from a historical perspective, we must understand that the paper we identify as money today was historically a receipt or bank note confirming that we had a valued commodity in the bank. It was easier to carry paper than bars of gold or other commodities. In real terms, the printing of notes without backing is fraud. This is where laws against counterfeiting surfaced.

What we have today is legal counterfeiting that we are paying for through higher prices and unemployment.



Two New Year’s Resolutions

by Dr. Gary North
Date 1/5/2009 • Issue 42

Every year, from late December through the first week of January, there are lots of articles published on making New Year’s resolutions. Some of these articles recommend New Year’s resolutions. Other articles make fun of them. The authors point out the obvious fact, namely, that almost nobody ever follows through on a New Year’s resolution.

I am in the camp of the skeptics. I do not believe that most people will follow through on any resolution, let alone a New Year’s resolution, unless there are enormous positive benefits to be secured by following through. People think about the benefits, but then the cost of obtaining these benefits becomes obvious in the first week after New Year. The cost of the benefits is immediate, while the benefits themselves are far in the future. The tyranny of the urgent conquers the hope of the future.

We all know this, but we are tempted to make New Year’s resolutions anyway. This is the triumph of hope over experience. The trouble is, people become discouraged early, and this discouragement lasts for as long as the resolution was supposed to run. The only way to escape, other than actually getting back on track, is to self-consciously stop thinking about the New Year’s resolution. This tends to produce guilt. People conclude that they are unable to follow through on long-term plans, and so they are tempted not to make any long-term plans at all.

The problem with New Year’s resolutions is that they are not part of a systematic plan. The resolutions have no plan undergirding them, and they do not fit into an existing long-term plan. The main reason for this is that very few people have made existing long-term plans. They do not write down their goals, nor do they write down a plan to achieve these goals. This is a mistake that almost everyone makes, and it keeps most people from achieving their potential.

The Bible has an entire chapter devoted to the question of vows made to God: Numbers 30. It has rules regarding these vows. These vows are voluntary, and they are not to be entered into lightly. Once made, they are to be fulfilled. This is why it is unwise to make a formal vow unless you’re willing to pay the price of fulfilling it. The obvious one is the marriage vow. There are others.

I do not put a New Year’s resolution in the same category as a vow made to God. But there is a pattern here. We live in a society in which contracts are violated, vows are violated, and New Year’s resolutions barely make it to February.


I do have a recommendation for a New Year’s resolution. In fact, there are two resolutions, and they are interlinked. The first resolution I call “pay God first.” The second resolution is called “pay yourself second.”

“Paying God first” is your tithe. It comes first. Find some way to pay it. If you can set up an automatic withholding plan at work, do this. Have the money deposited in your church’s account. Talk with your payroll person and your church on how to do this.

The fact that churches do not recommend this in new members’ classes indicates that they are in begging mode. Churches should make it easy for people to meet their obligations.

The phrase, “pay yourself first,” is a familiar one inside the personal finance industry. The idea of paying yourself first is based on a view of thrift. This view says that it is imperative that a person take 10%, or some other fixed percentage, of his after-tax income, and invest it. No matter what takes place, no matter what emergency arises, an individual self-consciously takes 10% of his income after taxes, and invests it.

Why do people call this “pay yourself first”? They do this because they understand that an individual owes something to himself. He owes in the present what it will take to sustain him in the future. His present self owes his future self.

Why does he owe this? Because he is personally responsible for his own actions. He will require savings in the future in order to sustain him in his old age, or when an uninsurable disaster strikes. So, he is careful to set aside money every month to invest in a systematic program that will meet his needs in the future. He does this because he does not wish to become a burden to other people. He does not want to become a charity case.

Another aspect of “pay yourself first” is that at some point, compound economic growth is supposed to take over. The resources that have been set aside for future use begin to generate a return on investment. Over time, this return multiplies the value of the investment portfolio. This is called putting your money to work for you. The stream of income feeds back into the total portfolio. This stream of income multiplies over time, and the individual finds that, by the end of his life, he does not have to work in order to build capital. The capital produces sufficient income to maintain his lifestyle whether he works or not.
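The compounding described here is the standard future value of a stream of fixed contributions. A minimal sketch, with a contribution amount, return, and horizon that are purely illustrative assumptions:

```python
def future_value(monthly_saving, annual_rate, years):
    """Future value of a fixed monthly contribution, compounded monthly
    (the future value of an ordinary annuity)."""
    r = annual_rate / 12.0   # monthly rate
    n = years * 12           # number of contributions
    if r == 0:
        return monthly_saving * n
    return monthly_saving * ((1 + r) ** n - 1) / r

# Illustrative only: $300/month at a hypothetical 6% nominal annual
# return for 30 years. Total contributions are $108,000; the rest of
# the final balance is the income stream feeding back into the portfolio.
balance = future_value(300, 0.06, 30)
```

In this hypothetical, roughly two-thirds of the final balance comes from the compounding rather than from the contributions themselves, which is the sense in which the money is “working for you.”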

This has been the promise of IRA programs and 401(k) programs for two decades. The problem is that, in a world of investment markets governed by central-bank policy, inflation, bad investments, and government promises of bailouts have combined to reduce the rate of return on capital. This has made it difficult or even impossible for the vast majority of people to set up retirement programs that will enable them to retire in comfort on the income generated by their retirement portfolios. They have believed government promises about protecting the small investor, and they have therefore lost a great deal of money. Even worse, they have lost a great deal of time. The longer they remain naive about the nature of economic returns in a world governed by central banking, the less likely they will be in a position to live off of the income generated by their investment portfolios.

What good is it to pay yourself first? If all of your savings will be eroded by monetary inflation and the business cycle, what good is it to save? This depends on what you invest the money in. If you invest in assets that tend to rise with the rate of price inflation, you can stay ahead of the monster. But there is something else that is even more important. It is crucial that an individual take the attitude that he is responsible for his own future. If he does this, he will adopt a system of thrift, whether the money goes into conventional markets or into the family business. The point is that the individual recognizes that the future does not take care of itself; it requires planning and the proper execution of those plans in order to achieve a major goal, such as retirement.

It is the self-discipline of setting up goals, establishing a plan, and maintaining the plan that is crucial for success in life. It is far less important what an individual invests in than that he invests on a systematic basis. It is the constant attention to detail, and the constant exercise of self-discipline, that is crucial for long-term success. It is the mental habit of saving on a regular basis, no matter what, that makes the difference in an individual’s productivity. The habit of saving, which rests on the habit of deferred gratification, pays dividends apart from any investment portfolio. This attitude begins to affect all aspects of an individual’s life, and this is what makes the big difference for him in the long run.

I recommend that people adopt this self-discipline early in life. I taught my children to tithe 10% of their income and save 10%. As adults, they are all extremely thrifty, and they all have maintained a savings program. They understand that they are personally responsible for their old age, and they also understand that the Federal government is going bankrupt. They understand that no official is going to intervene on their behalf when they are old and weak, and therefore they do what they can to build assets to protect themselves in the future.

Thrift is more than a systematic program of investing. It is a mental attitude regarding the future. This attitude takes full responsibility. It does not attempt to blame the government for failing to protect people from their own lack of self-discipline.


This is why I recommend a two-part New Year’s resolution. First, you promise yourself to pay 10% of your gross income to your local church. You do this because it is not your money. This is a kind of return on investment for God. It is comparable to a licensing fee paid by individual companies that are part of a franchise.

By paying yourself second, you acknowledge that you cannot be sure about the future. You build up resources that can be used at a time of your life when you will not be able to support yourself. Your lifetime plan has to take into account the fact that you will grow old and become less able to earn a living. This faces the fact of aging: old age does reduce most people’s productivity. You must acknowledge this early if you are going to enjoy a decent lifestyle in old age.

In past eras, the vast majority of people lived their lives on the assumption that their children would take care of them in some way. Welfare state economics has undermined this ancient assumption. This is temporary. When Medicare and Social Security go the way of all flesh, the older tradition will again manifest itself in the American public.

We live in a temporary period of time in which people have naively believed that they will be able to save enough money, and also gain a large enough after-inflation, after-tax rate of return to gain sufficient wealth to provide a comfortable retirement. That illusion has been challenged over the last 14 months. It will be challenged again. The stock market is not going to deliver the goods.

I recommend to people that they systematically save 10% of their income every month, or every paycheck, as a kind of liturgy. It is a liturgy acknowledging the stages of life. It is a reminder that the clock is ticking. As a self-defense measure against the ticking of the clock, an individual sets up a savings program to which he contributes on a regular basis.

I think the best way for most people to do this is to set up an automatic savings-withdrawal program through their job. It does not matter much what the money goes into initially. It can go into a bank account. The important thing is that the individual not get used to spending this money, and therefore not become dependent on it. He transfers it automatically to a thrift program. He adjusts his expenditures to work around his income, which is then reduced by whatever he has set up in his savings-withdrawal program. This is a form of resolution that has teeth. The reason why it has teeth is that most people cannot feel the bite. The money comes out of the paycheck, but people do not become dependent on it, nor do they really miss it.

The United States government caught on to this when it imposed withholding taxes in 1943 as a wartime measure. Tax revenues quadrupled within a year. I suggest that this is one of the few lessons the government has ever provided that can seriously benefit the public. The principle underlying the success of the withholding program is this: “Out of sight, out of mind.” It is the opposite of this: “Absence makes the heart grow fonder.”

If you want to be successful with a New Year’s resolution, make sure that the New Year’s resolution has teeth. The best possible teeth are teeth that take their bite quietly and painlessly.

The first few months of a savings-withdrawal program will be extremely painful. Adjusting to a lower income is difficult for almost all people. This is why it is so important that people begin early in life with the savings-withdrawal program. They must not get used to the income their job generates. The transition is always painful. This is why people postpone making it. But if the pain is not experienced early in life, it will be experienced later in life. It is better to experience it early in life, when you have the strength, resiliency, and youth to adjust. You don’t want to be forced into adjustment when you’re 80 years old.

If you are serious about this particular New Year’s resolution, on Monday morning, you will contact whoever is in charge of payroll at your company. Ask that person to set up your account so that the computer begins extracting money on a regular basis. This will be put into a savings program. This may be a tax-deferred program such as an IRA, or it may be some other form of savings. The important thing is that it is automatic. Your goal here is to reduce your dependence upon your income in the present for the sake of the future.


The best plan is an automatic plan. You don’t have to think about it. You set it up once and live with it.

A combined 20% reduction of income is too big for most families’ budgets. There are two ways to handle this. The best way is to get a second job until the 20% limit is met. The second-best way is to cut expenses, while adding one percentage point per month to the tithe. Only when the tithe is met does the savings-withdrawal program get activated. Pay yourself second.
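The ramp-up schedule described above can be sketched in a few lines. The text does not specify exactly how the savings half phases in once the tithe is fully met, so the sketch assumes, as one possible reading, that the full 10% savings withdrawal begins the month after the tithe reaches 10%:

```python
def monthly_percentages(month):
    """Percent of gross income tithed and saved in a given month (1-indexed),
    under the gradual ramp-up: the tithe grows one percentage point per month
    until it reaches 10%; only then does the 10% 'pay yourself second'
    savings withdrawal activate (an assumed reading of the schedule)."""
    tithe = min(month, 10)                  # 1%, 2%, ... capped at 10%
    savings = 10 if month > 10 else 0       # activates once the tithe is met
    return tithe, savings

# Month 1: (1, 0); month 10: (10, 0); month 11 onward: (10, 10).
```

On this schedule, the full combined 20% bite does not arrive until nearly a year in, which is the whole point of the second-best approach: the household adjusts one percentage point at a time rather than all at once.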

You will have to decide where the money goes after it arrives in your bank account. This will force you to pay attention to the economy. But this should be a separate issue. The crucial issue is setting up the automatic withdrawal plan.



