T. Carlos Anderson on America’s Religion of Work

Is America a “Christian nation”? And is social inequality the “necessary” price for the uninhibited pursuit of wealth?

This is an excerpt (chapter 5) from “Just A Little Bit More: The Culture of Excess and the Fate of the Common Good” by T. Carlos Anderson, published by Blue Ocotillo Publishing, Austin, Texas, which has granted this website permission to reprint the chapter here. This chapter was previously reprinted by Diana Swancutt on the Boston Poverty Consortium website, with permission from Blue Ocotillo Publishing.

Just A Little Bit More, by T. Carlos Anderson

In Just A Little Bit More: The Culture of Excess and the Fate of the Common Good, T. Carlos Anderson explores these questions. Defining “religion” as “ultimate concern,” Anderson argues that the true religion of the United States is the confluence of commerce, materialism, and consumption, and that the country’s true devotion is the pursuit of material wealth at the expense of the common good. Anderson carefully examines three eras of excess in U.S. history–the Gilded Age, the 1920s, and the current age that began in the late 1970s and helped bring about the economic swoon of 2007–08–to argue that democracy and egalitarianism, America’s two greatest achievements, are the substance of the common good, that they exist only when advocated for, and that all three are suffering near death under the weight of economic inequity. What follows is chapter five, “America’s True Religion: Commerce, Materialism, and Consumption,” in a book of eight very smart chapters. Chapter five both narrates the secularization of distinctively Christian religious impulses in support of “America’s True Religion” and details the rise of the economic neoliberalism that so thoroughly shapes capitalism today. The last chapter, “Economic Democracy,” argues for a new egalitarian economic approach to the common good that avoids both the regression and the idolatry of inequality in favor of sustainability, coexistence, compassion, and cooperation.

T. Carlos “Tim” Anderson is a bilingual Protestant minister in Austin, Texas, who has previously lived and worked in Chicago, Houston, and Lima, Peru. For copies of Just A Little Bit More, interview requests, and other inquiries, contact T. Carlos “Tim” Anderson at the Blue Ocotillo Publishing website, www.blueocotillo.com.

 

America’s True Religion: Commerce, Materialism, and Consumption

Dawn Hughey routinely put in seventy-hour workweeks as a retail store manager, making about $35,000 a year as a salaried employee. Abel Lopez had the same type of job, worked the same long hours, and brought home an equivalent amount of pay. Hughey managed a Dollar General store in Detroit and Lopez a Family Dollar store in El Paso; they both performed the same tasks as did the sales associates under their supervision: unloading trucks, stocking shelves, cleaning toilets, running cash registers, doing inventory, moving boxes. Since Hughey and Lopez were categorized as managers, they were exempted from receiving overtime pay. When one does the math, both made a little less than ten dollars an hour. Until they were fired, that is.
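The arithmetic behind that figure is a rough sketch, assuming roughly fifty-two seventy-hour workweeks a year with no time off:

$$\frac{\$35{,}000 \text{ per year}}{70 \text{ hours/week} \times 52 \text{ weeks}} \approx \$9.62 \text{ per hour}$$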

The 1938 Fair Labor Standards Act put in place the forty-hour workweek, mandated a minimum wage, established overtime pay at a rate of time and a half, and further regulated child labor. This federal statute—still enforced today by the Department of Labor—also exempts “executive” and “administrative” salaried employees from receiving overtime pay, as long as they make more than $455 a week. The statute specifies that executive and administrative employees are to manage the work environment, to direct other workers under their supervision, and not to engage in manual labor. This last provision serves a double purpose: to distinguish managerial work from manual labor and to ensure that manual laborers are not taken advantage of. In the growing economy of the mid-twentieth century, the line of demarcation between managers and laborers was clear. In today’s stagnant service economy, workers like Hughey and Lopez perform the tasks of traditional management while also doing anything else needed, because they don’t have the budget to staff more workers. Putting in more hours—many more—is sometimes the only difference between supervisors and their subordinates.

“All these dollar stores, their company structure is the same. Their largest controllable expense is their labor budget,” says Lance Gould, an Alabama attorney whose firm represents frustrated managers like Hughey and Lopez. “It’s corporate theft.” Dollar General, Family Dollar, and Dollar Tree jointly have more than 25,000 stores and employ more than 220,000 people, mostly part-timers. All three companies, publicly traded, have seen their stock prices soar since the economic collapse of 2007–08. Traditionally the destination of the poor, the dollar stores with their rock-bottom prices now also attract middle-class shoppers. The dollar stores have filled the tight niche between Walmart and weekend garage sales. Razor-thin margins on merchandise, lower rents because the stores inhabit less desirable locales, and a thin yet replaceable labor force squeezed for time combine to make up the basement of American capitalism, also known as the dollar store economy.

After two strenuous years as store manager, Hughey was fired after having been injured on the job. She was told her termination was due to productivity problems prior to her injury. She sued and eventually reached a settlement with Dollar General. Lopez was fired after having worked seven years for Family Dollar—he was told his store lacked proper upkeep. Lopez now leads an El Paso labor organization that advocates a fifty-two-hour maximum workweek for store managers.[1]

Americans have always worked hard. A grand generalization to say so, yes; its validity, however, is not discredited by isolated cases of dependents and slackers who have bucked the societal norm. The norm was cast early on by Pilgrim and Puritan journeyers who crossed the Atlantic as indentured servants, working off the price of their passage, typically in five years. In 1648 the Puritans of Massachusetts passed legislation proclaiming idleness to be a punishable crime.[2] European immigrants busting their backs, shoulders, and fingers in factories, foundries, and meatpacking plants in New York City, Philadelphia, Chicago, and other urban areas reinforced that same ethos. Their living conditions were sometimes as deplorable as their working conditions. The norm was also established by homesteaders in the Upper Plains bearing their first Dakota winters without a potbellied stove and accessible firewood. Having broken and plowed the virgin prairie the previous fall—no easy task—they hoped to survive the winter in order to sow the spring grain that would flourish in the summer, thereby enhancing their odds for survival the following winter. Colonist families in Texas, seeking river bottomlands for planting and ranching, sparred with Comanche, Tonkawa, and other Indian tribes.[3] Snakes, scorpions, summer sun, and drought also plagued the first Texans. African slaves labored in cotton fields in Texas and other southern territories, sacrificing much more than simply wages in the process. Like the slaves, other “unofficial” Americans—illegal immigrants,[4] or undocumented workers (depending on one’s point of view), and seasonal workers and their families—followed the harvests, all working hard for minimal pay. They accepted those wages, of course; part of foundational American egalitarianism says that anyone can work their way up. Kings, queens, and other silver-spoon types are not the only ones who can know privilege. The land of opportunity is also the land of production and consumption; traditionally, work has been the means to achieve privilege and advantage. People have come to America—legally and otherwise—for more than three centuries for the express purpose of working and making a living, or simply making money.

Today American workers labor more hours than workers in most other industrialized nations. South Korean workers lead the way in total hours worked per year; American workers have less vacation and fewer holidays than all other workers in the Western world. As for average number of weeks worked per year, only Australian workers at 47.6 bested the American figure of 45.9, the two highest figures among industrialized Western nations. This was not always the case; American ascendancy in hours worked, resulting in a decline in leisure time, began in the 1970s.[5] It marked the end of a long trend of declining work hours. As the industrial era progressed, the hours in the typical workweek declined from roughly seventy in 1850, to sixty in 1900, to fifty in 1920, to forty by the middle part of the last century. At the end of the post–World War II economic surge, many commentators, confident of increased gains in productivity and innovation, foresaw a shorter workweek as the United States advanced toward the last few decades of the century. Those predictions fell completely flat. Americans worked, on average, 160 hours more a year in 2000 than in 1970. Contributing to the increase in hours worked were the increasing numbers of women entering the workforce. As of 2009, for the first time ever recorded, more women than men were on American payrolls—further confirmation that as a culture, Americans have chosen money and possessions over time.[6]

America, historically associated with the opportunity to work, now seems to be associated with the domination of work. Consumption, of course, has brought about work’s ability to rule.

 

Early Work

Anthropologist Marshall Sahlins described our ancient hunter-gatherer ancestors as the “original affluent society.” When he proposed this contrarian idea in 1966, he meant that many of these tribes in favorable settings spent less time working—gathering sustenance—than have many peoples and societies since. The common notion, challenged by Sahlins, was that hunter-gatherer tribes nearly always worked hard to survive at bare subsistence levels, and were fortunate at that. Quite possibly, according to Sahlins, they had significantly more leisure time than many of their descendants, especially their industrial era and postindustrial era descendants.[7]

Hunter-gatherers did not practice slaveholding. Settlement or civilization—private ownership of land, development of commodity production and markets, and scarcity of communal labor—brought slavery into the human story. Wars and the capturing of the conquered increased slaving activities as well. Ancient Babylon and Egypt practiced slavery, as did ancient Greece and the Roman Empire. The esteemed Greek philosophers in their writings covered considerable territory; they wrote extensively about human nature, political society, education, and the acquisition of knowledge. Yet they didn’t have much to say about slavery, except to justify it. Slavery was a thoroughly accepted and unchallenged reality of their day. The paradox of nascent democracy and political freedoms arising from an Athens that was rife with slavery is obvious.[8] The same can be said for the Christian scriptures’ ambivalence toward slavery. A compilation of struggles, history shows itself in grand progress, continuing ambiguities, and certain mysteries.

In the practice and theory of the Abrahamic religions, manual labor, which was the curse laid upon Adam and Eve for their disobedience,[9] has been reviled and redeemed. In the Middle Ages, to be a soldier, warrior, or crusader mattered. Peasants, however, worked with their hands, devoid of significant liberties, tending their lord’s land, mines, forests, and roads. Monasteries, convents, and abbeys required of their inhabitants the same manual labor; because in these settings the work was religious in nature, it tended to be categorized as dignified. Thus, the bane and blessing that is manual labor. Depending on place and status, one person abhors manual labor, another is liberated by it. Chinese of the merchant and ruling classes would grow out their fingernails, protecting them with elaborate casings, to show that they did not have to do manual labor.[10] Slavery and serfdom kept the hands, and fingernails, of the powerful free from the dirt of the earth and accompanying self-degradation.

 

Work Grows Up

Working with one’s hands, and getting them dirty in the process, would eventually change the world as the industrial era dawned. Worker guilds flourished in the later Middle Ages, with the purpose of carefully passing on the skill of a particular trade from one generation to the next. The original trade cartels, guilds helped regulate production and maintain proficient technique. They also helped resist market forces that might erode the proper workmanship of their products in favor of inferior, cheaper, and simpler goods. The guilds jealously guarded their “mystery,” or skill; the self-governed guilds not only preserved their own production quality but also played a great part in transforming a European society centered on religious and military activity into one increasingly defined by economic activity.[11]

Work historian and commentator Richard Donkin credits the hands and ingenuity of the English Quaker Abraham Darby with helping bring about the steady job as we know it today. Darby, who lived only thirty-eight years, was the first of three generations of Darbys with the same name. He held a 1707 patent for casting iron pots. Darby’s principal innovation was to use coke (derived from coal) instead of coal itself for the smelting process of iron ore; the higher burning temperatures resulted in higher-quality iron, produced faster and cheaper than what had been accomplished previously. Darby’s cast-iron pots were stronger and not as brittle, yet with thinner walls—a cutting-edge advancement. Darby didn’t work alone; he had regular workers who lived, as did Darby and his family, in cottages in the vicinity of the forge. Darby’s son, just six years old at the time of his father’s death, eventually took on leadership of the work. His innovation was wrought iron—the carbon beaten out of the hot metal—which was stronger and more workable than its predecessor. Wrought iron, with its enhanced workability (and cheaper price than that of brass), brought forth affordable and reliable iron rails, locomotive wheels, and improved steam engines. Soon enough, however, it was made obsolete by its successor, steel. With regular workers and shares of the company owned and exchanged, the Darby business transformed the way of life in Coalbrookdale, its home base. Money circulated among those employed, and their bartering, unlike before, involved more than simply the exchange of small produce. The Darby family’s enterprise—four generations’ worth—helped transform a job from a specific task with no promise of continuity to a reliable source of continuing employment.[12]

Long before the Darby clan changed the world with their innovations, windmills and watermills did the work of multiple laborers. The mills primarily crushed grain. Their utility later expanded to run saws, spin cotton, manufacture textiles, make paper, and produce other goods. Milling (“to grind”) is among the oldest of the world’s vocations. Miller is one of the most common surnames in the English-speaking world; Mueller for Germans, Molinari for Italians, Moulin for the French. Slaves and hired workers manned mills; as the industrial era gained momentum, more sophisticated mills made some worker tasks obsolete and the terms associated with the word mill acquired a negative reputation. The Luddite movement in early nineteenth-century England is the archetypical example of reactive protest against the changes brought about by new technology. A miller was a processing agent or middleman who added value to grain or cotton or wood. The idea that value could be added to the final product while fewer workers were required to make it was new and counterintuitive. The modern world was emerging.

Mill, as a general term, was so dominant that other factories adopted the same name (cotton and wool mills, steel mills), even though their work did not include grinding grain or other materials. And while certain worker tasks were eliminated as mills evolved, other tasks were added. The Luddites, in a sense, represented the end of the line for the golden age of guilds and their influences. The Luddites dismantled, destroyed, or burned new power looms and stocking frames (for mechanized knitting) installed by owners who simultaneously decreased worker wages. The new machines’ ability to achieve magical and prodigious production led to the dismissal of certain skilled male laborers. They were no longer needed. Women and children were able to do the work needed; they were paid less and their hands were smaller (to better work the looms) and more dexterous than men’s. The Luddite unrest (1811–12) was not able to withstand the oncoming revolution created by the new machinery. Weavers working by hand in England had numbered close to a quarter million in 1820; a generation later, with the advent of mechanization, there were only twenty-three thousand.[13]

Numerous children were taken from poorhouses to work in the mills. Compulsory public education in England and the United States was not widespread until the last decade of the nineteenth century. Historian Stephen Nissenbaum relates how, until that time, children were seen as miniature adults who occupied the bottom rung of the family hierarchy along with servants. Unless extremely privileged and therefore privately educated, common children apprenticed at a skill, were dedicated to religious life in a monastery or convent, or otherwise worked to help out the household. His engaging history The Battle for Christmas shows the development of the revered holiday from the tradition of “misrule,” common in European societies and colonial America, to what we know it to be today, where in secular observance children are the center of attention and charity. The practice of misrule allowed servants, common workers, and lower social status individuals to become kings and queens for a day or short season. Throughout the year, those of lower socioeconomic status lived in deference to the rich and resourceful. During December in the Northern Hemisphere, when the harvest was in, beer and wine maturing, and animals slaughtered for fresh meat, people took time to rest and enjoy the fruits of the year’s labors. It was also a time for the tables of fortune to be turned, if only for a moment. According to Nissenbaum, misrule was “the time when peasants, servants, and apprentices exercised the right to demand that their wealthier neighbors and patrons treat them as if they were the wealthy and powerful.” Servants pounded on the doors of their patrons and came in for a feast, expecting fresh meat and fresh brew.[14] The colonial Puritans of Massachusetts did outlaw the celebration of Christmas for a spell in the late 1600s; it wasn’t the legendary December birth of Jesus they had trouble with, it was misrule with its tendency to get out of hand. If you’ve ever received a Christmas bonus at a job where you felt you were underpaid, you can see that misrule, for better or worse, is still with us. It’s part of the misrule bargain: accept the once-a-year bonus and do not grumble about your low pay for the balance of the year, a gift given in exchange for goodwill. The holiday season bonus—and misrule—come just once a year.

The great Chicago crooner Mel Tormé was right: “Christmas was made for children.” However, that didn’t happen until the mid–nineteenth century. Modern Christmas came into being at that time, celebrated privately in homes with immediate family. Children, not peasants and servants, became the focus of the season’s charity and display of social inversion. Misrule was now domesticated.[15] Children gradually came to be seen as we see them today: minors who need to be protected, educated, and developed in order to become properly functioning adults. Only after this modern understanding of children became established did employing children take on its offensive character. Before the 1850s, there were concerns about child labor, but such concerns were a minority view; in general, society was indifferent to child labor.[16]

In the sixteenth century, house chimneys burning wood became commonplace in English homes. In the subsequent centuries, coal replaced wood as the fuel of choice, as wood became scarce. Coal necessitated a narrower flue to make for a better draw on the fire. The smaller flues, however, created a problem for their maintenance and cleaning. The prevailing solution of the day was to use children for the job, boys and girls as young as five years old. Some children were sold by their families, others were stolen. To get the children to do the job of cleaning the inside of the chimney, any number of persuasions were used: the promise of plum pudding when they reached the top, setting straw on fire below them, or pricking their feet with a sharp object. What was called chimney sweep apprenticing back then, we would today call human trafficking. In nineteenth-century England, duping or forcing children of insecure status into sweeping, albeit beyond public view, was fairly common practice. Ironically, at this same time slavery was being abolished in England. The young sweeps suffered from “chimney sweeps’ cancer” (scrotal cancer)—the first occupational cancer—and other ailments.[17] The practice of using children to clean chimneys was finally abolished in 1875; the unfortunate death of a twelve-year-old boy, George Brewster, who got stuck in a flue and was smothered by ash, provoked enough outrage for the British Parliament to pass new legislation. Master chimney sweeps, opposed to any regulation of their vocation, decisively lost a battle over regulation that had been ongoing for close to a century. They would now be strictly monitored by police, who granted permission to work—a late nineteenth-century form of work permit.[18] Donkin comments, “This was not the first time, nor the last, that society and legislators would respond either slowly or inadequately to the social upheavals resulting from technological change.”[19] Coal seemed to be an improvement over wood for burning in fireplaces, but the other associated costs, unforeseen by many, tarnished and sooted progress. The words of Niebuhr again ring true: progress for better and for worse.

The regular job, as seen with Darby’s ironworks in the early 1700s, steadily became more commonplace as the industrial era made headway. Mill and factory workers, meatpackers, railroad workers, refinery workers, miners, and various types of machinists combined to make America the world’s industrial leader as the nineteenth century came to a close.[20] The masses came, and they came to labor. Work was becoming ascendant; it was what Americans did. The English economist and historian R. H. Tawney made the claim that Puritanism—with its emphasis on thrift, frugality, and industry—was indispensable in making America the economic colossus it became. As Rockefeller and Carnegie made their hundreds of millions and the masses labored, the Protestant work ethic became secularized. The history of civilization has witnessed various times and places where people sought out fortunes. These fortune seekers, however, had always been a minuscule minority. As the Gilded Age reached its apogee, seeking one’s fortune was extended to a much larger congregation. This was good news: many were fed, clothed, sheltered, and blessed by economic growth. But the dark side of economic ascendancy was always there. In the words of Tawney, that dark side was “the uncritical worship of economic power.” He further described it as “the assumption, accepted by most reformers with hardly less naivete than by the defenders of the established order, that the attainment of material riches is the supreme object of human endeavor and the final criterion of human success.”[21] The secularization of the Protestant work ethic meant that salvation, once found by a prudent and pious lifestyle, was now found in secular work. Performed with devotion and dedication, work would bring money, which was the means of salvation in the new secular order.

 

Brave New World of Work

“Work was a divine gift and those who refused it were sinners.” Thus Richard Donkin describes the secularized Puritan philosophy that was a guiding principle for Chicagoan George Pullman.[22] His worker town in the 1890s for his railway sleeping car company was ambitious and well planned; ultimately, however, it was manipulative, paternalistic, and doomed. Pullman’s sense of religious business philosophy was based upon the writings of early English Puritans John Dod and Henry Cleaver, who held that the duty of vocation opposed the sin of idleness, and that the poor were at times responsible for their own poverty. True enough in certain circumstances, but in Pullman’s hands these ideas made for a worker town that was ruled autocratically. Pullman notoriously profited on almost every transaction conducted in the community, from water and gas usage to rent—homeownership was not permitted. In Pullman’s defense, he realized that a direct correlation existed between worker contentment and rising productivity in the workplace. The town of Pullman boasted parks, schools, a boathouse, recreational access to Lake Calumet, and indoor toilets in all the company homes. His worker town was not the slum settlement of nearby Packingtown, which housed the proletarians working the Chicago stockyards; by comparison it was paradisiacal. With the economic downturn of 1893, Pullman slashed worker wages without reducing housing rents, which led to a strike. The experiment was over, and Pullman was denounced. Hundreds left the worker town and its end was ensured. Pullman, who lived in a lakeshore mansion, died in 1897. Upon interment, his coffin was encased in thick concrete lest any of his legion of detractors desecrate his grave.[23] His experimental city was eventually annexed by the city of Chicago.

Henry Ford’s assembly line changed work and the world. Ford had a vision, and he followed through on it with alacrity. He wanted common folks to be able to own automobiles. Like Edison and the light bulb, Ford was not the inventor of the car, but he was the one who made it popular by making it affordable. His enterprising assembly line enabled increased production, reduced price, and increased wages for his employees. In 1909, before assembly line construction, Ford’s Model T cost $950. In 1914, the first year of assembly line construction, the cost decreased to $490, with production at 230,788 cars. In 1916 the Model T sold for $360 with 585,388 produced. Ford adapted a line process used by Chicago meatpackers to disassemble carcasses. His workers, previously generalists, now became specialists, concentrating on one or two tasks of the assembly process. Profits were excellent; Ford was able to pay line employees an unheard-of five dollars per day for their eight-hour shift (six days a week), doubling their previous wage. This was at a time when most industrial employees were garnering eleven dollars a week for a nine-hour daily shift.[24] One winter during the 1920s on a visit to Ormond Beach, Florida, Ford met a fellow business titan from the previous generation. It was the aged Rockefeller himself, the two grasping hands in immediate mutual admiration.[25] Can we guess at their initial exchange? Thank you, Mr. Ford. No, thank you, Mr. Rockefeller.

In any discussion of Ford’s assembly line, Frederick Taylor’s influence must be acknowledged. An American born in 1856, Taylor is regarded as the father of scientific management; he specialized in studying worker movements, with the express goal of attaining greater efficiency. Taylor benefited from the recent refinement of an earlier nineteenth-century invention: the stopwatch.[26] In 1881, at Midvale Steel Works in Philadelphia, Taylor made his first stopwatch timings to evaluate worker efficiency. Taylor was employed at the mill, having started in 1878 as a laborer and machinist. Within a decade, after working as a factory supervisor and carrying out further evaluations, Taylor became one of the first of a new breed, the management consultant.[27] His work was extremely influential and the term Taylorism was coined to describe his philosophy and recommendations—the goal of increased worker efficiency attained by repetitive movements and enhanced managerial control. His 1903 publication Shop Management sold well and helped establish his reputation. However, not everyone was taken in by the new “science.” Historian Charles Morris describes Taylor as a narrow-minded, obsessive, and hard-driving plant manager who was better at tool design than he was at management. He also portrays Taylor’s publication as a “splendid example of sham science and spurious specificity run riot.”[28] Taylor’s penchant for lacing his prose with formulaic explanations might have looked impressive in the early 1900s, but it has not stood the test of time. Taylor did contribute to worker and industrial efficiency, but he also helped further worker alienation and degradation. Most any worker was capable of performing a prescribed, repeatable physical movement; the monotony of the assembly line challenged workers to use their brains in inventive ways to combat the novel drudgery.

No Ford assembly line would have been concocted in 1914 without the influence of Taylorism. Donkin makes a striking and controversial assertion:

Taylorism and Fordism transformed factory working so completely that the systems together must be viewed as perhaps the most enduring societal change of the twentieth century, arguably more influential and wider ranging than the competing ideologies of fascism and communism, although the destructive impact of these political ideologies is seared far more deeply on the collective memory of the human race.[29]

What the assembly lines produced was and is unquestionably staggering. Ford transformed the twentieth century, purposely making a vehicle that his own employees could purchase affordably and use. Mass production of the automobile further hastened the age of consumer consumption that has defined this society from the 1870s onward. Because of the automobile, people had more choices. Train travel was available, of course, but car travel was new and specific to one’s liking; it gave an individual or a family fresh experiences. Car owners could purchase necessities from different locales now within driving distance, they could go to a different church, they could visit different areas of the state and country where people did things in their own peculiar ways. Places about which people previously had only read and heard now could be seen with their own eyes. Their cars took them farther away from home than did their horse carriages, if they had even owned one. Also, people did things in cars that perhaps they weren’t free to do at home—drinking and other types of indulgence. The world got bigger and smaller and racier thanks to the assembly line.

The assembly line, for all the good changes it brought about, didn’t maintain its innocence all that long. It was challenged and judged by popular and literary culture in the 1930s. Aldous Huxley’s Brave New World and Charlie Chaplin’s Modern Times attacked the monotony and tediousness of the assembly line along with its overall dehumanizing aspects. Huxley was especially unrelenting with his various premises and plot arrangements. The Gregorian calendar year in his novel is realigned according to the year Ford started selling the Model T; the year AD 2540 is renumbered 632; anno Domini is replaced with “the year of Ford.” Ford is venerated as a deity; short reflexive prayers are offered: “Thank Ford.” What were once Christian crosses have been modified; without its top vertical rise the Latin cross resembles a T. Huxley’s dystopia recognizes the changes that the Model T’s assembly line ushered in: mass production, consumption, homogeneity, and predictability are revered values. Constant consumption is the bedrock of society; “Ending is better than mending” encourages people to throw out the old and buy new! The Bureaux of Propaganda and world controllers, like His Fordship Mustapha Mond, make sure that things continue to move forward according to function and expectation. The stratified and controlled society is akin to a grand assembly line; the kingdom of scientifically controlled work has risen in the brave new world. Keep at your task, and if it gets monotonous, just take some soma.[30]

Chaplin’s famous character the Little Tramp works an assembly-line job in the 1936 release Modern Times. As if the assembly line job is not enough, the Tramp’s boss has him serve as the first-time subject of an automated feeding machine, designed to make lunchtime more efficient. The machine turns out to be an abject bust. The combined experiences drive the Tramp over the edge. He enters the assembly line conveyor belt apparatus, eventually ending up in the hefty gears of the factory’s machinery. This, the most famous scene of the film, metaphorically portrays the plight of the modern worker in the early twentieth-century assembly-line world: chewed up by the changes and spit back out.[31] The plant manager’s surveillance cameras reveal everything the workers do on company time, but they fail to disclose the workers’ sense of despair. But the Tramp, as always, survives and perseveres. The assembly line is ascendant, but the spirit of the worker is not to be subdued.

The assembly line fostered mistrust and resistance among workers toward management—in the search for industrial efficiency work began to lose its meaning.[32] Chaplin aptly portrayed these realities in Modern Times; some one hundred years after Ford’s assembly line, that reality continues throughout the world wherever workers serve the doctrine of maximized profits at the cost of their own dignity and safety.

 

One Hundred Years of Déjà Vu

Naomi Klein’s book No Logo depicts, among other things, the plight of workers in the so-called Third World, not working assembly lines but working in sweatshops. Though she wrote in the late 1990s, not much has changed about Third World sweatshops, except for their venues. Her book remains an astute critique of this particular part of international commerce; most of what she wrote about the sweatshop industry still applies. Many sweatshops, like the Chicago meatpacking industry at the turn of the twentieth century, consist of deplorable working and living conditions. Klein helped expose the shameful conditions of some of the sweatshops that supplied the apparel and shoe industries. A typical arrangement included most if not all of the following elements: A manufacturer arranges with a country to set up shop, usually tax-free for the first number of years of the agreement. Local workers, strictly nonunion, are hired, fulfilling the promise of new jobs created. Workers are housed, by requirement, in company-owned housing, where they pay room and board back to the employer. These workers are often young women, typically more timid and pliable than young men. If the host country advocates worker rights too forcefully or withdraws certain concessions, like tax breaks, the manufacturer-employer threatens to up and leave the country—moving equipment and starting the same process in another country. An important realization is that the big companies—Reebok, Apple, Nike, Polo—do not own these manufacturing companies. They contract with them, in effect releasing themselves from some sense of responsibility in cases of worker abuse or violations. The manufacturers, however, follow the big companies’ production instructions precisely. They are not producing solely for the open market. Indonesia, Honduras, China, Mexico, Vietnam, Bangladesh, and the Philippines have been primary locations for export processing zones, where many manufacturers house together various production capabilities. Klein documents the Cavite Export Processing Zone in Rosario, Philippines, a 682-acre walled-off industrial zone home to 207 factories that focused exclusively on export production. Her visit occurred in 1997. As in Chicago’s Packingtown, workers lived just outside the production zone. The fly-by-night workshops, closely crammed together and made of cheap plastic and aluminum siding, were windowless; entry to the production zone was monitored by armed guards, who checked worker IDs. The hustle and bustle typical of a Philippines town (in 1997, Rosario had a population of sixty thousand) was not permitted inside the production zone, as buses and taxis, upon entry, were required to slow down and refrain from using their horns—in the developing world a taxi without a horn is like a pig without a squeal. Cavite, Klein says, felt like a different country. The tax-free economy, isolated from the local governments of the town and its province, was like a “miniature military state inside a democracy.” First World consumers, buying golf shirts, sneakers, and iPads made in the aforementioned work zones, typically feel justified in their purchasing decisions, believing that workers in developing countries have the opportunity to work and participate in the worldwide market: production and consumerism as the two sides of the same coin of salvation.
If, however, the working and living conditions of today’s export zones are just as deplorable as those denounced by Sinclair, Pulitzer, Riis, and Steffens one hundred years ago during the Second Industrial Revolution, the same problem simply and sadly has been exported and perpetuated. Klein reported that workdays in the various production zones were long, ranging from twelve to sixteen hours a day; child labor was a constant problem, and, unsurprisingly, wages were low.[33]

In the fifteen years since Klein wrote, not much has changed. Foxconn, the Taiwanese multinational electronics contract manufacturer, makes Apple products and video game and computer components. Foxconn has numerous facilities in China. Low wages, worker dormitory suicides, and poor working conditions have hounded Foxconn for years.[34] In Bangladesh, an eight-story garment factory building collapsed in April 2013, killing more than one thousand workers.[35] Not much changed? Maybe things are actually getting worse.

Many manufacturing jobs that used to be in the United States have been moved to these and other foreign locations because of the savings from lower worker wages and the dearth of benefits. Large-scale, worldwide competition has demanded the shift. Manufacturing jobs in the United States peaked in 1979 at just under twenty million; that total has been in slow decline ever since, decreasing to less than twelve million by 2011.[36] The minimum wage in the United States, as of 2012, was $7.25 an hour. Meatpacking is one manufacturing job that has had a long history in America. A number of meatpacking jobs pay better than minimum wage, but as it has been for well over one hundred years, the jobs inside the plants are dangerous and undesirable. As has been typical in the history of commercial meatpacking, the jobs have been filled by immigrants. For the last thirty years or so, packing jobs in the United States have been performed principally by Mexicans and Central Americans, documented and undocumented.

In the early 1980s, coinciding with numerous civil wars and conflicts in Latin America, nonunionized Latinos began to dominate worker rolls in the meatpacking industry. Innovations in the industry during the previous twenty years allowed plants to hire workers with less experience and limited language skills. The guiding principle of the innovations was to make the job tasks less dependent upon skill. Sometimes worker tasks were created that required only a single cut made thousands of times during a shift. It was Taylorism ramped up all over again. The packing industry left Chicago and other major markets where unionism was strongest to relocate in rural spots in Iowa, Nebraska, and Colorado, closer to farms and animal populations.[37] The pendulum was swinging back against unions; the percentage of unionized workers in the United States has steadily declined since the mid-1980s (from 20.1 percent in 1983 to 11.8 percent in 2011).[38] Meatpacking had been a job that supported middle-class families in previous decades when unions were stronger; now it has become an exclusive partnership between mostly powerless immigrants and leveraging proprietors. With diminished union representation, wages were low (40 percent lower in some cases), and health insurance and paid vacation were offered only after a probation period of six months to a year. Turnover rates were extremely high (100 percent yearly rates not uncommon), but employers didn’t mind, because not having to pay insurance and other benefits made their businesses—on paper—more profitable. In addition, a workforce with a high turnover rate is much less likely to unionize, and, as with the younger and mostly female workforce in export zones, more likely to be reticent. This has been the state of affairs in the meatpacking industry since the early 1980s. High turnover rates are bad for the workers and their families because of the instability they create, but the high rates are also bad for the communities where workers live, due to increased medical costs that are transferred to the community when workers lack insurance coverage. Transient populations are more susceptible to drug use and crime. Established taxpayers in those meatpacking communities rightly protest the misuse of their tax dollars; but their ire should be directed at owner-employers (often unaffected nonlocals) who are essentially using public funds to subsidize what they should be paying their workers in wages and benefits.[39]

One thing that hasn’t changed since commercial meatpacking started is the danger of the jobs inside the slaughterhouses. We Americans like a cheap Whopper or McRib, and meatpackers subsidize it for us with their arms, legs, fingers, and sometimes brains. It is America’s most dangerous job. One would think that since the time (1906) of Sinclair’s exposé, things would have become much safer for meatpackers, due to technological advances in tools and equipment and the implementation of modern safety measures. Not so—the main reason is profit margins expressly related to speed, the speed of the disassembly line. Annually 25 percent of meatpackers in the United States suffer injury or illness requiring more than minor medical attention. As the line speeds up, so does the injury rate.[40]

At a Hormel subsidiary slaughterhouse in Austin, Minnesota, one of the line tasks performed was hog brain harvesting. High-pressure hoses forced the brains out of pig heads at the rate of one every three seconds—almost 1,300 heads per hour. Ten years earlier, in 1996, the same plant processed 900 heads per hour. (The pink slurry was shipped to Asia where it was used as a thickener in stir-fry.) Some of the workers manning the pressure hose developed a neurological disease that exhibited itself in headaches, dizziness, loss of motor control, and in extreme cases, temporary paralysis. An eventual diagnosis from the Mayo Clinic neurology department determined workers were suffering from a form of neuropathy. Porcine and human neurological cells are quite similar. When inhaled by workers, aerosolized pig brain cells caused the production of human antibodies, which in turn destroyed some of the nerve cells of slaughterhouse workers. This explanation seemed to make sense, but the employer countered that the disease had not existed in the ten previous years of brain hosing. As it turned out, the culprit was line speed. The disease symptoms had not been noticed until the speed of the line pushed toward 1,300 heads per hour. A worker who was permanently injured, no longer employable at the plant, received a one-time settlement of $38,600—mere pennies on the hog head. Ultimately, the slaughterhouse ceased pig brain harvesting. Whether that decision was made for financial or humanitarian reasons was not disclosed.[41] Maybe the buyers of the pig brain slush started to use corn starch to thicken their stir-fry.

 

The Secularization of Puritan Thought

Slaves, immigrants, and migrants busting their tails as workers: it’s not exclusive to American history, but it’s foundational. Africans picking cotton, Chinese laying railroad track, Mexicans picking fruit and vegetables, following the harvests; these are just a few examples of people coming to this land to work, some by their own volition and others by coercion. This is the land of opportunity; work is the true religion of the land. The benefits of work—a sense of accomplishment and community building, pay, and the utilization of one’s physical strength, dexterity, and intellectual ability—are the promises met for the one(s) adhering to the religion. These are good promises and worthwhile goals, and they have served the republic well. If you work hard, you can advance and prosper. But as we well know, there are always those who are not true believers, those who do not adhere to accepted practice and opinion. They’ve been described with various monikers—as lazy, unproductive, and on the dole; as sinners, welfare mommas, deadbeats, and slackers. The Puritans set the framework for this understanding, dating from days prior to the American Revolution. The Puritan solution to poverty, based upon their idolization of industriousness, was diligent labor. The poor had caused their own poverty as a result of their own idleness and propensity toward other vices. Idleness was chief of all vices in the Puritan worldview. Consequently, charity toward the poor was seen to be sinful, because it enabled the poor to continue in their debased idleness.[42] This idea, having its roots in the Protestant work ethic, has been secularized in today’s American society. Drive by any major intersection of a prominent American city where someone is soliciting assistance—food or money—and the following is what you’ll notice: A few drivers, stopped by the red light at the intersection, might be helping out the solicitor with food or money; the overwhelming majority of drivers will be staring straight ahead (or at their phones) as if the solicitors are not there. I’m not judging, per se; I often practice the latter of the two options I’ve described. My point: if commerce is our collective object of greatest importance, then those who are not participating—homeless or shifty beggars—are either ignored or despised as a result of their nonconformity. In a religious understanding of work and commerce, those who are not active participants—welfare recipients, slackers—are the unredeemed on the road to condemnation. Because of the choices they have made that contribute to their current state of nonproductivity (idleness), they are getting what they deserve. For those who are true believers, that is, active participants and working conformists in the system, the act of helping the poor is enablement, as it was for the Puritans. This mindset—based on a specific Christian interpretation—is alive and well in the United States today, having been dominant for some thirty years. Originally religious, the conviction has been thoroughly secularized: if you work and play by the rules, then you eat. On the other hand, if you don’t work and play by the rules, tough luck for you and your family. You are consequently banished to hell on earth (as long as you refrain from working).

This secularization of early Puritan thought contains a certain beneficial and practical truth. The hedge against idleness didn’t start with the Puritans, but has been around for a long, long time. It’s basic common sense combined with the principle of survival of the fittest, or survival of the least idle. The Christian New Testament records the words of the apostle Paul: “We were not idle when we were with you . . . Anyone unwilling to work should not eat.”[43] The best interpretation of any text—religious or otherwise—pays close attention to the context in which it was written; the apostle was referring to a small, nascent community that needed everyone’s best effort and cooperation to ensure survival and growth. No one was allowed to lie around idle while there was work to be done! The Little Red Hen agreed with this sentiment; her folk tale is Russian in origin. With its deep connection to the collective human story, the value of personal initiative is one that Americans have passed on to the next generations from the beginning of colonial life. This value, also expressed as “Idleness is the devil’s workshop,” is laden with religious innuendo; its importance is woven into American identity and purpose. There is profound and pragmatic goodness in this shared societal value; like the idea of good and evil yetzer, two related issues—work and hunger—combine forces to play off each other and serve the common good. Americans embrace work because it keeps us from going hungry. My neighbor also needs to work to keep from going hungry. Everyone needs to do their part, and all participants will consequently share in the bounty. Those who don’t do their part suffer their own consequences. It’s an integral part of our social contract.

As with most values, the “work or don’t eat” value has a balancing principle. Just as plants need rain and sunshine in the right balance, a value without its counterbalancing principle leads to overkill. The balancing principle for “work or don’t eat” is compassion. Compassion is able to see that some people, including children, don’t eat (or don’t eat well) because of circumstances, some of them outside of their own control: famine, disease, corrupt governing bodies, poverty, injustice, family dysfunction. The two principles of self-reliance and compassion keep each other in check; when they work together a broad and flexible balance can be maintained. When one or the other dominates, inequalities result and societal stability is threatened.

Another factor that mitigates the “work or don’t eat” value is its susceptibility to racism. A majority of white Americans, shunning idleness, have prized the virtues of self-reliance and frugality for generations since the early days of the (white) Puritans. They could do so since, for the most part, playing by the rules has paid off for them. These virtues have worked for a majority of white people in this society. Blacks, Latinos, and other minorities have not always had the same positive experience, and there is ample anecdotal evidence in support of this claim.[44] Alternative perspectives are acknowledged in a democratic society; allowing one perspective to dominate all others is perilous. Interplay and interchange between different angles and various perspectives is much more organic and true to the way the world is now and always has been—ideological wrangling is more akin to the marketing ploy that one size fits all. Those who are able must work to eat and prosper; there are, however, multiple mitigating and correcting factors to that general rule. Reality, the further one delves into it, tends toward complexity more than monochromatic ideology or simplicity.

The balance between self-reliance and compassion has been skewed for more than thirty years in the United States, since the late 1970s. The United States dominated the world economy after World War I; much of Europe was decimated and the United States, with gold reserves bulging, didn’t have much competition as the supplier and convener of reconstruction. The US economy was also dominant after World War II; the Marshall Plan helped to officially promote American economic supremacy in a world market where American manufacturers and suppliers lacked stiff competition. This lack of competition led to healthy corporate profit margins, which were taxed significantly in the 1950s and, among other things, helped build the US interstate highway system and broaden the social safety net. New Deal influences still held sway. The economic playing field, however, eventually leveled out during the 1960s as Europe and Japan started to catch up.[45] At this same time, the United States was implementing, with great growing pains, civil rights enactments and was being torn apart socially because of its involvement in the Vietnam War. President Lyndon Johnson’s War on Poverty, which continued New Deal objectives, shows the last gasps of an era that espoused some sense of social egalitarianism (indirectly related to economic egalitarianism). In the 1960s a figurative war on poverty was still permissible. In the decades to follow, however, the tables would be turned and attacks would be waged on those living in poverty, along the lines of the secularized Puritan ideal of work as salvation and idleness as condemnation.

The increased competition in global economic conditions led to decreased profit margins in the United States. This was only natural; for American corporations to treat the post–World War II years as if they were the norm was unrealistic. The ongoing desire to maintain that previously attained profit margin, however, along with other factors, contributed to a reworking of the American social contract: less emphasis on social policy and greater emphasis on fiscal policy.[46] Two important factors buttress this claim: from 1980 on, homelessness in the United States has increased significantly, and worker wages, adjusted for inflation, have essentially remained flat. (Disposable income for nonsupervisory workers in the United States peaked in the late 1960s.)[47] The reasons behind the first factor of increased homelessness are many and varied, from deinstitutionalization of mental health systems to the failure of large public housing projects, to veterans of the Vietnam War and other subsequent military conflicts falling through widening safety-net cracks. The reasons for stagnant wages, the second factor, lie in an increased stratification of American workers, globalization’s pursuit of low-wage earners, and, in the United States, the thirty-year drift away from egalitarianism as a social value. As American society has given more emphasis to profit making, its rich have gotten richer, its common workers have gotten poorer, and more people, including children, live on the streets. Presidents Reagan, Clinton, and Bush (II) all oversaw welfare reform during their administrations. Reform is usually a good thing in a world of changing context and reality. Each reform in its own way moved American society farther away from its New Deal aspirations of the 1940s, ’50s, and early ’60s. In fairness to those three presidents and their supporters, however, welfare reform was seen to be a necessary moderator to welfare dependency. The pendulum does swing both ways.

In 1959 the US government published its first national poverty rate. It was 22.4 percent. This rate was a significant improvement over the rates estimated for earlier decades, according to researchers. Estimated poverty rates for the 1870s hovered over 60 percent; for the first decade of the 1900s they were on either side of 40 percent; and during the Great Depression they approached the higher rates of the 1870s. Great Compression–era economic egalitarianism was beneficial to a majority of Americans: from 1939 to 1959 poverty decreased to the point where 60 percent of American families earned enough income to be lifted out of poverty. (This marker peaked at 68 percent in 1969 and has been falling ever since.) That first published poverty rate of 22.4 percent represented colossal progress for the twenty years that had passed since the Great Depression. Most minority families were left out of that progress, however. Consequently, Johnson’s War on Poverty was egalitarian in that minorities (and poor whites) were to be lifted up economically through greater educational opportunity and health care. The legislation was effective: the US poverty rate hit its lowest all-time mark of 11.1 percent in 1973. This was amazing progress not only for the one-hundred-year period leading up to 1973 but especially for the thirty or so years coming out of the Depression. But then social policy took a backseat to fiscal policy. The memory and reach of the Depression’s dark imprint were waning; compassion’s half-arc swing on the pendulum had maxed out. The poverty rate jumped up to 15.2 percent by 1983.[48]

The unification of big commerce—represented by Wall Street—and government is the clearest manifestation of the shift from social to fiscal priority that began in the early 1980s. We already saw what happened in the 1920s when the third wealthiest American at the time, banker Andrew Mellon, served as treasury secretary. In the spirit of Alexander Hamilton, Mellon subscribed wholeheartedly to the ideal of profit generation as society’s primary priority, accepting social and economic inequalities as a necessary by-product. This is a legitimate strain in the history of this society; it is, however, extremely vulnerable to abuse, especially when its counterbalancing principle of egalitarianism is diminished. Mellon was unable to tune out profit’s siren call. Undoubtedly with someone like Mellon in mind, The Nation magazine, in 1933, intoned: “If you steal $25, you’re a thief. If you steal $25,000, you’re an embezzler. If you steal $2,500,000, you’re a financier.”[49] I say again, 1929 sounds a lot like 2008.

 

Keynes versus Friedman

The pendulum does naturally swing; the New Deal era—generated partly in response to the excesses of Mellon and the 1920s—wouldn’t last forever. The context from which it was born was transformed and left behind; new influences would offer perspectives for a new context and day. The dominant economic system coming out of the post-World War II era was based on the ideas of British economist John Maynard Keynes. Keynesian economics, formulated prior to the Depression but also forged through its experience, held that the economic future, based upon the interactions of erratic human beings having limited knowledge, was fundamentally uncertain. Before Keynes, classical and neoclassical economic theories were more optimistic about human behavior as it played out through the market’s control of prices via supply and demand. (Mercantilism—which was heavily tariffed, crown or state run, and the cause of numerous wars—shaped the economic ideas formulated by Adam Smith, the father of classical economics.) The market economy was understood to be self-righting if left to its own devices. The market, the new guiding authority, was a better option than self-interested monarchies and countries. As the nineteenth century and its ambitious progress churned forward, this new understanding and practice of economics was superior to what had existed previously. The impressive wealth of the Gilded Age was the culmination (and for some, the continued promise) of what open markets could accomplish. The Depression, however, significantly challenged that construct; Keynes’s theory, with its built-in suspicion of unfettered market economies, became ascendant. In this view, government has the responsibility to spur economic growth when markets fail and, if necessary, to keep markets from becoming predatory. (The Bush bailouts of 2008 and those of Obama that followed, along with stimulus measures, for the most part represent Keynesian thought.)

Keynesianism is a reformation of sorts of classical and neoclassical economic theory and practice. Keynesianism is realistic about collective human endeavor and its potential toward societal harm. The long-term effects of the Depression, although pernicious, are Keynesianism’s validation. As the lingering fallout of the Depression waned, however, Keynesianism as the sole legitimate understanding of economic reality also weakened. Its accompanying counterreformation soon followed, and this countervision was much more optimistic—or less worried, at least—about human interactions in the economic realm. Milton Friedman, the 1976 Nobel Prize winner in economics, was an advisor to presidential candidate Ronald Reagan in 1980. At that time, Friedman had recently retired from a thirty-plus-year teaching career at Rockefeller’s institution, the University of Chicago. Friedman was the high priest who resurrected neoclassical thought; he dogmatically advocated rolling back regulations, cutting taxes for the wealthy and corporations, and privatizing public enterprises. Just as Keynesianism is still influential in our day, so are the thoughts of Friedman. The battle between the two visions helps define the conflict between those who support neoliberalism, largely based on Friedman’s theories, and those who don’t.[50]

Neoliberalism, for our purposes here, is defined as the political-economic proposal and practice that emphasizes free trade, privatization, deregulation (or minimal government interference in commerce), low tax rates, and reduced or minimal publicly funded social services. Friedman’s ideas were crucial to the development of the neoliberal vision. These ideas have been put into practice since the mid-1970s; proponents include Ronald Reagan in the United States, Margaret Thatcher in the United Kingdom, and Augusto Pinochet in Chile. Liberalism itself refers to the market theories advocated by Adam Smith in 1776 that signaled the transition away from mercantilism. Neoliberalism is therefore the new understanding and manifestation of classical market theories, updated for the postindustrial age. There is, however, one important addition: the political component. Harkening back to the days of the Gilded Age when government and big business sparred for control of the country’s soul, neoliberalism joins that historic struggle on the side of big business. Though typically aligned with the Republican Party in US politics (Friedman was an ardent Republican supporter), Wall Street and big business have been just as friendly with the Democratic Party since 1980. Presidents Bush, Clinton, and Obama have done little to dampen the cozy relationship between the White House and Wall Street that gained renewed traction during the Reagan years.

Alan Greenspan, US Federal Reserve Board Chairman from 1987 to 2006, served by appointment under both Democratic and Republican presidential administrations. While a sense of bipartisanship might be construed from his continued appointments (five) by presidents of both parties, another view needs to be considered: mutual allegiance to an agreed-upon goal of ultimate importance—making money. Greenspan’s philosophical apprenticeship at the feet of ideologue and free market fundamentalist Ayn Rand is well documented; Greenspan parlayed her philosophical dogma into an economic one. Markets are to be left to their own devices; this philosophy was deemed best not only for general economic growth but also—and still to our day—undeniably best for financial elites to grow their already substantial holdings. Friedman is classified as a strict monetarist (the supply of money being the determining factor of economic variables within a system). Focusing on money supply, Friedman advocated a laissez-faire approach to and within the market. Greenspan appreciated Friedman’s monetarism and shared his faith in an unfettered market, but he chose also to mediate the market in a way with which Friedman would not have been comfortable: Greenspan was interventionist regarding control of interest rates. For the better part of his chairmanship, he was an effective leader who was able to raise or lower interest rates (the benchmark federal funds rate) as needed to keep inflation in check while encouraging the economy’s growth. Greenspan was universally praised for his leadership. Perhaps he paid too much heed to the accolades and favorable headlines, however. In the last five years of his leadership, he did all he could to keep the economy chugging along at its feverish pace by keeping interest rates at record lows, thus making money cheap and available. He was the “Maestro,” after all, held in high regard by companies like Enron. The dot-com bubble had burst in 2000, yet housing sales kept the economy percolating along. But it turned out to be another bubble. During the first six years of the new century, house prices in the United States rose faster than at any other time in modern history.[51] In hindsight, we now know that this aspect of growth was distorted and manipulated. The economy that looked so great in the early 2000s was, in part, a sham. Greenspan shares a large part of the blame for this sham interval in the economy. Another part of the Greenspan legacy is that during his watch, the financial industry lost its original and proper role of being the servant of commerce.[52] Greenspan was party to the financial industry’s usurpation of other sectors of general commerce. From 1996 to 2005, the financial industry (including insurance) accounted for 7.5 percent of gross domestic product (GDP), an increase of more than 25 percent from the previous decade.[53] The 1920s saw a similar buildup prior to the Great Depression, when the share hit just under 6.0 percent, nearly a 100 percent increase from the late 1910s. Business booms from railroads to automobiles to housing to pharmaceuticals naturally increase the financial industry’s share of GDP. But, again, how much is enough? Even after the 2007–08 crash, the financial industry continues to carve out an ever-increasing chunk of the economy. As of 2010, the industry had reached an all-time high of 8.4 percent of GDP, employing some 6.5 million people.[54]

 

What Phil Gramm and Mr. Krabs Have in Common

Democrats and Republicans differ in a number of ways, but they are essentially united when it comes to uncritical acceptance of the ways of Wall Street. Yes, some tea party Republicans and left-leaning Democrats speak out against Wall Street excesses, but the large majority of both parties in Congress offer little in the way of protest or legislation designed to keep Wall Street from being its own master. The Dodd-Frank Reform Act of 2010 hasn’t made too many people happy, from those who wanted the act to have more bite to the majority of Republicans in Congress who didn’t support its passage. As we saw during the Gilded Age, when a large number of congressional members are themselves wealthy or beholden to those with wealth, significant challenge to the status quo is unlikely. Whereas 1 percent of Americans were millionaires as of 2010, almost 50 percent of US representatives and US senators were millionaires.[55] That in itself is not an indictment; but to imagine that one’s wealth does not affect the way one votes, especially concerning issues involving personal financial interests, is naïve. As an old Russian proverb says, “When money talks, the truth is silent.”

My son and I went to see The SpongeBob SquarePants Movie when it came out in 2004. He was twelve years old; the movie had something for both generations, as the theater was evenly populated by kids and their parents. SpongeBob is the main character in one of the most popular animated cartoons of the first decade of the 2000s. The series features a number of lead characters (all sea creatures); a handful of Internet bloggers revel in the alleged representation of the seven deadly sins in seven recurring characters of the show. Restaurant owner Mr. Krabs (who is, yes, a crab) is especially fond of money, both its procurement and its retention. In the movie, Mr. Krabs decides to open a second restaurant adjacent to his original one. At its grand opening, he confesses his love for money to an interviewer who asks what inspired him to duplicate his efforts. He answers instinctively with one word: “Money.” Sometimes it’s the jolt that comes from a change of scenery—in this case a cartoon—that helps one to hear the truth loud and clear. The implied mocking of Mr. Krabs’s greed drew one of the largest laughs in the theater that day; even children are able to recognize that the inordinate love of money skews a person’s—or a crab’s—perspective.

As with Andrew Mellon in the 1920s, the 1980s marked another time (continuing to the present) when Wall Street types or financiers served as secretary of the treasury in the US government. Donald Regan (secretary of the treasury, 1981–1985) had previously worked for Merrill Lynch; Robert Rubin (1995–1999) had been co-COO at Goldman Sachs; Henry Paulson (2006–2009) had been CEO of Goldman. Larry Summers (1999–2001) is described as having been mentored by Rubin; Timothy Geithner (2009–2013) has close ties with Summers and Rubin. Greenspan, Rubin, and Summers famously “saved the world—so far” according to a Time magazine cover article in 1999. The repeal of the landmark 1933 Glass-Steagall Act (separating the activities of commercial banks and securities firms; lessons learned from the crash of 1929) was heavily promoted by the three saviors. The repeal, the 1999 Gramm-Leach-Bliley Act, once again allowed commercial banks to engage in securities transactions and securities firms to “become” bank holding companies. Cosponsor Phil Gramm (Republican senator, Texas) echoed Friedman-like ideology upon its signing into law by President Clinton: “We have learned that government is not the answer. We have learned that freedom and competition are the answers. We have learned that we promote economic growth and we promote stability by having competition and freedom.” History, some ten years later, wasn’t kind to Gramm’s pontifications. Many commentators include the repeal of Glass-Steagall as another major contributing factor to the economic troubles that began in 2007.[56] Gramm seemed to have spoken eloquently at the bill signing, using words like competition and freedom. Truth be told, Gramm is not much different from Mr. Krabs—what he seems to be saying is that he too really likes money.[57]

According to Yale University economics professor emeritus Charles Lindblom, a major flaw in American democracy is that market elites have special access to political elites. It takes money, today more than ever, to win political elections. Consequently, political elites give ample attention to market and business elites (and their political contributions); there is a definite symbiotic relationship between the two. Presidents-elect, whether Democrat or Republican, now meet with corporate leaders before taking office; this practice is not something accorded other groups. It is an obvious affront to political egalitarianism. Is the trade-off worth it? Jobs and the predominance of industry are in the balance. Democracy is downgraded, in a sense, in order that the market can be at its most robust. The battle for control between commerce and government, waged in earnest since the days of Rockefeller, has tilted squarely toward big commerce for more than a generation. And to speak of the market and democracy as if they go hand in hand is disingenuous. Because market systems produce inequalities of income and wealth, Lindblom warns, they can obstruct democracy.[58]

The market system is more than an economic system, according to Lindblom. It’s a system that serves as a method of controlling and coordinating people’s behaviors. More complex than Adam Smith’s earlier conception of the market as consisting of individual actors following their own self-interest, Lindblom describes the market as people “tied together” through myriad social and commercial interactions. The market gives freedom, but it also constrains and constricts.[59] The next time you go to the grocery store and try to decide which crackers to purchase from a selection that might run up to forty-one brands and 139 varieties,[60] you’ll understand that both Smith and Lindblom are describing market realities. Was it the feeling of freedom or constriction that made you walk out of the store with not one, but three or four varieties of crackers?

The market system is simply the best and most detailed organizer of social cooperation that has existed in the history of the world. More than two billion people worldwide cooperate as they drink their morning coffee made from Colombian beans, have a technical support call for their laptop computers answered in the Philippines, or go out to enjoy nightlife dressed in Italian leather shoes. This organization or grouping of participants is the world’s largest, besting both the Roman Catholic Church and the nation of China. Only the two gender groups within the human family are larger. Yet, Lindblom cautions, the market system is not an autonomous entity. Government aid and support thoroughly prop up the market system. “If the market system is a dance, the state provides the dance floor and the orchestra.” Sociologist Barrington Moore generalizes that up until the nineteenth century or so, the best way for ambitious types to pursue power and riches was through force and violence. Alexander, Caesar, and Genghis Khan come to mind. In these last 250 years of world history, ambitious types have achieved great power and greater riches through the market system of economic exchange. The former method was generally violent, the latter—on the surface—generally peaceful.[61]

But as the saying goes: all good things in moderation. Rockefeller, Carnegie, and Morgan made their hundreds of millions thanks to the market economy, but their business practices were eventually challenged by antitrust measures. A healthy democracy allows for balance so that disproportionate powers are not able to dominate to the point where competition no longer exists. Big corporations, because of the power they wield politically and socially (the ability to offer jobs), have the potential to sabotage or bring an end to democracy. This is not a far-fetched assertion. Big corporations—even though publicly held and vulnerable to downward market swings—can be understood to be authoritarian systems operating within democracies. According to Lindblom, many big corporations “exercise powers inconsistent with democracy . . . and play the role of an oversized, greatly powered citizen.” Popular control of corporate (and other) elites—through the vote and the establishment of laws—is a defining characteristic of democracy. Democracy or “reverse dominance hierarchy,” which we saw earlier with primates who practice a form of egalitarianism, needs to be defended in this current day, not against communism or other types of economic organization but against corpocracy. Lindblom states it succinctly: “It is the large enterprises that pose obstructions to political democracy. Through their spending and their relations with government officials they exercise much more power than do citizens. Their power swamps the power of all but a few enormously wealthy citizens.”[62]

We’ve heard it said “that government is best which governs least.” There’s a lot of good sense in that statement—nobody needs overbureaucratized intrusions or expectations put upon their lives. But it’s a tough balancing act in this post-9/11 world, the best (or worst) example being airport security lines; we want safe airline travel, but we resent the extra waiting in line and the body and luggage checks. When Henry David Thoreau penned the above quote in 1849 (not original to him), the American nation was in many ways still in its formative stages. Built upon a foundation of antimonarchism, American society was more socially egalitarian in theory than in practice. Thoreau was particularly agitated by two things: slavery and the Mexican-American War. Because it resulted in additional land for the United States and increased the number of southern and slaveholding states, the war was a contributing factor in bringing about the Civil War a little more than a decade later. Civil Disobedience was Thoreau’s expression of disgust with his own government, which he saw as expansionist and oppressive. If Thoreau were alive today, would he feel more repugnance for governmental or corporate overreach? One imagines the early environmentalist would be more disgusted with the corporate malfeasance that has been rampant since the 1980s. And more than that, one imagines his rebellion would be squarely directed toward corporate entities motivated by one thing: profit.

Since the mid-1970s, Phil Gramm and many other free market types have been explicit in their embrace of neoliberalism’s concepts, throwing in Thoreau’s line (and giving it a new context), with freedom becoming code for making money. The government, co-opted and subdued, does best to stay out of the way. Regulations, impinging upon the freedom to make money, are as undesirable as a short sale on a rising stock. Yes, profit is to be sought, but when it’s the utmost goal of an individual or corporation, trouble is never too far removed. Historically, the profit motive has stood against regulation of child labor, excessive work hours, occupational hazards, and unequal pay for equal work.[63] Yes, governmental regulations sometimes inhibit profit making. But in a democracy, you and I do not have to live in subjugation to a corporate power that places profit making over other objectives. Government doesn’t need to get out of the way so that corporations can reap maximum profits above all other considerations; government puts a brake on unmitigated greed and keeps the playing field balanced. But government has a hard time doing that when it is beholden to Wall Street. Phil Gramm calls Wall Street a “holy place.”[64] Goodness only knows: Would Jesus ring the opening bell on Wall Street? Or, perhaps, the final bell?

Sandy Weill, the former CEO of Citigroup, was the most visible corporate proponent for the repeal of Glass-Steagall. The 1999 signing of the Gramm-Leach-Bliley Act saw President Clinton hand Phil Gramm the pen he used to sign the bill into law. Four others received commemorative pens from the signing, one of them being Sandy Weill. The pen, a small accoutrement, wasn’t enough, however, to commemorate Weill’s efforts. Weill also prominently displayed in his office a four-foot wooden plaque featuring his portrait with the descriptive title “Shatterer of Glass-Steagall.”[65]

In 1998 Weill, fresh off the merger and acquisition of his own Smith Barney Shearson with Travelers Insurance Company, was ready for further growth through consolidation. Even though Glass-Steagall legally separated insurance, commercial banking, and investment banking at that time, Weill had Citicorp Bank in his sights. Citicorp, headed by CEO John Reed, was one of the biggest banks in the world. A merger between the two equivalently valued companies was desired by both CEOs; Glass-Steagall, however, stood in the way. Weill set up a back-door meeting with Alan Greenspan, the Federal Reserve chair, who certainly would have had the power to deter the proposed merger. He didn’t. As a matter of fact, Greenspan was downright unconcerned about the brazen move, commenting to Weill, “It doesn’t bother me at all.” We’ve heard of so-called activist judges who try to change the law from the bench via their rulings; this was a case of an activist Federal Reserve chair essentially bypassing Congress concerning a law that had been on the books for more than sixty years.[66]

Many in the financial industry saw Glass-Steagall as a dinosaur of sorts. European banks were already offering multiple services (saving, investing, and trading); the American financial industry was at risk of falling behind the competition. This viewpoint was widely shared in the halls of political power in Washington, DC. Phil Gramm wasn’t alone—Chuck Schumer (Democratic senator, New York), Tim Johnson (Democratic senator, South Dakota), and Al D’Amato (Republican senator, New York) all stumped for the repeal of Glass-Steagall. President Clinton already supported the repeal; he had been informed in April 1998 by the soon-to-be Citigroup co-CEOs Weill and Reed of the inevitability that Citicorp and Travelers would merge. How kind of them to inform the president—a classic case of the tail wagging the dog. Robert Rubin, Clinton’s treasury secretary, was also on board with the proposed merger and the sidestepping of Glass-Steagall. Big commerce—Wall Street’s financial industry—and government were no longer merely joined at the hip; they were fused into one entity. Greenspan’s Fed, on October 8, 1998, gave Weill and Reed two years in which to operate legally as they waited for the repeal of Glass-Steagall. Reed says that they were assured (by the Fed) that repeal would happen, without any doubt. In October 1999, as the maneuverings for Glass-Steagall’s repeal were being worked out, Citigroup made a splashy hire: Robert Rubin. Just a few months removed from service as US treasury secretary, Rubin took on the role of “office of the chairman,” serving as go-between for the two new co-CEOs.[67] Rubin was paid CEO money (more than $100 million during an eleven-year span), and he outlasted Reed and Weill.[68] He didn’t see the crash of 2007–08 coming, however; perhaps all those shards of broken Glass-Steagall, representing the death of superfluous regulation, blinded him as they reflected the sacred light emanating from the icons of freedom and competition.

Weill didn’t make his money the old-fashioned way—by business innovation or new products or entrepreneurial skill. He made it by growth through consolidation (and cutting jobs) in the financial industry. Size was his great advantage, and consequently, he had more control of wages, costs, and prices. Weill helped bring about the era of “too big to fail.” In 1999, when the Citigroup merger was completed, the biggest ten banks in the country controlled 45 percent of all banking assets as compared to only 26 percent ten years earlier. For those in the largest of companies who were so motivated, pursuing short-term profits was a no-brainer. There were no constraints, except moral ones, against making as much money as unethically possible. Rockefeller and Carnegie would have approved.

Reed and Weill shared CEO duties at Citigroup for only a year and a half.[69] During that time, Weill introduced Reed to a new culture he wasn’t entirely comfortable with: get as rich as possible. Part of Weill’s previous pitch to Reed—when advocating the potential merger—was “we could be so rich.” Before the merger of Citicorp and Travelers, Reed’s largest year-end bonus was $3 million. After the merger, his year-end bonus was $15 million. He admits it was excessive; he was the same guy doing the same work that he had done before. He now deserved a year-end bonus five times what he previously had received? According to Reed, this was the culture “developed by Wall Street.”[70] “Just a little bit more”—literally—since Reed’s total compensation that year was $290 million. Weill’s was $225 million; they were the two highest compensated CEOs in the country.[71]

 

In the Market We Trust

Religious fundamentalists—Jewish, Christian, Muslim, and others—have in common, first and foremost, an adherence to a system of thought and practice that in their point of view emanates from the sole source of objective truth. This in turn leads to a dogmatic conviction of correctness; they refuse to have their ideas challenged, examined, or criticized. Atheists can also be fundamentalist in the convictions that define their rejection of deity; and as we’ve seen previously, scientists can be fundamentalist also. One other group needs to be added to the category: market fundamentalists. They believe that the Market will provide all that we need and that the Market, served by its main deputy, unfettered free enterprise, is all-knowing, all-seeing, and wholly (phonetic pun intended) worthy of worship and praise. American historian of culture Thomas Frank calls this conviction market populism. Market populism is the idea that markets express popular will better than democratic elections and that they are accessible to all, regardless of gender, ethnicity, or social status. Market populism is also the devotion that sees the market as more than the natural product of human endeavor or a phenomenal tool by which to shape society—the Market as a religious ideology.[72]

An important component to this article of faith is the idea that markets and democracy go hand in hand, as if they were a match made in economic heaven. Not so fast, according to Frank, Lindblom, and others. Political scientist Benjamin Barber has a long history of being able to differentiate between the two. “Democracies prefer markets but markets do not prefer democracies. Having created the conditions that make markets possible, democracy must also do all the things that markets undo or cannot do.”[73] Markets are mostly interested in profits; left unchecked, they produce the parallel results of the Gilded Age, the 1920s, and the 1990s and 2000s. Democracy and those who participate in democracy have the responsibility to keep the darker sides of the market—the siphoning upward of wealth and the power it can create for elites—in check. The Progressives at the turn of the twentieth century did so, as did the egalitarian spirit of the 1940s and ’50s. By contrast, few people today are willing to stand up to the darker edges of the market. Again, critique does not mean rejection. But when market fundamentalists come across a correcting or antagonistic word toward the Market, defenses are marshaled and the name-calling commences: “socialist,” “anticapitalist,” and the like.[74] As with a religious fundamentalist, a market fundamentalist doesn’t need to debate or converse; the conformist value from the 1950s (for older Americans) combined with the fall of the Berlin Wall in 1989—the emblematic demise of Soviet communism—still lingers and has fostered an environment where sober examination of market forces is unbecoming. Victory over communism has meant that, for some, the debate is over. Similarly, as we saw after both world wars, when the United States lacked economic competitors, temporary superiority doesn’t necessarily mean continued hegemony. Lindblom agrees: “The steady and indiscriminate overendorsement of these [certain] virtues is supplemented in every period by additional messages relevant to the culture of the time. At one time they taught the divine right of kings; in our time they teach the doctrinal correctness of capitalism.”[75]

With its market forces, capitalism is the economic system that is currently ascendant. It has benefited the peoples of the world over and again with the creation of wealth, the advancement and proliferation of foodstuffs, and the exchange of products. Undoubtedly, capitalism is the best economic system produced thus far in the history of the world; but it does have a dark side. Veblen identified “conspicuous consumption” as a god during the Gilded Age; it’s fairly easy to argue that since the 1970s that god has made a comeback that has not abated. Maxed-out credit cards, an overheated economy based on borrowing and spending, and a culture of propaganda and advertising that encourages continual consumption are defining characteristics of the current era. But buyer beware: in the United States we might have more people addicted to shopping than to drugs and alcohol. Free market proponents prefer to let individuals in a market society make their own mistakes—as opposed to a government or centrally controlled economy deciding what might be best for individuals. People inevitably will make bad choices; they can learn from these poor choices, and, equally important, their personal liberties are upheld in the process. The market chooses winners and losers, all done within the context of “freedom.” But the concept of freedom needs to be questioned. How free is someone who is addicted—to cocaine or shopping for shoes? When a person consumes because the pursuit of material possessions is desirable to the point of obsession—purchasing being the lynchpin of the continuous cycle—how much freedom exists therein? And for those who are not addicted to shopping or acquisition but live in consumer society—how do they determine whether their purchases are free of constrained motives?[76]

Marketing is mainly about creating desire for consumption. William Cavanaugh, professor at the University of St. Thomas in St. Paul, maintains that what primarily defines consumerism in the United States is not consumption but the pursuit or desire for consumption. As with an addict obtaining a drug of choice, the act of purchasing doesn’t ultimately satisfy. When I buy a new widescreen TV, I might have a week or two of excitement because of the purchase, but that feeling will pass. The cycle soon starts again, and the pursuit calls. Besides that, we live in an increasingly throwaway society. Yes, many of us recycle paper, plastics, aluminum, and tin cans. But in our brave new world economy, we’ll also buy a new DVD player because fixing the old one costs as much as or more than buying a new one. A consumerist, throwaway society is a great place to learn how to continually acquire and toss out more and more and more stuff. Huxley was prophetic: “Ending is better than mending.” Consumerism is a type of spirituality in American society; it is for some a primary way of achieving meaning and identity.[77] We are the hungry ghosts, and because we continually want just a little bit more, we are hooked.

 

Homo Economicus: Worker-Believer-Consumer

As I have argued, the combination of commerce, materialism, and consumption is our bread-and-butter religion in the United States. It’s a good religion for the most part; it has defined who we are as a people and society. This religion has fed us, clothed us, sheltered us; it’s our ingrained sense of purpose. It has modernized the world and lifted millions out of poverty. Its development coincided with the beginning of the industrial age; work and America have had a great partnership for some 250 years, and even longer, going back to the colonial days. That which conveys a sense of ultimate importance is one’s religion; work does that for Americans, because it is our means of materialistic consumption. Like any religion, however, this one has had its dark sides and difficulties: slavery, child labor, unfair and abusive labor practices, gender and racial discrimination. Answering the challenges of these injustices, it has adapted and modernized. Schools serve its purpose by educating successive waves of new workers; Americans meeting for the first time will ask each other the basic question: What do you do? Most of us know what the question is getting at; work is arguably the main form of self-identity in American society.[78] And as we saw before, if you can’t answer the question forthrightly, woe be upon you. Work is the basis of our religion and our identity, the latter enhanced and defined by what we purchase and consume.

Social ethicist David Loy describes, in ways similar to Tillich, religion in functional terms. A religious system, understood functionally, can center adherents by teaching what the world is and what their role in the world is to be. The value system of consumption—delayed gratification long ago usurped—dominates because we understand the world, in this postindustrial era, to be exploitable. Take a three-hour car trip, for example, with a group of young adolescents who have money in their pockets, stop an hour into the trip at a roadside quick mart, and see if they are able to suppress the urge—nay, compulsion—to buy soda or candy. I’ll bet you a bag of Sour Patch Jelly Beans that the majority of them can’t! Loy calls our present economic system our de facto religion, binding disparate peoples together with a common worldview and a set of values grounded in consumption. Consumption is of course a secular activity—yet we carry it out with religious devotion. Loy calls consumption the most successful religion of all time, winning more converts more quickly than any other previous belief system or value system in the history of the world. Work and consumption go together; the “theological” system that explains their workings is economics.[79]

Loy and others tell how economists (with support in the business world) in the last four decades have striven to make economics more scientific in its explanations. A cursory study of the development of economics in this time frame reveals its scientific aspirations (echoes of Taylor), as ergodicity (consistency in a system over time), natural rate theory (no permanent trade-off between unemployment and inflation), and econometrics have come to be assumed concepts. In practical terms, that there is a full employment rate, that value is always adequately indicated by prices, and that the market is ultimately fair are some concepts that recent economic theorizing assumes to be actualities. To make for a more scientific understanding of economics, certain variables of human behavior such as swindling, corruption, and overconfidence—Keynes’s “animal spirits”—have been deemphasized. Neoclassical economists (Friedman is the best representative) have made the case that since we now have a better (“scientific”) understanding of economics, we are safer from the ravages of downturns and negative cycles.[80] This type of thinking took a significant blow from the economic difficulties that started in 2007–08. It takes a quixotic faith to believe that all participants in the market, as claimed by neoclassicists, are making rational decisions based on purely economic motives. Humans certainly have the capacity to make rational decisions based purely upon economic motives; but as Niebuhr implied, humans also, without question, have the capacity for irrationality and malfeasance (not to mention indifference) in economic and other decisions. Economics will always be confined to the realm of social science; enlightened and studied, yes, but a blend of fact and opinion.[81] A Nobel Prize is offered in economics—something philosophy, religion, sociology, political science, and psychology cannot claim—yet to treat economic understandings as if they are “scientific” and unassailable is an act of faith.

Does our understanding and practice of economics—neoclassic style—fulfill a religious function? Loy answers in the affirmative with the following assertion:

The global victory of market capitalism is something other than the simple attainment of economic freedom: rather, it is the ascendancy of one particular way of understanding and valuing the world that need not be taken for granted. Far from being inevitable, this economic system is one historically conditioned way of organizing/reorganizing the world; it is a worldview, with ontology and ethics, in competition with other understandings of what the world is and how we should live in it.[82] (Italics mine.)

Economists George Akerlof (2001 Nobel Prize in economics) and Robert Shiller also answer affirmatively that our economic understandings and practices are quasi-religious:

This New Classical view of how the economy behaves was passed from the economists to the think tankers, policy elites, and public intellectuals, and finally to the mass media. It became a political mantra: “I am a believer in free markets.” The belief that government should not interfere with people in pursuit of their own self-interest has influenced national policies across the globe. In England it took the form of Thatcherism. In America it took the form of Reaganism. And from these two Anglo-Saxon countries it has spread.

Akerlof and Shiller make it clear that this belief is a type of religion with pitfalls intact. “Yes, capitalism is good. But, yes, it does have its excesses. And it must be watched.”[83] As a people, we’ve learned that governments, even democratic ones, are not to be trusted entirely: Watergate. Politicians must be watched; as a consequence, we participate in public life as citizens, protest if necessary, and vote. But to believe that markets are unblemished and infallible, or more realistically, that markets, blemished and fallible, are not to be interfered with, is to have made peace with the modern spirit of the profit motive before all other things.

Karl Polanyi, the Hungarian political economist, lived through the same societal experiences as did Paul Tillich: both were born on the Continent in 1886 and profoundly shaped by the events of two world wars. Polanyi’s main work, The Great Transformation, was published in 1944. It describes the economic and social transformations in the nineteenth century that were brought about by industrialism and the prevailing market economy. Although the work has been legitimately criticized over the years for an overly romanticized view of past cultures, its staying power has been its critique, by contrasting description, of modern market economies. “Instead of economy being embedded in social relations, social relations are embedded in the economic system.”[84] The profit motive of modern society has the potential to disrupt social relations. Again: how many of us have had a disagreement of a serious nature about money—loaned or owed—with a family member? These disagreements and disruptions have been around for as long as there have been families, to be sure. In our modern society money and economic status continually define relational values on their own terms. Ancient Israel, as we saw previously, used a system of jubilee laws to protect the whole community against intra-tribal exploitation. Whether they carried it out (mostly they didn’t) is not the issue; the expectation was put in place so that social relationships would be based on things other than economic status and wealth. Polanyi rightly argued that capitalist society has an overt tendency to define relationships, individual and communal, on its self-imposed foundation of economic gain. Homo economicus has evolved and so has her religion: if we so believe, we can all be rich. Is any other pursuit truly legitimate in this life?

 

The Unlimitedness Delusion

A delusion is alive and well in America; it is nothing less than the supreme myth of modern capitalism: economic growth is unlimited. And because growth is unlimited, everyone has a chance to attain riches. Social critic and author Barbara Ehrenreich spent part of a year as an unskilled worker, somewhat undercover, and in the process wrote Nickel and Dimed, her best seller that examined American prosperity from the underside. She worked as a restaurant server, hotel maid, cleaning-service employee, nursing home aide, and Walmart clerk. Sometimes she held two jobs at once to get by. When terminating her job with the cleaning service (where the cleaning crew labored in the homes of the well-to-do), she came out to her fellow employees, explaining her motive for the book project. In her confession, she asked her co-workers their opinions about the disparities between the wealthy, who have so much, and regular workers like themselves, struggling to get by. She was surprised by similar answers from two co-workers: for different reasons, both respondents were okay with the disparities. One, younger, said she was motivated to continue to work so that “someday” she could “have this stuff” too. The other respondent, older, said she wasn’t motivated to have the possessions of the well-to-do that they saw in the houses they cleaned. All she wanted was a “day off now and then . . . if I had to . . . and still be able to buy groceries the next day.”[85] If classifying this second respondent as somewhat agnostic toward our proposed religion—commerce, money, and the pursuit of possessions—seems fair, there shouldn’t be much protest against calling the first respondent a true believer. Grand disparities are tolerated because there is no perceived limit to the continually expansive nature of wealth available in capitalism; all, consequently, have the opportunity to pursue and attain it. A late-1990s U.S. News and World Report article on wealth disparity (contemporaneous with Ehrenreich’s book) stated it succinctly: “Americans don’t always love the rich, but they harbor the abiding hope that anybody can become prosperous.” It’s practically sacrilegious to opine that the American Dream of limitless opportunity is flawed.[86]

Where this myth most fervently thrives is on Wall Street, where money mostly serves itself. Because potential growth (so the myth proclaims) is unlimited, Wall Street types using money to make more money, absent any social purpose, are tolerated. There’s something American about someone being able to make a fortune while the rest of us stay out of that someone’s way. Rockefeller and Carnegie did it this way; and who’s to say that the rest of us can’t do likewise? As long as we aren’t inhibited by unnecessary regulations, restrictions, and redistributions, we can achieve even beyond our imaginations. This mentality helps explain how middle-class Americans come to tolerate and support what happens on Wall Street. The average American sees the evidence of government regulations in many matters of everyday life. Taxes are paid daily on transactions, at times quarterly and yearly to the IRS; permits or licenses are needed to drive a vehicle, construct a deck on the back of a house, fish, hunt, or buy a gun. Most of these regulations are tolerated, but then an overbearing regulation confronts us by making a mockery of common sense: to put up a lawnmower shed in my backyard, for example, I’ll need to pay fifty dollars to secure a permit from a paper pusher in the county office. In such a case, and in others like it, governmental ubiquity is glaringly cumbersome. While we understand that tax dollars help build roads and bridges, educate our children, and care for the elderly, we’re also susceptible (in this post-Reagan era) to the idea that regulation and taxation, sponsored by the government, sometimes go too far and are somehow inherently evil, even anti-American. Consequently, this same line of reasoning is applied to Wall Street and how it operates. We assume the local dynamic (overbearing regulation) we sometimes experience must operate the same way at a place like Wall Street. The worst-case scenario is that government regulatory wonks—who don’t understand all the intricacies of Wall Street—get in the way of Wall Street’s machinations with their controls and interventions. If that happens, then that which could be unlimited is unnecessarily limited. Money to be created and made is squandered.[87] The American Dream is compromised by curtailing opportunity. So goes, at least, the rationalization of American unlimitedness writ large for Wall Street and points beyond. To hinder moneymaking at its core—Wall Street—is to defy it elsewhere.

 

Social Immobility

Social mobility refers to the movement of individual or group social position from generation to generation related to changes in income. It is also called economic mobility. America, the society that was forged to be different from aristocratic and stratified Europe, historically has had a high level of social, or economic, mobility. From the descriptions of egalitarianism and opportunity by Alexis de Tocqueville to the rags-to-riches stories of Horatio Alger to the rise of the young Rockefeller (whose father was rarely supportive and mostly absent), America has been defined by greater social mobility than other societies the world over—until now.[88]

Data from various sources increasingly show that economic mobility in the United States is lagging behind that of similar developed countries. More so today than a generation or two ago, your chances of being well off are good if your father was well off; similarly, you most likely won’t be well off if your father was not well off. It is three times more likely today than it was in the 1960s, ’70s, and ’80s that your father’s income level, for richer or for poorer, will determine your own income level. Five Western European countries and Canada, all with less income inequality than the United States, boast much greater economic mobility. The land of opportunity is increasingly becoming a blessing for those well born and a curse for those who are not.[89]

According to a 2009 Pew Economic Mobility Project report, men in their thirties earned some 12 percent less than their fathers did at a similar age. The factors cited to explain these numbers vary from the decline of unions to increases in immigration. Simply put, rising income disparity and decreasing social mobility are becoming more entrenched in the United States. Even after the 2007–08 crash, one survey proclaimed that 71 percent of Americans believe that hard work and personal skill are the two main components of economic success. True enough for some, but not for all. Individual stories (politician Marco Rubio is fond of telling the story of his own family’s rise) illustrate that belief in the ideal continues. Multiple stories (represented in many of Barbara Ehrenreich’s co-workers in Nickel and Dimed) counter its reality, but they are oftentimes dismissed by the rhetoric—still alive today—put into law by the Massachusetts Puritans so long ago: the poor can change their lot by working harder. True in a certain sense, but it’s simply much easier to be well-off, and maintain that status, if one is born into that status. The myth of unlimited growth availing riches to all (who work hard enough) is a one-size-fits-all legend. It is true and has been true for many; but for just as many, if not more, the legend is no more than false hope. Those who didn’t have the foresight to be born rich are increasingly out of luck in today’s America of social immobility. Some realities are more complex than the myths that attempt to explain them and give them meaning.

The exaltation of unlimited growth and social mobility is a chimera that, in American society of the last forty years, truly has been “the opiate of the masses.” Karl Marx claimed such despised status for religion, of course; the American fixation upon the ability of all to attain wealth is a value that has become—in this day of social immobility—bad religion. Capitalism, in a larger sense, is more than an economic system; it is a culture or way of life. It has its grand advantages and benefits, socially and economically, for so many. But it does not have unlimited powers; it is not transcendent.[91] Whereas all religions and cultures teach and attest to the limited nature of human beings and things material, the religion of the Market claims otherwise: There is never enough.[92] The religion of the Market offers heaven on earth; work hard enough and it all can be yours. If you choose, however, to slack off and not work hard enough, you get what you deserve: hell on earth. To quote Tawney again, “A society which reverences the attainment of riches as the supreme felicity will naturally be disposed to regard the poor as damned in the next world, if only to justify making their life a hell in this.”[93]

 

The Mall as Holy Place

If our true national religion is the confluence of commerce, materialism, and consumption (all market components), what then are our modern-day places of worship for this religion, places that call people together and reinforce the belief system? One modern-day abode of gathering, analogous to churches and synagogues, is undoubtedly the shopping center and mall. The three components of the religion—commerce, materialism, and consumption—define precisely what happens at shopping locales, malls being the archetype. Sports stadiums and manufacturing plants (some in the category of cathedrals) are also included as places of gathering and worship. In the words of sports savant Frank Deford, stadiums were places of “essential democracy . . . the arena made for a grand public convocation, a 20th-century village green where we could all come together in common excitement.”[94] A resolution of winners and losers—a subtheme of the national religion—drew in the faithful (and still does). In the stadiums of yesteryear, corporate executives and blue-collar workers sat in proximity one to another, waited in the same lines to purchase the same beer and the same hotdogs; unwelcome rainstorms dampened richer and poorer alike. Those idyllic days are long gone; the Dallas Cowboys installed luxury suites in Texas Stadium in the 1970s—beginning a new trend—and life inside and outside sports arenas reflected the same reality: the privileged elite began to separate itself from the rest of the crowd.[95] Consequently, stadiums, as alluded to by Deford, have lost their identity as the best representatives of the national religion. Even though sports stadiums hold a special allure—recognizable to a large part of the general public because of television—they increasingly restrict the public’s access as ticket prices (and related costs like parking) have escalated greatly in the last few decades. And besides, not all Americans are enamored with sports. Manufacturing plants are further removed than sports stadiums as the best representatives of the national religious gathering: they are significantly less recognizable to the general public, mostly inaccessible, and subject to relocations—stateside and abroad. Shopping centers, malls, and big box stores (Walmart, Home Depot, Best Buy, IKEA) are, on the other hand, accessible to the majority of Americans, nationally (if not globally) recognizable as chain retailers and open for business in most cases 365 days a year. In the last forty years, American real estate allocated for shopping has increased twelvefold, with a total tally (2009 figures) of more than 105,000 shopping malls, centers, and strip malls. This is not to say that shopping, a necessary and beneficial activity for all Americans, is an evil enterprise. Shopping is what we do; rich or poor, the masses make the pilgrimage many times a month (some many times a week) to these temples and lay their money down. More than ten million people are employed at America’s shopping centers, with more than two-thirds of the population served at shopping centers each month.[96] By vote or by dollar, that’s a solid American majority. Shopping centers and malls, therefore, are the societal temples that gather the faithful.

The Mall of America is one of the largest malls in North America. Located in Bloomington, Minnesota, it is part of the Minneapolis-St. Paul metroplex, the sixteenth largest metropolitan area in the country. Fully enclosed, it boasts stores (mostly large national retailers), restaurants and bars, an aquarium, movie theaters, a roller coaster, a mirror-maze fun house, miniature golf, a comedy club, and other attractions. It claims close to forty million visits per year—more than the Grand Canyon, Graceland, and Disney World combined.[97] Historian James Farrell traces the first modern (completely indoor, climate-controlled) mall to the nearby Twin Cities suburb of Edina. Shopping malls previous to Edina’s Southdale were extraverted, with storefront windows facing the parking lots and connecting pedestrian walkways. Opened in 1956, Southdale was entirely different: it offered stores on two levels, all under one roof, with a communal area inviting shoppers and guests to gather at the heart of the complex. It was designed by Austrian-born architect Victor Gruen and developed by the Dayton Company (now the Target Corporation). Trees, fishponds, caged exotic birds, gardens under a bright skylight—Southdale was described at its grand opening as a “pleasure-dome-with-parking” by Time magazine. Gruen, a Jew and a fervent socialist who escaped Vienna when Hitler’s Germany annexed Austria in 1938, worked at forming contacts in America and quickly made a name for himself, working first in New York City and then opening his own architecture firm in Los Angeles in 1951. Gruen’s vision was pioneering; he believed that the more time people spent in a particular commercial environment, the more money they would spend there. Thrift, frugality, and prudence were challenged in Gruen’s commercial world; splashing fountains, spiraling sculptures, and escalators enticed and seduced consumers to spend both time and money. A Gruen biographer, Jeffrey Hardwick, says that because of Gruen, shopping “has become a distracting and fulfilling experience, a national pastime.”[98]

Gruen designed more than malls; his passion was urban revitalization, and he understood his mall designs, even though most were created for suburbs, to be subordinate to civic renewal. He envisioned a combination of commercial and civic participation through his designs that eventually directly impacted more than two hundred American cities and suburbs. White flight from large cities to new suburbs was prevalent in the 1950s; Gruen sought to create community with his malls, injecting commercial and civic gathering places into otherwise fledgling, monolithic suburban constructs. He had larger plans for Southdale that did not come to fruition. Apartment dwellings, schools, medical facilities, and public parks were also part of the original Southdale design; the intent was to recreate what had been lost as people, largely whites, abandoned the cities. Historian Lizabeth Cohen, in her book A Consumers’ Republic, includes a photograph of presidential candidate John F. Kennedy in 1960 addressing an all-white crowd at the relatively new Bergen Mall in suburban Paramus, New Jersey. This is what Gruen had in mind—he wanted both to unite Americans, giving opportunity for richer public life, and to enlarge retail profits. Both could coexist in his vision.[99] And for some time, he was correct. A U.S. News and World Report survey in the early 1970s found that Americans spent more time at malls than anywhere else, besides work and home.[100] Work and related consumption—with time at home to recuperate and recharge—defines America’s true functional religion.

Gruen's "shopping towns," attempting to bring the best qualities of urban life to the suburbs while leaving out undesired aspects of downtown life, reflected the socioeconomic and racial exclusion inherent to the mostly white suburbs. While downtown urban commercial districts were accessible to local pedestrians, those commuting by car found that parking was problematic (with delivery trucks, for example, competing for precious space); for both pedestrians and commuters chaos reigned, security was dubious, and supply varied. A suburban mall, with centralized administration and mandatory tenant cooperation, ample parking and hired security guards, and specified truck delivery areas, by its very design became restricted (and efficient) public space. Downtown regulars, like vagrants, prostitutes, racial minorities, and the poor, were excluded from suburban malls. While this type of exclusion was beneficial to business—racial exclusion being culturally acceptable in the 1950s—a pattern was set that has hindered some malls ever since. Highland Mall was the first regional mall built in Austin, Texas, and is emblematic of this problem. Constructed in 1971 on the city's north side, Highland Mall began to experience economic decline in the early 2000s. In the decades since it opened, the demographics of the surrounding neighborhoods have changed, and today Highland Mall is frequented more often, but not exclusively, by Austin's African American community. Every spring the University of Texas at Austin hosts the Texas Relays, one of the premier track and field events in the country. Due to the large number of African American event participants, the relays are also a social gathering and destination event for African Americans in general. Highland Mall, due to its proximity to the university, had become a local gathering area for young African Americans during the relays. In 2009, after two consecutive Texas Relays weekends of tension between mall tenants and visitors (in 2007 and 2008), Highland Mall officials determined that the mall would close early on the weekend of the event, effectively barring young African Americans from congregating there. Predictably, many sectors within the Austin community, not least its African American community, decried the decision. Highland Mall, previously in decline, has experienced further decline since the 2009 strife.[101] Highland Mall's Texas Relays controversy is not directly attributable to Gruen, yet, at the same time, its manifestation is not surprising. It was fueled, in part, by a model and design forged before civil rights era sensibilities were widespread. Malls have been intended, from their beginnings, for a certain clientele. Upper- and middle-class whites, with their children, were first on that list.

Would there have been a more apropos place than the mall for Santa to inhabit?[102] Santa, the very embodiment of consumption’s blessings for the youngest members of society, has for three generations gathered the faithful at America’s malls in the Christmas season. Why not reinforce the value of consumption at the place it can be fulfilled? The domestication of misrule moves forward, as the bearded and bellied commercial icon par excellence looks into the eyes of a child and all but promises her that her material dreams will be fulfilled. Her parents, doing their part to fulfill her dreams by shopping at the mall’s stores, reinforce the image of Santa as the most appropriate icon for an affluent society.[103]

Farrell describes our malls as "cathedrals of consumption" and Santa as our national patron saint. The American Santa, unlike his European predecessors, does not have any religious associations—no religious robe, mitre, or staff. His belly is evidence of his self-indulgence; Farrell calls him a "symbol of material abundance and hedonistic pleasures." Yet he does have a religious aura; divinely supernatural and omniscient, he knows about all our activities (if we've been good or bad) and gives pleasing gifts to the deserving. And he is chill; he'll laugh off our indiscretions and, with a twinkle in his eye, give his divine-like blessing on our American Christmas. Santa was present in the department stores for the Christmas season in the pre-mall era, but truly hit his stride in the post-World War II consumerist age. The Christmas holiday season, presided over by Santa, is the high holy season for the market. More Americans exchange gifts during the season than make religious observance; it's common knowledge that many retailers make the bulk of their profits during the winter holiday season. Malls serve as the official sponsors of the American Christmas.[104]

At two years of age, American children are able to name certain products. At four, they begin to evaluate products for their relative worth, and at six, they internalize the idea that better brands cost more. Compared with kids throughout the rest of the world, American kids no longer rank as well as they once did in math and science. But fear not and don't fret: American kids are really good at consumption. Farrell is insightful: "Kids learn the pleasures of consumption before they learn any of its costs." Christmas, the preeminent public ritual of consumption in American life, serves as a type of home schooling for American youth in the arts and ways of consumption. In his tenth year, a typical American kid will make 270 store visits. He is being educated in the ways of consumer culture, for better and for worse. When he becomes a teenager, he understands advertising to be popular culture. Getting an iPad for Christmas is truly "getting lost in the things we love" (quoting the iPad TV commercial), because adolescents understand that receiving and having an iPad is a status and ego enhancer.

The mall, besides being the place to be during the Christmas season, is the place to be during the rest of the year as well. As we saw before, malls are not welcome territory for beggars, drug dealers, or gang members (real or alleged). For parents of youth a mall, by design, is an oasis of security. A parent can feel relatively safe about her child going to the mall. It's an enclosed area, and no prohibited activities are sponsored at the mall. Like churches, malls offer interpretations of the good life.[105] The architecture inside the mall is uplifting, the lights are bright, and the material items presented for consumption are desirable. This is the good life, and the mall offers an edenic vision for those who believe. If you have enough credit or ready cash, the mall, like a church, can be understood to be a place that points to a type of salvation.

But, alas, modern malls are aging. The proliferation of mall construction in the United States—coinciding with the advent of credit cards and the era of instant gratification—reached its apogee not long after it started. Like country clubs and a number of suburban churches, malls grew bountifully in the 1950s and early 1960s and were populated mostly by whites; their day in the sun has faded. The International Council of Shopping Centers lists some 1,100 enclosed regional malls in the United States; a third are doing well, another third are dealing with difficulties, and the final third (those like Highland Mall) are in financial distress or in the process of closing.[106] While construction of new malls today in the United States is almost unheard of, numerous American-style malls are currently being constructed in China, India, Dubai, and other regions throughout the world.[107] Globalization includes exporting the American version of the consumerist religion. China boasts the two largest malls (by leasable space) in the world. The New South China Mall in Dongguan and Golden Resources Mall in Beijing—twice and one and a half times as big, respectively, as the Mall of America—are not experiencing the business activity that had been anticipated. The Dongguan mall is described as a ghost town, with less than 20 percent occupancy by retailers. The Beijing mall's developers might have overstepped with their advertising and propaganda: "The mall that will change your life."[108] Back in America, where the mall was created and established, many malls are being redone or updated, if not closed down and abandoned altogether. The Mall of America is now twenty years old; its luster is fading, and its restoration and revitalization are necessary if it is to remain a player in the American commercial scene. Seniors walking a mile or two in the malls' climate-controlled environs and leaving without spending money—Gruen didn't foresee this—won't keep J. C. Penney, Gap, Old Navy, and Finish Line in the black.

Shopping centers and big box stores, however, are saving the day and filling the void created as some malls are pushed aside. The Internet, also, has carved out its share of the commercial transaction pie, with ample expansion expected. The Internet is a virtual mall and then some. Its individualistic nature perfectly fits younger consumers who have been raised in the ethos and aura of individualism. What will the future hold? We are told that there will be no limit to the commercial expansion possible in a freely open-market system—the pie will only get larger. Heightened consumption, the storyline continues, is only natural in an ever-expanding and unlimited system. British theologian Peter Sedgwick hints at twenty-first century, post-mall Britain and America where, ironically, mall-style consumption upsets the social and ethnic stratification suburban malls helped solidify in the first place: “Identity in today’s society . . . is no longer given by ethnicity, class, gender or social status. People find out who they are or want to be, by consumption.”[109] Religion is function and belief; consumption, be it at big box stores, shopping centers, malls, or via the Internet, bridges the gap between function and belief. We’ve come to the place where we believe that we are what we buy.

 

Is It Worth It?

The good old American Dream—which in more egalitarian days spoke of opportunity—is manifested today as consumption: get a job (or inherit money or win the lottery) and start accumulating stuff, things, and whatnots. Due to decreased economic mobility, separate groupings exist for those chasing the dream. For those born into favorable economic conditions, start accumulating stuff is amended to continue accumulating stuff. For those not born into favorable circumstances—best of luck, because your journey (if pursued) toward greater wealth and accumulation continues to be hindered, as it has been for forty years now, by the ideology of excessive self-reliance at the cost of the ethic of compassion. Opportunities that had previously existed for many now exist only for a small minority. Democracy is potentially diminished by market systems that produce inequalities of income and wealth.[110] The market system as we know it, especially in its recent history, is not an entity best left to its own devices. As we’ve seen, the market system is fully able to incorporate racism, sexism, other prejudices, and blatant injustices. It will play favorites, and it can be manipulated for one’s own unrighteous gain at the expense of others. These unfortunate realities are not always rooted out of the system as “inefficiencies”; as long as there are those who, because of greed or other self-serving reasons, advocate and strive for them, these injustices will be market realities.

In a recent worship service at the dual-language congregation I serve, during the message time we engaged in a conversation about the anxiety caused by living in an overtly consumerist society. (Anxiety and depression are recognized as the most common mental illnesses in the United States.)[111] Specifically, this conversation involved a number of first-generation immigrant Latinos who are adroitly able to make comparisons, economic and otherwise, between the societies they left behind in Latin America and the one they live in now. One of our members, originally from Guatemala, spoke of the weariness she experiences in trying to keep up, working two jobs, and helping to care for her family, now blessed with four grandchildren. “When I look in the mirror, and I see this person who is tired and aging, I ask myself: Is it worth it?” (my translation). Her husband immediately reassured her that her aging face was still beautiful, but her question struck a chord with all those present.

Work is a great opportunity in the United States. We're thankful for it even as it saps our energy and youthfulness. But does work always deliver on its promise to take care of us? Whom does our work benefit—ourselves and our community, or are we unwittingly part of some larger design where our contributions are parasitically annexed for someone else's gain? Is the pace that we keep with our work one that gives freedom or creates bondage? Increasingly, our rates of consumption with their propensity toward excesses speak of bondage—exorbitantly so. Americans hold 1.3 billion credit cards (four for every man, woman, and child), while our savings rate has plummeted to nearly zero.[112]

If hunter-gatherer societies experienced what we today call leisure, how far have we come since then? The postindustrial age promised leisure in abundance because of mechanical and technological advances: electricity, indoor plumbing, automobiles, washing machines, dryers, stoves and ovens, freezers, blenders, lamps, and so many other conveniences were to make our lives freer and easier. Somehow, the promise is not fulfilled. Yes, without question, most of us are better off than our ancestors. But the ailments and deficiencies of previous ages have been replaced by modern ones—many related to our frenzied belief that more is always better.

The true religion—ultimate concern—of American society is found and based in the confluence of commerce, materialism, and consumption. It’s been a good religion that has taken care of many of us very well; millions of lives have been lifted up from poverty and provided with food, shelter, clothing, and further material blessing. (It has simultaneously destroyed a good many as well.) There is great purpose in work, and oftentimes our toil, ingenuity, and perseverance in work have served the human family admirably. But this religion—especially in the last one hundred years of Rockefeller’s new permission—has pushed to pierce the soul, individually and collectively, adulterating our spirits with its creed of more is better. Truth be told: more is not always better.

 

Endnotes

[1] Dave Jamieson, "Join the Booming Dollar Store Economy! Low Pay, Long Hours, May Work While Injured," August 9, 2013, Huffington Post website, http://www.huffingtonpost.com/2013/08/29/dollar-stores-work_n_3786781.html, retrieved September 3, 2013; Kent Patterson, "Activists Accuse Family Dollar Stores of Anti-Labor Practices," November 8, 2010, Texas Civil Rights Project website, http://www.texascivilrightsproject.org/2644/activists-accuse-family-dollar-stores-of-anti-labor-practices/, retrieved September 3, 2013; US Department of Labor document, http://www.dol.gov/whd/regs/compliance/fairpay/fs17a_overview.pdf, retrieved September 3, 2013; Kent Patterson, "Dollar Stores: Top Link in Sweatshop Chain," October 6, 2010, CorpWatch website, http://www.corpwatch.org/article.php?id=15629, retrieved September 3, 2013; Jack Hitt, "The Dollar Store Economy," August 18, 2011, The New York Times Magazine website, http://www.nytimes.com/2011/08/21/magazine/the-dollar-store-economy.html?pagewanted=all&_r=1&, retrieved September 3, 2013.

[2] Richard Donkin, Blood, Sweat and Tears: The Evolution of Work, Texere (2001), 46–47.

[3] Texas empresario (Spanish for entrepreneur) Stephen F. Austin had a telling encounter with one of the chiefs of the Tonkawas, Carita. Austin attempted to convert the Tonkawa from their traditional ways of hunting and gathering (or stealing, according to the empresario). Austin presented the chief with farming implements and seed corn, in the process securing Carita's promise that they would clear land to settle and farm it. Not surprisingly, after Austin left, the Tonkawa simply ate the corn. When Austin later returned to check on the tribe's progress, Carita informed him that the Great Spirit had enlightened the Tonkawa: they were to keep to their traditional ways—hunting and gathering—and not become "sedentary" like white men. Gregg Cantrell, Stephen F. Austin: Empresario of Texas, Yale University Press (1999), 140.

[4] As of 2010, immigration, legal and illegal, has slowed considerably from Mexico into the United States. The main culprit is the economic downturn since 2008. Mexicans come to the United States to work, and when there is less work to be had, migration slows. More favorable economic conditions in Mexico along with a declining Mexican birthrate are additional factors. Juan Castillo, "Study: Net immigration slows to near standstill," Austin American-Statesman, April 24, 2012, A1.

[5] Economic Policy Institute, “Work and Leave Policies,” February 14, 2011, State of Working America website, http://stateofworkingamerica.org/charts/comparison-of-total-statutory-leave-time-and-average-weeks-worked-per-year/, retrieved April 8, 2012; Catherine Rampell, “Koreans Put in the Most Hours,” May 12, 2010, The New York Times website, economix.blogs.nytimes.com/2010/05/12/s-koreans-put-in-most-hours/, retrieved April 8, 2012.

[6] Gar Alperovitz, America Beyond Capitalism: Reclaiming Our Wealth, Our Liberty, and Our Democracy, Wiley & Sons (2005), 197–98; Chrystia Freeland, Plutocrats: The Rise of the New Global Super-Rich and the Fall of Everyone Else, Penguin Press (2012), 86; John De Graaf, David Wann, and Thomas Naylor, Affluenza: The All-Consuming Epidemic, Berrett-Koehler Publishers (2001), 42.

[7] Marshall Sahlins, Stone Age Economics, Aldine Transaction (1974), 1–2; Donkin, 4.

[8] Donkin, 17, 25–26.

[9] Genesis 3:17–19.

[10] Ibid., 32.

[11] Ibid., 30; Bell, xvi.

[12] Ibid., 62–66.

[13] Ibid., 73–77. The name Luddite came from Ned Ludd, perhaps a legendary figure, who was said to have smashed an earlier version of a knitting frame in a fit of passion in the 1780s, a generation before the uprising bearing his name.

[14] Stephen Nissenbaum, The Battle for Christmas, Random House (1996), 5–9.

[15] Ibid., 62, 84.

[16] Donkin, 79.

[17] Ibid., 79–81.

[18] Kathleen Strange, Climbing Boys: A Study of Sweeps' Apprentices, Allison & Busby (1982), 31.

[19] Donkin, 82.

[20] Morris, 272.

[21] R. H. Tawney, Religion and the Rise of Capitalism, Harcourt, Brace & World (1926), 231, 234–35.

[22] Donkin, 130.

[23] Miller, 224–40; Donkin, 122–31.

[24] Donkin, 147–52.

[25] Chernow, 611–12.

[26] Days, nights, and seasons have rhythmically paced humanity’s sense of time. The mechanized clock of the industrial era, by accurately dividing time into units, has made possible the coordinated activity of workers and accompanying commercialization. Whybrow, 158, 238.

[27] Donkin, 138–44.

[28] Morris, 301–14.

[29] Donkin, 158.

[30] Aldous Huxley, Brave New World, Harper Perennial Classics (1932), 4, 33, 50, 92.

[31] Echoing Upton Sinclair’s assessment of the plight of meatpackers of the early 1900s in The Jungle.

[32] Donkin, 215–16.

[33] Naomi Klein, No Logo: Taking Aim at the Brand Bullies, Picador (2000), 200–05, 212.

[34] Jemima Kiss, "The real price of an iPhone 5: life in the Foxconn factory," September 13, 2012, The Guardian website, http://www.guardian.co.uk/technology/2012/sep/13/cost-iphone-5-foxconn-factory, retrieved September 15, 2013.

[35] Arun Devnath, “Bangladesh Building Collapse Death Toll Reaches 1,000,” May 9, 2013, Bloomberg website, http://www.bloomberg.com/news/2013-05-09/fire-at-bangladesh-garment-factory-kills-at-least-7-people.html, retrieved September 15, 2013.

[36] US Department of Labor, Bureau of Labor Statistics website, http://data.bls.gov/pdq/SurveyOutputServlet, retrieved April 16, 2012.

[37] Eric Schlosser, Fast Food Nation: What the All-American Meal is Doing to the World, Penguin (2002), 149–54.

[38] US Department of Labor, Bureau of Labor Statistics website, http://www.bls.gov/news.release/union2.nr0.htm, retrieved January 2, 2014.

[39] Schlosser, 154–64.

[40] Ibid., 169–74.

[41] Ted Genoways, “Cut and Kill,” Mother Jones, July–August 2011, 27–37, 67.

[42] Tawney, 218–24.

[43] 2 Thessalonians 3:7, 10.

[44] Dorrien, 341.

[45] Alexander Thomas, “Ronald Reagan and the Commitment of the Mentally Ill: Capital, Interest Groups, and the Eclipse of Social Policy,” Electronic Journal of Sociology (1998), www.sociology.org/content/vol003.004/thomas_d.html, retrieved April 22, 2012.

[46] Ibid.

[47] Yves Smith, 109; "How Many People Experience Homelessness?", July 2009, National Coalition for the Homeless website, www.nationalhomeless.org/factsheets/How_Many.html, retrieved April 22, 2012; Phillips, xviii.

[48] Tavis Smiley and Cornel West, The Rich and the Rest of Us: A Poverty Manifesto, SmileyBooks (2012), 17, 24–25, 28–29.

[49] Ahamed, 441.

[50] Paul Krugman, “Who Was Milton Friedman?”, February 15, 2007, The New York Review of Books website, www.nybooks.com/articles/archives/2007/feb/15/who-was-milton-friedman/?pagination=false, retrieved April 23, 2012; Yves Smith, 34–38. Neoliberalism as a term is somewhat disputed. Some maintain that its use is strictly pejorative.

[51] Madrick, 245–46.

[52] Yves Smith, 292.

[53] Reinhart and Rogoff, 162.

[54] Paul Kedrosky and Dane Stangler, “Financialization and Its Entrepreneurial Consequences,” Kauffman Foundation Research Series, March 2011, 2–3, www.kauffman.org/uploadedFiles/financialization_report_3-23-11.pdf, retrieved April 7, 2012.

[55] Andrea Seabrook, "Who's Weighing Tax on Rich? Congress' Millionaires," September 20, 2011, National Public Radio website, www.npr.org/2011/09/20/140627334/millionaires-in-congress-weigh-new-tax-on-wealthy, retrieved June 4, 2012.

[56] Nomi Prins, It Takes a Pillage: An Epic Tale of Power, Deceit, and Untold Trillions, Wiley (2009), 141–42.

[57] So does his wife, Wendy Gramm, who holds a PhD in economics (as does Gramm himself). Mrs. Gramm chaired the US Commodity Futures Trading Commission from 1988 to 1993. After helping to push through a ruling of the commission to exempt energy futures contracts from regulation—an Enron request—she promptly resigned her chair with the CFTC. Five weeks later she was appointed to Enron’s board of directors. Her Enron pay and perks totaled somewhere between $1 and $2 million from 1993 to 2001. Her official duties on the Enron board of directors included service, sadly amusing after the fact, on the audit committee. David Corn, “Foreclosure Phil,” July/August 2008, Mother Jones website, www.motherjones.com/politics/2008/05/foreclosure-phil, retrieved May 15, 2012.

[58] Charles Lindblom, The Market System: What It Is, How It Works, and What To Make of It, Yale (2001), 236, 247–49.

[59] Ibid., 8–15.

[60] This total comes from my own count at a local grocery store in Austin, Texas, on June 7, 2012.

[61] Ibid., 41–42, 47.

[62] Ibid., 10, 65, 237.

[63] Ibid., 168.

[64] Bill Moyers, “Encore: How Big Banks Are Rewriting the Rules of our Economy,” March 16, 2012, Moyers and Company website, billmoyers.com/episode/encore-how-big-banks-are-rewriting-the-rules-of-our-economy, retrieved May 10, 2012.

[65] Katrina Brooker, "Citi's Creator, Alone With His Regrets," January 2, 2010, The New York Times website, www.nytimes.com/2010/01/03/business/economy/03weill.html?pagewanted=all, retrieved May 10, 2012.

[66] Madrick, 309–13.

[67] Moyers, “Byron Dorgan on Making Banks Play by the Rules,” March 16, 2012, Moyers and Company website, http://billmoyers.com/segment/byron-dorgan-on-making-banks-play-by-the-rules/, retrieved May 10, 2012; Madrick, 313–15.

[68] Nomi Prins and Andy Kroll, "9 Wall Street Execs Who Cashed in on the Boom—and the Bust," November 7, 2011, Mother Jones website, www.motherjones.com/politics/2011/11/9-wall-street-execs-who-got-off-scot-free, retrieved May 13, 2012.

[69] Weill has since seen another type of light: he recently advocated breaking up the big banks as a way of restoring confidence and profitability in the banking system. “Wall Street Legend Sandy Weill: Break Up the Big Banks,” July 25, 2012, CNBC website, http://www.cnbc.com/id/48315170, retrieved September 14, 2013.

[70] Madrick, 317; Moyers, “John Reed on Big Banks’ Power and Influence,” March 16, 2012, Moyers and Company website, http://billmoyers.com/segment/john-reed-on-big-banks-power-and-influence/, retrieved May 10, 2012.

[71] Phillips, 154.

[72] Thomas Frank, One Market Under God: Extreme Capitalism, Market Populism, and the End of Economic Democracy, Anchor (2000), xiv; Whybrow, xvii, 260.

[73] Quoted in Frank, 87.

[74] Rush Limbaugh, a prominent dogmatist of market fundamentalism, went so far as to call the teachings of Pope Francis "pure Marxism" in November 2013. The newly appointed Pope, in Evangelii Gaudium, criticized the "crude and naïve trust in the goodness of those wielding economic power and in the sacralized workings of the prevailing economic system." The Pope was simply espousing long-held church teaching. Limbaugh purposely uses loaded terms like "Marxism" and "socialism" to sway opinion. He, and many others who loosely and pejoratively use the terms, often confuse them with egalitarianism. Rush Limbaugh radio archive website, http://www.rushlimbaugh.com/daily/2013/11/27/it_s_sad_how_wrong_pope_francis_is_unless_it_s_a_deliberate_mistranslation_by_leftists, retrieved December 8, 2013; Vatican Press document, http://www.vatican.va/holy_father/francesco/apost_exhortations/documents/papa-francesco_esortazione-ap_20131124_evangelii-gaudium_en.pdf, retrieved December 8, 2013.

[75] Lindblom, 224.

[76] William T. Cavanaugh, Being Consumed: Economics and Christian Desire, Eerdmans (2008), 16, 32. Shopping addicts: upward of 10 percent of society, including 20 percent of women.

[77] Ibid., 32, 36.

[78] As further support for the argument, consider the well-documented sense of malaise and loss of identity and sense of purpose that oftentimes affects American men upon retirement.

[79] David Loy, “The Religion of the Market,” Journal of the American Academy of Religion, 1997, vol. 65, issue 2, 275.

[80] Yves Smith, 42–44; George Akerlof and Robert Shiller, Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism, Princeton University Press (2009), 1, 45, 168.

[81] Economist Ha-Joon Chang says economics is a “political exercise.” Chang, 10.

[82] Loy, 277–78.

[83] Akerlof and Shiller, xi, 146.

[84] Karl Polanyi, The Great Transformation: The Political and Economic Origins of Our Times, Beacon Press (2001), 60.

[85] Barbara Ehrenreich, Nickel and Dimed: On (Not) Getting By In America, Henry Holt and Company (2001), 119–20.

[86] Quoted in James Childs, Greed: Economics and Ethics in Conflict, Augsburg Fortress (2000), 6–7, 141.

[87] Matthew Taibbi, Griftopia: Bubble Machines, Vampire Squids, and the Long Con That Is Breaking America, Spiegel & Grau (2010), 29.

[88] “Social Mobility and Inequality: Upper Bound,” April 15, 2010, The Economist website, www.economist.com/node/15908469, retrieved May 18, 2012.

[89] Richard Wilkinson and Kate Pickett, The Spirit Level: Why Greater Equality Makes Societies Stronger, Bloomsbury Press (2009), 157–63.

[90] “Pew Commissioned Poll Finds Americans Optimistic About Prospects for Economic Mobility Despite the Recession,” March 12, 2009, The Pew Charitable Trusts website, www.pewtrusts.org/news_room_detail.aspx?id=50022, retrieved May 20, 2012.

[91] Dorrien, 146, 166.

[92] Childs, 126.

[93] Tawney, 221–22.

[94] Frank Deford, “Seasons of Discontent,” Newsweek, December 29, 1997.

[95] Michael Sandel, What Money Can’t Buy: The Moral Limits of Markets, Farrar, Straus, and Giroux (2012), 173–74.

[96] James Farrell, One Nation Under Goods: Malls and the Seductions of American Shopping, Smithsonian Exposition Books (2004), xi; International Council of Shopping Centers website, www.icsc.org/srch/faq_category.php?cat_type=research&cat_id=3, retrieved May 22, 2012.

[97] Paco Underhill, The Call of the Mall: The Author of 'Why We Buy' on the Geography of Shopping, Simon & Schuster (2004), 21. An argument can be made for Disney World to be considered an American "place of worship," according to our functional definition of religion. Like newer sports stadiums, it also is in the category of cathedral. Its prohibitive costs for a large number of Americans, however, negate its inclusion.

[98] Malcolm Gladwell, "The Terrazzo Jungle," March 15, 2004, The New Yorker website, www.newyorker.com/archive/2004/03/15/040315fa_fact1, retrieved May 17, 2012; Jeffrey Hardwick, Mall Maker: Victor Gruen, Architect of an American Dream, University of Pennsylvania Press (2004), 3–5; Farrell, 8–11.

[99] Lizabeth Cohen, A Consumers’ Republic: The Politics of Mass Consumption in Postwar America, Vintage (2004), 337; Hardwick, 5.

[100] Underhill, 14.

[101] Wells Dunbar, "Highland Mall Hysteria Over Texas Relays," April 10, 2009, Austin Chronicle website, www.austinchronicle.com/news/2009-04-10/764651/, retrieved May 25, 2012.

[102] Santa’s first appearance in the Macy’s Thanksgiving Day Parade was in 1924.

[103] Farrell, 122.

[104] Ibid., 119, 122, 124, 129, 132, 133.

[105] Ibid., 76, 80, 81, 86, 90, 99, 108, 109.

[106] Karen Stabiner, “New Lives for ‘Dead’ Suburban Malls,” The New York Times, January 21, 2011, http://newoldage.blogs.nytimes.com/2011/01/21/new-lives-for-dead-suburban-malls/, retrieved May 25, 2012.

[107] The September 2013 attack on the Westgate Mall of Nairobi by the Somali terrorist group al-Shabaab can be seen as a clash of religions—jihadist Islam against Western consumerism.

[108] Johan Nylander, “World’s biggest mall a China ‘ghost town’,” March 3, 2013, CNN website, www.edition.cnn.com/2013/03/03/business/china-worlds-largest-mall, retrieved May 28, 2013; Robert Marquand, “China’s supersized mall,” November 24, 2004, Christian Science Monitor website, www.csmonitor.com/2004/1124/p01s03-woap.html, retrieved May 28, 2013.

[109] Peter Sedgwick, The Market Economy and Christian Ethics, Cambridge University Press (2008), 109.

[110] Lindblom, 236.

[111] Wilkinson & Pickett, 33–36.

[112] Akerlof and Shiller, 128–29.

 

T. Carlos “Tim” Anderson

T. Carlos “Tim” Anderson is a bilingual Protestant minister in Austin, Texas who has previously lived and worked in Chicago, Houston, and Lima, Peru. For copies of Just A Little Bit More, interview requests, and other inquiries, contact T. Carlos “Tim” Anderson at the Blue Ocotillo Publishing website, www.blueocotillo.com.

 
