How do you make a case against capitalism while appearing to defend consumers’ rights and values? You make a movie called The Social Dilemma.
The movie is cleverly done. It purports to oppose manipulation by Big Tech of social media users, calling out advertisers who manipulate people for profit. At the same time, the movie engages in its own manipulation. How does it do so? To quote Elizabeth Barrett Browning, “let me count the ways.”
State the credentials only of the people on your side
Throughout the ninety-four-minute movie, various commentators argue that social media have done great harm. In every case but one, the commentators criticize social media, warning us of its many harms. The movie states quite prominently, without exception, the credentials for all the negative commentators, and the credentials are impressive. The main commentator throughout is Tristan Harris, identified as a former design ethicist at Google and also as president of the Center for Humane Technology. Another commentator is Sandy Parakilas, identified as a former platform operations manager at Facebook and a former product manager at Uber. Yet another is Justin Rosenstein, whom the movie identifies as a major player at Google and then Facebook. A fourth is Shoshana Zuboff, an emeritus professor at Harvard Business School and author of The Age of Surveillance Capitalism. That’s not a complete list.
In the whole movie, only one person expresses skepticism about the idea that manipulation by social media is sui generis. He expresses this view at a panel in which he challenges the aforementioned Tristan Harris. This skeptic points out that newspapers and print media also played on people’s addictions and ability to be influenced. He notes that when television came along, it did so as well, but in different ways. This, according to the skeptic, is just the next thing.
Here’s what’s most interesting about this skeptic. Only because I’m an economist do I know who he is. “That’s Kevin Murphy,” I said to my wife, who was watching the movie with me. Who’s Kevin Murphy? You wouldn’t know from watching the movie. You had to pay close attention even to know it was Kevin Murphy. I had to pause and rewind and only then did I notice that he had a name card in front of him. Probably not one viewer in fifty notices that, and probably not one viewer in a thousand knows who he is. So let me tell you. Kevin M. Murphy is a star economist at the University of Chicago. He won the John Bates Clark Medal in 1997, given in those days only once every two years to the most outstanding American economist under age forty. He’s the only business school professor ever to win a MacArthur genius award. But the movie tells you none of that.
That’s how the movie deals with controversy: allow only one person to challenge the narrative and don’t even tell the viewer who he is. The basic narrative is that Facebook, Google, and other social media manipulate us. But when it comes to manipulation, those media have nothing on the makers of The Social Dilemma.
Hint at the problem without ever showing the problem
The bad actors in the movie’s narrative are advertisers and the wealthy social media firms. At one point in the movie, Parakilas states, “It’s not like they’re [the social media companies] trying to benefit us. Right? We’re just zombies and they want us to look at more ads so they can make more money.” What’s the problem with that? You might think in a standard-length movie, the critics would try to say why. Here’s the amazing thing: they don’t.
So let’s fill in the missing reasoning. Think about why companies would pay social media firms to advertise. It’s to get people to buy their products. If advertising on social media were seen as completely ineffective, companies would pay precisely zero for advertising. The fact that they keep paying and that social media companies get rich by selling advertising, month in, month out, means that advertising is effective.
Wouldn’t you want the critics in the movie to then point to how advertising manipulates our tastes for products, causing us to buy things we don’t “really” want? Amazingly, they don’t.
The closest the movie comes to making a case is near the end of the movie, when Rosenstein states:
Corporations are using powerful artificial intelligence to outsmart us and figure out how to pull our attention to what they want us to look at, rather than the things that are most consistent with our goals and our values and our lives.
But why would they do that? Isn’t it easier to sell us things that are consistent with our goals, our values, and our lives?
The critics point out numerous times that the companies are continuously refining their algorithms to learn more and more about you. They imply that that’s bad without ever saying why. There’s an old saying whose origin is unknown that goes as follows: “Half the money I spend on advertising is wasted, and the trouble is I don’t know which half.”
I think we can all agree that waste is bad. So isn’t it good rather than bad that advertisers and social media are continually honing their tools to put in front of your eyes items that you really have a high probability of buying? They aren’t there yet. Sometimes when I Google an item I’m thinking of buying, within what seems to be minutes an ad for that item shows up on my Facebook page. It’s almost always mildly annoying, either because the advertised item is a brand I don’t want or because I was just exploring and have decided that I don’t want that item at all. Which means that not half, but perhaps 90 percent, of that advertising was wasted on me.
Some people find it creepy that advertisers know so much about us. I don’t, although I understand the feeling. But think back to how advertisers tried to reach us before social media existed. Imagine that you live in a Jewish household. Which kind of advertising by mail would you dislike more: mailers premised on the assumption that you’re Jewish or mailers premised on the assumption that your household is Muslim, Catholic, or Buddhist?
Make up history
In one segment of the movie, Harris contrasts social media with previous innovations, claiming that no one objected to bicycles on the grounds that those who used them would spend less time with their families. Neither he nor the movie presents any evidence for his claim. But here’s what I found with just a little search (on Google, by the way) about early attitudes toward the bicycle. In a 2001 book titled The Ride to Modernity: The Bicycle in Canada, 1869–1900, author Glen Norcliffe quotes an essay by Heather Watts on early cycling in Nova Scotia. Watts writes:
At a time when higher education, women’s suffrage, and the movement for dress reform were all topics of heated discussion, the bicycle became one more liberating influence on the restricted lifestyle of Victorian women. . . . This element of freedom and independence greatly appealed to women. They were no longer left at home, but could go on outings with their women friends or accompany their young man on an equal basis. Once tasted, the new freedom was hard to abandon.
If bicycling was a liberating force for women, and if women were able to ride with their women friends, is there much doubt about whether some critics at the time claimed that bicycling women would spend less time with their families?
Play up the downside of social media with little attention to the upside
I do think the movie scored a direct hit on a huge downside of social media: the purported effects on people between the ages of ten and nineteen. It presented some disturbing data about the effects on young girls, especially those ages ten to fourteen, of media such as Instagram that encourage them to compare their faces and bodies with what seem to be regarded as ideal body types. Here are two shocking statistics about what has happened since 2011, when Facebook and other social media had become widespread: the number of girls aged ten to fourteen per 100,000 who are admitted to hospitals for cutting themselves or harming themselves in other ways has risen 189 percent, and the number of girls aged ten to fourteen per 100,000 who have committed suicide has risen 151 percent.
One critic, Jonathan Haidt, a psychology professor at New York University’s Stern School of Business, gives a straightforward solution: set an age below which your child is not allowed to use social media and limit your child’s use of such media. To say it’s straightforward is not to say it’s easy. It’s probably hard, but parenting is hard.
To their credit, the critics do mention some upsides to social media. Harris says you can get on your smartphone and have a car show up quickly. Critic Tim Kendall, identified as the former president of Pinterest, notes that social media have helped people find long-lost relatives and organ donors. That’s pretty big.
But there are many more upsides. I can find some half-forgotten poem from high school when I remember only one sentence. That happened just last month with this poem. We can check a fact, we can follow friends, not just family, with whom we had lost touch, and we can compare airfares and make airline reservations in minutes, without either the use of a travel agent or even a phone call. I’ve just scratched the surface.
Discuss the upside as if it’s the downside
Critic Bailey Richardson, an early team member of Instagram, says that when the Internet first started, it was a weird, wacky place with lots of creativity. She recognizes that creative things still happen on the Internet, but now, she says, it feels like a “giant mall.”
That’s bad? Some people whom I’m close to have restricted diets because of various ailments. For them, shopping online has been wonderful. One person in particular needs to keep gluten out of her diet. And she is able to find appealing, tasty, gluten-free items online much more easily than if she had to shop in her semi-urban, semi-rural part of the country. Imagine if she lived in, say, rural Nevada. Shopping online could be a godsend.
Attack wealth when it’s not that of your allies
At many points in the movie, the critics point out disdainfully how wealthy the social media companies are. That raises two questions. First, is there some chance they got that way by making things we want more available? Answer: yes. Second, how wealthy are these critics? Answer: very. Critic and investor Roger McNamee, for example, is a billionaire. Justin Rosenstein’s net worth is $150 million. The other critics are multimillionaires. Honestly earned wealth is not a mark against the wealthy, whether the wealthy be social media critics or social media firms.
Lay out your real agenda, with no evidence, at the end
Near the end of the movie, probably many viewers are hooked. Then we get to what seems to be the agenda of the critics and the movie makers: to end, or highly regulate, free markets.
Zuboff states, “These markets undermine democracy and undermine freedom and they should be outlawed.” The movie director possibly forgot to insert a sentence or two telling us which markets Zuboff is referring to. Whatever markets she wants to end, that would be a major hit on capitalism.
Rosenstein complains that social media corporations go unregulated “as if somehow magically each corporation acting in its selfish interest is going to produce the best result.” One gets the idea that he’s never read Adam Smith, who indeed did argue in The Wealth of Nations that “it is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest.”
Rosenstein also says mining the earth and pulling oil out of the ground are bad for humans. He claims as evidence of a warped, for-profit system that trees and whales are worth more dead than alive. And then he jumps the shark, or maybe I should say the whale, by saying “we’re the tree; we’re the whale.” How exactly social media companies kill us and how exactly they gain from dead consumers he leaves as an exercise for the viewer.
Various friends told me that The Social Dilemma would upset me. It did. As noted, I found the facts about young girls very disturbing. The biggest upset, though, is that a bunch of critics and a movie director manipulate viewers into not knowing that there is another side to this debate, understate the benefits of social media, and use the movie as a vehicle for a rant against capitalism. Other than that, the movie was great.
If news writers had any integrity, the headlines following the 2020 election would have read like the one on this column. Instead, the media gods who helped put Joe Biden over the top expound on why Donald Trump’s protests are without merit and just another example of Republican sore-loserism.
The former vice president’s apparent margin of victory is not all that large. At the time this was written it was about 6 million votes out of about 150 million cast. That works out to about 4 percent and could, depending on recounts, slip lower.
In the states that appear to be making the difference—Arizona, Georgia, Michigan, Pennsylvania, Wisconsin—Biden’s lead is extremely narrow, much as Trump’s was when he defeated former secretary of state Hillary Rodham Clinton to win the White House in the first place. It’s hard to argue that the man most of the media anointed the new American chief executive before all the votes were cast has a mandate to do much of anything.
Nonetheless, if he eventually becomes president, the calls for him to act swiftly and decisively will be frequent, loud, and—from his point of view—problematic. In the Thursday, November 19 edition of The Wall Street Journal, author and political cartoonist Ted Rall argues forcefully that, without the support of progressives who held their nose and voted for him anyway, Biden wouldn’t be going back to Washington and instead would be headed back to Delaware.
Progressives who would have preferred Vermont senator Bernie Sanders may have pushed Biden past Trump in the popular vote and in the states that will determine the outcome in the electoral college, but on almost every other measure they were defeated. By a small majority, the nation indicated it may not want four more years of Trump, but it’s clearly repudiated the progressive agenda.
The GOP may have lost seats in the U.S. Senate but it’s most likely maintained control. The outcome hangs on two runoffs in Georgia—both of which the Republicans are favored to win—unless an audit of the votes pushes incumbent GOP senator David Perdue back up over 50 percent of the vote, where he was for most of election night. Right now, he’s at 49.71 percent, and the 0.3 percent he needs to avoid a runoff might be made up simply by the uncounted votes still being discovered across the state.
The Republicans were also projected to lose seats in the U.S. House of Representatives. Instead, they won all the top targeted races, lost no incumbents seeking reelection, and gained enough seats not only to get above 200—a crucial barrier in the battle for the majority—but to put Speaker Nancy Pelosi’s ability to control events on the floor in doubt. Enough moderate Democrats are saying privately (and thanks to some propitious leaks, publicly) that they’re not willing to walk the plank for her, and “The Squad” is in for rough going.
Looking around the country, the Republicans picked up one governorship in 2020 (Montana) and the New Hampshire state legislature. This gives the GOP the prized “trifecta” in each of those states, which, when added to the dozens they already had, means that while Washington is gridlocked the GOP can use the states to pass the reforms they’ll take national the next time they have the White House.
At the same time the Democrats, who enlisted the substantial fundraising support of former president Barack Obama and former U.S. attorney general Eric Holder in an attempt to flip legislative chambers to Democratic control, failed everywhere they tried. They may have spent tens of millions or more in pursuit of this goal with nothing to show for it. Contrary to late predictions, the GOP held on to state legislatures in Texas and Arizona comfortably when the battle for control was expected to be a close-run thing. And they held the legislatures in Wisconsin, Michigan, Pennsylvania, North Carolina, Georgia and enough other key states that predictions are already being made that, based solely on the upcoming reapportionment of U.S. House seats among the states, the Republicans are headed to a decade-long majority. No wonder Mrs. Pelosi is saying this is her last term as speaker.
Even at the lawmaking level, progressivism was crushed. Voters in California, who went for Biden over Trump by about two to one, rejected an effort to repeal the 1996 Proposition 209 that prohibits the state from considering race, sex, color, ethnicity or national origin in public employment, education and contracting. At the same time, in progressive Colorado, voters said “Yes” to a cut in the state income tax rate from 4.63 percent to 4.55 percent. In Illinois, voters rejected a measure to establish a graduated income tax and in Montana voters limited the ability of local governments to interfere with the issuing of “concealed carry” firearms permits.
If there’s one takeaway from the 2020 election, it’s that, despite the aggressive support it received from donors, elected officials, candidates for office and the mainstream media, progressivism is on the decline. Heck, Joe “I am the Democratic Party” Biden even rejected it while debating Donald Trump. The course is set and if the new president—whoever it is—is smart enough to follow it then the sailing should be smooth. If not, it’s stormy weather ahead.
In Washington, bad ideas are like bad pennies: They keep turning up.
In early 2019 a group of well-connected Washington insiders was suggesting with the utmost sincerity that it would be best to have the Pentagon in charge of the push to 5G, the next-level communications network. The primary reason for this, they said, was national security and the threat posed by China.
President Donald J. Trump, a man who is in no way soft on China, wisely rejected their advice. In a Rose Garden press conference with Federal Communications Commission Chairman Ajit Pai, he rejected the government-led approach, calling it “not as good, and not as fast.” Instead, he committed to a 5G buildout that would be “private sector driven, and private sector led,” ending talk of a nationalized network.
Or so we thought. The Wall Street Journal recently reported that the idea of a 5G network run out of the Pentagon is once again on the table. A new proposal for a government-managed system under the supervision of a single company is once again under discussion. And, as before, the firm the DoD has in mind has little to no experience managing large information clusters.
The reason the idea’s come back has more to do with the swamp-dwellers who profit off big government contracts than with the science involved, the efficiency needed to bring 5G to life quickly, or the ability of firms in the private sector to make it all happen. It’s crony capitalism at its worst.
The best way to get to 5G is to allow the best minds and best engineers in the best firms to develop competing technologies – with the winner to be chosen in the marketplace. The plan being pushed yet again by the DoD gives one company – in this case, most likely Rivada Networks – control of the spectrum and its allocation as well as access to the protected intellectual property of those who’d be doing the job if the Pentagon had not taken the project over. At least that’s the opinion of 19 U.S. senators who wrote the department complaining that the way it wanted to move forward “contradicts the successful free-market strategy that has embraced 5G.”
Somehow what President Donald J. Trump likes to call “the race to 5G” is again in danger of being taken over by the officials in charge of it. Instead of fair competition, a vital future national and economic security project is being influenced unfairly by what leading congressional Democrats including House Energy and Commerce Committee Chairman Frank Pallone, D-N.J., say is a plan “specifically crafted to enrich President Trump’s cronies.”
Partisan hyperbole aside, it’s easy to see Pallone’s point. Building a national 5G network requires more than influential political connections. Rivada Networks, the company lobbying hardest to win the bid, is not exactly known for its ability to build out and manage broadband networks. Its proposal to manage FirstNet, a nationwide public safety broadband system, was shot down by the Interior Department over concerns about the insecurity of its technology.
One might think this would give the Pentagon pause, yet Rivada’s advocates within the department say they are confident the company can get the job done and have an operating network functioning within three years. Of course, these are some of the same people who have already spent more than a decade and hundreds of billions or more on the development of the new multi-service Joint Strike Fighter and still haven’t gotten it right.
Chairman Pai, a national hero for his work preventing the Internet from coming under the thumb of the U.S. government as a regulated utility, has dismissed the effort to get to a nationalized 5G run by the Pentagon as being a costly and counterproductive distraction from what America ought to be doing. The federal government moves slowly by design. Processes that work quickly in an authoritarian country like China don’t work in America. Here, roadblocks and rulemaking are the order of the day. Washington can’t compete with the U.S. private sector. In Beijing, the private and public sectors are indistinguishable.
Thanks to President Trump, Chairman Pai, and others who understand the stakes, America is a lot farther down the road to a working 5G network than people might believe. Thanks to a competitive market where the nation’s three largest carriers have all prioritized building the nation’s biggest, fastest 5G network, we’ll get there faster and in better shape than if we let the government do it.
In late spring, oil prices dipped below zero for the first time ever. Futures contracts for May delivery traded as low as negative $37 a barrel, as producers and speculators paid refineries and storage facilities to take excess crude off their hands.
In some sense, this historic moment was inevitable. Oil markets are completely saturated. Worldwide coronavirus lockdowns have depressed energy demand. And in March, Saudi Arabia and Russia announced they would increase production, thus exacerbating the glut.
President Trump has tried to help beleaguered U.S. producers. He recently mediated a deal between Saudi Arabia, Russia, and other major oil producers, who collectively agreed to cut production by nearly 10 million barrels a day.
But prices are still falling. And now, the White House is toying with other ways to prop up U.S. oil producers, ranging from tariffs on imported oil to direct cash payments to energy companies.
This desire to help energy companies, and the millions of workers they employ, is commendable — but ultimately counterproductive. In the long run, the industry will emerge stronger if the White House allows the free market to resolve this crisis.
This pandemic-induced economic crisis is going to be painful for the energy sector. Cost-cutting and layoffs are already underway.
But the industry is strong and adaptive, and has bounced back from past crises by investing in technology. In fact, economic pressure encourages the kind of innovation and belt-tightening that helps companies thrive in the long run.
The United States last faced low oil prices in 2014 and 2015, when Saudi Arabia ramped up output to try to cripple U.S. producers that specialized in fracking — a technique used to extract oil from underground shale rock. By early 2016, prices had dropped below $30 a barrel, well below what U.S. shale producers needed to break even.
The government didn’t come to the rescue, which forced frackers to get creative. They researched how to extract more oil for less, and came up with a variety of new techniques, like drilling several wells simultaneously and using drones to detect faulty equipment. As a result, the average break-even price for frackers dropped from $69 a barrel in 2014 to an average of $40 a barrel by 2017. Had the government tried to solve the problem by slapping tariffs on Saudi crude, the U.S. oil industry likely would have never set its all-time production record of 13.1 million barrels a day in February.
We can be confident the U.S. energy industry will apply its ingenuity to this crisis, too — because these days, it excels at invention. In 2019, the oil and gas sector increased adoption of digital technologies, including cloud data storage and new software. Over the next five years, digitizing could slash the cost of oil production by almost 10 percent.
By using sensor technology — tiny, data-tracking devices attached to oil-field gear — producer ConocoPhillips recently cut in half the amount of time it took to drill new wells in South Texas. Other companies are using data analytics to search for the best drilling locations.
In short, the pressures of a downturn are likely to encourage even more future-focused transformation. The industry doesn’t need to hide behind tariffs. If we trust the free market to encourage creativity, in the long run, we’ll all benefit from a cheaper and more efficient energy supply.
America used to be the place where, as Emerson is said to have observed, the person building the better mousetrap could be assured the world would beat a path to their door. We were driven by an entrepreneurial spirit that led to an increase in global living standards and produced some of the great advances of mankind.
Nowadays the pathway to prosperity is blocked by plaintiffs’ lawyers, federal and state regulators, crusading consumer advocates, environmental activists and others who believe the only institution on which we can rely to solve the really big problems is government.
That’s a shame because the spirit of free enterprise problem-solving is still alive and well. Everyone who realizes there’s profit to be made in coming up with solutions is hard at work doing what so many of the so-called smart people say is impossible.
“We are a nation that knows how to solve big problems when we set our minds to it,” says Nate Morris, the CEO of Rubicon, a technology company at the leading edge of 21st century waste management. “Waste is a big problem, and we should not wait for someone else to try to solve it. We should do the work, we should use innovation and free markets to drive transformation, and we should build a stronger, more resilient economy in the process.”
The numbers alone are scary. According to some estimates, over the next ten years nearly 95 million metric tons of plastic waste the United States once sent to China for permanent disposal will have to be put elsewhere thanks to import restrictions.
Whether or not it can be done, an effort must be made to try. Right now there are two approaches. One, as typified by Rubicon’s efforts, relies on innovation, investment, and consumer-driven demand to create a new infrastructure that makes greater use of recycled goods to manage waste and prevent the build-up of discarded plastics and other items the American shopper depends upon. The other approach, the one government regulators, social justice warriors, and those like them prefer, is to ban the use and manufacture of certain items no matter how expensive, inconvenient, or comparably unsafe the alternatives might be.
On Wednesday Rubicon issued a report, Toward a Future Without Waste, that shows how technology-based solutions can increase the proliferation of sustainable products. The evidence comes from its experiences delivering results for its customers, with plenty of examples demonstrating that the market-based approach to waste and emissions reductions works. The company found, for example, that local governments could generate significant cost savings while sending fewer materials to landfills through making better use of technology.
Using the RUBICON SmartCity technology suite “helped the city of Atlanta save up to $783,453 annually while reducing the recyclables going to landfill by 83 percent by adjusting the city’s solid waste service schedule,” according to the report. As one estimate has it, it has the potential to save U.S. cities up to $208 million over the next 10 years through reduced disposal costs, optimized fleets, and other metrics. For cash-strapped urban centers like Atlanta, that’s money that can instead be channeled into childhood conservation education and other environmental stewardship projects that can create a pathway to the clean air, water, and environment everyone wants but is so often too expensive to get, we’re told by experts, without draconian changes to the way we live our lives.
Advances in technology have also made it easier to dispose of products that are hard to recycle. The fast-food chain Chipotle partnered with Rubicon to create a mail-back pilot program at 25 of its locations to keep single-use gloves out of landfills. From April 2019 through December 2019, the report says, more than 625,000 gloves were recycled, giving the company plenty of incentive to expand the program to all its stores.
“There are currently two ways to make money from waste. One is by setting up the equivalent of a utility, where big corporations and big government agree to a one-size-fits-all approach, charging businesses and households to haul away their waste and bury it,” Morris says. “The other is a free market-based, dynamic approach: cooperate with others and innovate to help people reduce or reuse more of their waste— and inspire a new generation to build on our progress to bring about the end of waste as we know it.”
This is the kind of private sector, technology-based innovation that can change the planet for the better while adding favorably to the corporate bottom line. It requires no government regulation, no special licenses, and no additional fees to bureaucratic institutions that “feed the beast,” all while giving us a cleaner world to live in.
Working from home is a massive lifestyle change, but there are things you can do to make it easier.
This week, owing to the coronavirus, many Americans are going to experience the highs and lows of working from home. While there are definite plusses to working from your own abode, it’s not all sliding across the living room floor in a white button-down shirt and socks.
I have worked from home for the past two years and it takes discipline, fortitude, and a solid work ethic. I have none of these things. So how do I manage?
A friend years ago told me that his mother and father met at work. The first thing his mother noticed was that when the boss left, his father was the only one who kept working. That’s a big part of working from home, that kind of self-motivation. But there’s a flip side to that: when home and work mix you are always at home, but you are also always at work. It’s important to set some boundaries.
Small things can make a big difference when you work from home. My biggest piece of advice might be to go outside during the day. Did you ever have that thing where you neglected to drink water for several hours and then you feel awful and you’re like, What is wrong with me? Then you drink water and instantly come back to life? Going outside is like that when you work from home. You go stir crazy like a boiling frog otherwise.
It’s very easy for the walls to start feeling like they are closing in when you telecommute. That’s why keeping your place clean is more important and more difficult. Not to sound too much like Jordan Peterson, but a messy place makes it harder to work effectively.
If you work 9-5 outside the house, then you probably spend about 7 or 8 hours awake in your house at most on a weekday. Now you will be doubling that, and the more time you’re in your place the messier it gets. Trust me on this: it accumulates fast. Maybe while everyone is hoarding toilet paper you can hoard some paper plates. Work is an excellent excuse to not do the dishes.
Now, this is going to sound pathetic and sad, but social media can be your friend when you work from home. In an office environment you chitchat, water cooler yak, call it what you will. The day at work is sprinkled with social interactions. Checking in on Twitter or Facebook isn’t just a time suck when you work from home; it helps keep you sane by socially interacting, albeit imperfectly, with other people.
Another thing worth considering is the concept of a virtual commute. Whether your IRL commute is short or long, it’s probably riddled with ritual. You might stop for a bacon, egg, and cheese, read the Post on the subway, pull in for a Half and Half at Dunkin, etc.
The commute is a home to work limbo. You aren’t working, but you are compelled to be where you are. Giving yourself a half-hour before and after working with similar rituals, like listening to a podcast, reading a book, or playing a game on your phone can help.
The most overwhelming thing about working from home for an extended period of time is that it is a lot of time in your own head. Traditional workspaces are full of novel diversions and distractions; your place is kind of just your place. In the absence of external stimulation, your mind turns in on itself, which can be a little jarring. Weird stuff will pop into your head. If you get mental claustrophobia, take some breaks. There’s no reason the rhythm of your workday has to be the same at home as it is at work.
Making your home your office, especially if it goes on for a long time, is a major lifestyle switch. But it’s one that is in many ways under your own control. Give some thought to what you want it to be like, how you want it to flow, and experiment with schedules and work patterns that work for you.
Finally, when you close the laptop, close the laptop. This is easier for some of us than others. As a journalist I’m always at work in some sense; news never stops, especially these days. But I still need to carve out time to log out and watch a movie, or do some cooking while listening to music. Let your home become your home again — at least until you wake the next day and start it all over again.
The Right to Life, Liberty and the Pursuit of Happiness
Through most of human history, there have been two ways by which humans have organized themselves: tribal and totalitarian. Tribes were based on families which came together to form clans, which combined to create tribes. In the end, what united the various clans into a tribe was the culture they all shared – language, values, customs and religion.
The primary driving force behind this move toward greater numbers was the power and defense that numbers afforded, along with the greater accumulation of wealth made possible by a wider variety of skills and a heightened group ability to take on ever larger projects, such as cities, roads, dwellings, and monuments. These factors eventually resulted in cities and nations. And empires. It was called “civilization.”
Nevertheless, the original loyalties to families and tribes have remained forceful elements in all societies.
As the more advanced “civilizations” grew in power and wealth, they grew also in territory, mostly by conquest of other countries. The management of the conquered territories was solved by the creation of a hierarchy of different classes of inhabitants: the ruler, his direct followers (usually military), the wealthy who provided financial resources, and the poor who were the vast majority of the population, whether slaves, peasants, serfs or servants, who supplied the labor on which the entire nation depended.
The average human life span was about 35 years. This pattern, with a few exceptions, endured for most of human history, until the 18th century. Then human life began a radical series of changes. Between 1700 and 2020, average life span grew from 35 years to over 70. Average annual income grew from a few dollars a year to $10,000 a year (Gallup, 2012). And world population, which had grown only slowly for thousands of years, rose from about 610 million in 1700 to nearly 8 billion in 2020 (Source: Worldometer).
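The population figures above imply a surprisingly modest average growth rate. A quick sketch (the function name is mine, used only for illustration) shows the compound annual growth rate implied by 610 million people in 1700 and roughly 8 billion in 2020:

```python
def cagr(start, end, years):
    """Average annual growth rate that turns `start` into `end` over `years`."""
    return (end / start) ** (1 / years) - 1

# Population figures cited in the text: ~610 million (1700) to ~8 billion (2020).
pop_growth = cagr(610e6, 8e9, 2020 - 1700)
# Roughly 0.8% per year: modest-sounding, yet enough compounding
# to multiply world population about thirteenfold in three centuries.
```

The point is that compounding, not any single dramatic year, produced the explosion in population and wealth.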
It started with one of the exceptions to totalitarianism mentioned above. Beginning in the early Middle Ages (c. 11th century), some European merchants began to form caravans to travel from place to place buying and selling merchandise. Since merchants in general were discriminated against in Medieval Europe, they were not subject to any specific prince, and they were freemen. They soon banded together for defense against outlaws and princes alike forming the Great Caravans of Europe. In time, many began to accumulate wealth and became bankers as well as traders. They were the first middle class, instrumental in the formation of the guilds of tradesmen which consolidated the identity of a middle class — neither nobility, peasants nor clergy – all of whom opposed them. [Note: trade routes and caravans existed throughout the ancient world from time immemorial but were not “free” of any jurisdiction.]
There had been numerous intellectuals who taught the separation of Church and State (including Thomas Aquinas in the 13th century) and limited government (e.g. John Wycliffe in the 14th century), but none had the political and financial strength to effect cultural changes. In the 18th century, however, these ideas combined with an emerging middle class to begin the most radical change in history. A new movement came into being and caused the revolutions which characterized the next two centuries, from the American Revolution (1776) to the Russian Revolution (1917).
This movement yielded three major views of how a country should be run: socialism and democracy as political systems and capitalism as an economic system. All involved the overthrow of the totalitarian government. The difference was in who led the revolt: the Europeans (and later the South Americans and others) were led by the poor people; the Americans by the middle class. The poor people had no experience of handling money or building an economy. They considered the wealth of the nobility a bottomless pit and they invented socialism. The Americans were led by wealthy middle class lawyers, plantation owners and merchants. They feared the power of governments and invented democratic capitalism.
The different types of socialism will be discussed next week. But first, let’s look at America’s democratic capitalism.
The American Way
The United States of America was a land controlled by people who had escaped both the walls and the comforts of the Old World and had survived in an environment which rewarded courage, skill and endurance rather than birth and privilege. Their bias ran against government rather than in its favor. They saw government as a greedy king out to take away their liberty. They therefore fashioned a government limited in every way by competing forces: the federal government by the states, the president by the legislature, each House of Congress by the other, everybody by the courts – and so on down the line to the local dogcatcher.
The purpose behind this design was to keep government officials from ascending to the powers of that old king. They understood intuitively the saying of Lord Acton a century later: “Power tends to corrupt, and absolute power corrupts absolutely.”
What they have left us is the American version of a capitalist society. It is dynamic, constantly changing. The poor may not always be poor; the rich may not always be rich. In fact, most Americans (58.5 percent) will spend at least one year below the poverty line at some point between ages 25 and 75 according to Yale University’s Jacob S. Hacker (The Great Risk Shift, New York, 2006). The wealth of the society is expected to grow constantly through the creation of new opportunities, new products and services, new jobs, new skills, and new technologies, leading to new and expanding wealth.
For Americans, the fundamental error of socialism is that it does not account for the creation of that wealth in the first place. Government cannot confiscate what isn’t there. Socialists foresee the proverbial pie of underclass income being cut into more and more pieces; Americans keep creating a bigger pie.
America’s Democratic Capitalism
The United States of America has brought together economic capitalism and political democracy in a dynamic tension which we call democratic capitalism, and which has produced the most prosperous nation in the history of the world. Its greatest attribute is that it provides hope – hope that the poor may be able to escape the bonds of poverty as so many Americans have done in the past. This hope is the shining city on the hill which still attracts millions.
It has taken Americans most of our history as a nation to achieve the balance by which capitalism is accountable to democracy, and there are still many problems to be solved. Nevertheless, Americans are always optimistic.
The motivation for individual Americans to persevere in pursuit of their personal goals is provided by the real and potential ownership of private property. No other motivator – not coercion, not slavery, not charity, not communal property – not even religion – has ever been found which can impel vast numbers of individuals in a society to be hard working and creative. Providing a good life for oneself and one’s family is a motivator above all others.
Life, Liberty and the Pursuit of Happiness – the American Way
Our history has proven that personal freedom is a necessary prerequisite for the success of this system. An oppressive government – even if well-intentioned – sucks out the initiative required to make an ever-better life for all of us. Personal freedom without economic freedom is no freedom at all. Capitalism, in a refined and mature linkage with democracy, provides the economic power which makes freedom possible.
The challenge to Americans is not to change an evil system; it is to live up to the ideals which are required for that system to succeed.
Column: How the left uses corporate America to evade democracy
Time was, CEOs of mighty enterprises shied away from politics, especially hot-button social and cultural issues. They focused instead on the bottom line. They maximized shareholder value by delivering goods and services to customers. Some businessmen still operate by this principle. In doing so they provide not only for their employees and CEOs and board members but also for the institutions—pensions, individual retirement plans, index funds, hospitals, philanthropies—invested in their companies.
That is no longer enough for many of America’s richest and most powerful. Suddenly, corporate America has a conscience. Every week brings new examples of CEOs intervening in political, cultural, and social debate. In every instance, the prominent spokesmen for American business situate themselves comfortably on the left side of the political spectrum. Shareholder capitalism finds itself under attack. Not just from socialism but also from woke capitalism.
These outbursts are not just virtue signaling. Nor is the left-wing tilt of corporate America merely a response to the “rising American electorate” of Millennial, Gen Z, and minority consumers. What is taking place is not a business story but a political one. What is known as “stakeholder capitalism” is another means by which elites circumvent democratic accountability.
Corporate managers find themselves at odds with at least 46 percent of the electorate. The divergence is not over jobs or products. It is over values. The global economy generates social inequalities as much as economic ones. Many of the winners of the global economy justify their gains by adopting the rhetoric, tastes, ideas, and affiliations of their cultural milieu. Their environment is inescapably center left.
Even so, the social justice agenda of corporate America is not only meant to appease voters, or even to placate Elizabeth Warren. Some of these businessmen really believe what they are saying. And they are beginning to understand that they have another way—through social position and market share—to impose their cultural priorities on a disagreeable public.
The trend began as a response to the Tea Party. In 2010 the “Patriotic Millionaires” began advocating for higher marginal tax rates. A few years later, when state legislatures passed laws opposed by pro-choice and LGBT groups, corporations threatened or waged economic boycotts. Large individual donations made up more than half of Hillary Clinton’s fundraising; for Donald Trump the number was 14 percent.
CEOs protested the implementation of President Trump’s travel ban in 2017. The following year, after two black men were arrested at a Philadelphia Starbucks, Howard Schultz closed stores nationwide so his more than 175,000 employees could be trained in diversity, equity, and inclusion. Earlier this summer, Nike pulled shoes featuring the Betsy Ross flag after Colin Kaepernick raised objections. Recently four major auto companies struck a deal with the state of California to preserve fuel economy standards the Trump administration opposes.
Business has provided ideological justification for its activities. In mid-August, a group of 181 members of the Business Roundtable, including the CEOs of Morgan Stanley, GM, Apple, and Amazon, issued a statement redefining the purpose of a corporation. “Generating long-term value for shareholders” is necessary but insufficient. In the words of Jamie Dimon, business must “push for an economy that serves all Americans.” A few weeks later, one of the Business Roundtable signatories, Walmart CEO Doug McMillon, announced that America’s largest retailer would end sales of ammunition for handguns and for some rifles. Once its current inventory is exhausted, of course.
“We encourage our nation’s leaders to move forward and strengthen background checks and to remove weapons from those who have been determined to pose an imminent danger,” McMillon wrote. “We do not sell military-style rifles, and we believe the reauthorization of the Assault Weapons ban should be debated to determine its effectiveness.” Note the use of the first-person plural. Of Walmart’s 1.5 million employees, more than a few, one assumes, do not believe it is necessary to “strengthen background checks” or debate “the Assault Weapons ban.”
To whom does the “we” in McMillon’s statement refer? To everyone who thinks like he does.
“You have a business acting in a more enlightened and more agile way than government,” is how one MSNBC contributor enthusiastically described Walmart’s directive. Left unsaid is why government has not, in this case, been “enlightened” or “agile.” The reason is constitutional democracy. The electorate, like it or not, continues to put into office representatives opposed to gun registration and to a renewal of the Assault Weapons ban. And these representatives, in turn, have confirmed judges who believe the Second Amendment is just as important to self-government as the First and Fourteenth.
Much of Western politics for the last decade has involved elites figuring out new ways to ignore or thwart the voting public. Barack Obama was following in the EU’s footsteps when he went ahead with Obamacare despite Scott Brown’s victory in Massachusetts in January 2010, and when he expanded his DACA program to the parents of illegal immigrants brought here as children despite Republican gains in the 2014 election and despite his own admission that he lacked authority.
James Comey’s towering ego and self-regard compelled him to interfere in the 2016 election with consequences we can only begin to reckon. Over the last two-and-a-half years, district judges and anonymous bureaucrats have impeded and obstructed the agenda of a duly elected chief executive. A few weeks ago a former governor of the Federal Reserve suggested in Bloomberg that the central bank should thwart Trump’s reelection. And in England, elite resistance to the results of the 2016 Brexit referendum and to the 2017 parliamentary invocation of Article 50 has brought the government into a crisis from which there seems no escape.
In such an environment, one begins to see the appeal of nongovernmental instruments of power. What might be rejected at the ballot box can be achieved through “nudging” in the market and in the third sector. If you can’t enact national gun control through Congress, why not leverage the economic and cultural weight of America’s largest corporations? The market, we are told, is not a democracy.
Oh, but it is. The market may be the ultimate democracy. “The picture of the prettiest girl that ever lived,” wrote Joseph Schumpeter, “will in the long run prove powerless to maintain the sales of a bad cigarette.” Woke capitalists remain accountable to consumers and to shareholders. The audiences of ESPN and of the NFL cratered when those institutions elevated politics over consumer demand. Hollywood’s anti-American offerings routinely flop. Public opinion, in the form of popular taste, rules. Shareholders of publicly traded companies are a type of electorate. The companies that do not satisfy customers will disappear. Or shareholders will demand changes to management to prevent such an outcome.
The politicization of firms is a double-edged sword. The responsible stakeholder CEOs may have the best of intentions. They might assume they are doing the right things not only by their companies but also by their societies. What they fail to understand is that corporations acting as surrogates of one element of society, or of one political party, will not be treated as neutral by other elements, by the other party. By believing their superior attitudes will save capitalism, our right-thinking elites are undermining its very legitimacy, and increasing the severity of the ongoing populist revolt.
Something feels off in the timing of our debate over the economy. A loss of faith in free markets, among intellectuals and the public alike, was only natural in the 1930s. But today? Intellectuals on the left and the right are more convinced than ever that our economic policies are deeply misguided, at the same moment that unemployment rates and wage growth are the best they have been in decades. When Americans answer polls, they express less and less confidence in free-market capitalism — even as they express more and more satisfaction about economic conditions.
Perhaps people are evaluating these questions against different time horizons. They may, that is, think that the economy is performing well at the moment but has become less capable of delivering broad-based prosperity over the course of a generation. If today’s conditions persist long enough, then, the reputation of capitalism may recover.
Timing is relevant to our evaluation in another way. If our economy has gotten worse at generating sustained prosperity, worse enough to make a loss of faith in capitalism understandable if not justified, then it matters when this decline began.
In 2015, during the last presidential campaign, Hillary Clinton suggested that “for decades” the economy had been offering a worse deal for most people. Her explanation: “For 35 years, Republicans have argued that if we give more wealth to those at the top — by cutting their taxes and letting big corporations write their own rules — it will trickle down. It will trickle down to everyone else.” The election of Ronald Reagan, in other words, was the turning point. It followed that many of his policies should be reversed: The top tax rates should go back up and unions should be strengthened.
If economic conditions have been deteriorating for an even longer period, however, then merely reversing Reaganomics might not be enough. And it is common to run into claims, apparently backed by data, that suggest as much. The Pew Research Center notes that the average wage, adjusted for inflation, fell between 1973 and 2018. It had risen steeply from 1964 (when the data series began) through 1973. Then it dropped for roughly two decades, and over the next two recovered but did not get back to its peak.
If real wages have truly been stagnant for longer than most Americans have been alive, then the economy has not worked in anything resembling the fashion we expect. Economic growth has been mostly an illusion: We have more stuff only because more of us work, large numbers of women having joined the paid labor force. If this picture is accurate, we need to make radical changes either to the economy or to our expectations of ever-rising prosperity.
There are, however, two big reasons to doubt the stagnation thesis. The first is that non-wage benefits have become a larger and larger element of compensation. Perhaps they have become too large an element: The tax code encourages employees to get health insurance through their companies rather than take higher wages and buy coverage themselves, and there are reasons to think we would be better-off if the tax code did not do that. But non-wage benefits have economic value to employees, and so looking at wages alone will cause us to underestimate employees’ material welfare.
The second reason for doubt is that a common method of adjusting for inflation — the one used in the Pew numbers cited above — overdoes it. The center-right social scientist Scott Winship has been indefatigable in explaining why using the Consumer Price Index (specifically a measure called “CPI-U”) as the gauge of inflation is a mistake, and how it warps our understanding of economic trends. It overestimates housing inflation before 1983, and ignores how consumer behavior responds when prices change.
Since inflation compounds, small errors each year add up to major changes over decades. Use a better measure of inflation, one based on personal-consumption expenditures, and the average wage rose by 21 percent from 1973 to 2018. (Average compensation must have risen more.)
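A small sketch makes the compounding point concrete. The 0.4-percentage-point annual gap used below is a hypothetical illustration, not a published CPI-versus-PCE figure:

```python
def cumulative_understatement(annual_gap, years):
    """Factor by which real growth is understated if inflation is
    overstated by `annual_gap` (e.g. 0.004 = 0.4 pp) every year."""
    return (1 + annual_gap) ** years

# Hypothetical: inflation overstated by 0.4 pp/year from 1973 to 2018.
factor = cumulative_understatement(0.004, 2018 - 1973)
# Over 45 years the understatement compounds to a factor of about 1.2,
# enough to turn an apparent wage decline into a gain on the order of
# the 21 percent figure the text reports under a PCE-based deflator.
```

This is why the choice of deflator, nearly invisible in any single year, dominates comparisons made across decades.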
The data on median family income also show a reassuring amount of growth. The family in the middle of the pack in 2015 made 45 percent more, with the right inflation adjustment, than its counterpart in 1970.
But the same numbers may also explain some of the public’s dissatisfaction with the economy. Median family income grew by a spectacular 58 percent in the 15 years from 1955 to 1970, then grew another 11 percent from 1970 to 1985, and 24 percent from 1985 to 2000. But the median family income of 2014 was slightly lower than it was in 2000.
What happened is that after the turn of the millennium we went through an extended period of slow growth punctuated by one mild and one severe recession. Median family income dropped more than 7 percent from 2007 through 2011, the sharpest decline since this data series started in 1953. It did not recover completely until 2015.
We have had a few good years since then. But it is not surprising that during the last two decades many Americans came to feel that their economic circumstances were stagnant and insecure. It is not surprising, either, that many of them have the sense that things used to be better — or that a generation of young people who started their work lives in a slow-growth economy tend not to have positive attitudes toward capitalism.
Instead of five decades of economic stagnation, we have had two decades of weak growth. That record does not suggest that the pro-market policies of the 1980s and 1990s were fundamentally mistaken. It suggests, rather, that we have discrete problems that deserve to be tackled.
High on the list of needed changes should be a reform of our monetary regime. It failed badly over the last dozen years. In 2008, excessive fear of inflation led the Federal Reserve to signal that it was going to tighten monetary policy even as the economy was sinking into a recession. It kept monetary conditions too tight after the crisis hit, too, for example by encouraging banks to hold additional reserves. These policies made the recession more severe and the recovery weaker. That these failures are not more widely appreciated is symptomatic of the misguided thinking that continues to govern monetary policy.
Reforms should be undertaken in other areas, too. Our higher-education system is not working for most young people. Our immense health sector includes immense inefficiency. Regions of the country with high economic growth have imposed regulations that make it prohibitively expensive for less fortunately situated Americans to move there.
So we are called to be ambitious, but not revolutionary. Capitalism does not need to be overthrown or even rethought. Rather, the principles that make markets work need to be applied to some areas where they have not been present. Our economic system does not need dismantling. But it does need repair.