Mimsy Were the Borogoves

Mimsy Were the Technocrats: As long as we keep talking about it, it’s technology.


Future Snark

Jerry Stratton, October 13, 2021

Future Shock

Future Shock remained influential throughout the seventies and into the eighties.

I’ve been reading a lot of books lately about the future of technology from the perspective of the previous century. Predicting the future is always difficult, of course, but two stood out for how well they recognized what the future would have to deal with. Alvin Toffler’s Future Shock is from the perspective of the sixties (it was published in 1970), and Cyberwar is from the perspective of the nineties.1

Both got the future badly wrong, despite usually seeing where the transformations would come from. I think the problem for any futurist is aptly summed up by Harlan D. Mills in his introduction to a 1978 book about programming:

Back in 1900 it was possible to foresee cars going 70 miles an hour, but the drivers were imagined as daredevils rather than grandmothers. — Harlan D. Mills (BASIC with style)

Technology policies made by someone who cannot comprehend the future equivalent of 70 miles per hour will always be wrong. It is nearly impossible to jettison the spectacles through which we view the world. If you’ve rarely gone faster than walking speed2, the notion of seventy miles an hour is one of reckless abandon. Even if you were to foresee that this would be a casual speed—I drove 75 miles per hour today merely getting my groceries—your forecast would be one of a world gone mad.

Seventy-five miles an hour to buy groceries? What’s the damn hurry? It’s legal? Even grandmothers do it? You’re all going to die!

A person looking forward from 1900 might well realize that vehicles will have the capability to go over 70 miles per hour. Fewer will recognize that drivers will routinely go more than seventy miles an hour. Fewer yet will realize that advances in materials design will make such speeds reliable and that advances in technology will make control of such vehicles nearly automatic.

What they almost always miss is the adaptability of the human brain and body. That we will learn how to routinely manage such speeds and that our brains will not freeze at the sight of, say, vehicles passing by on the other side of the highway at relative speeds in excess of 150 miles an hour. That our reflexes will find shifting lanes at such speeds a normal task.

Humans adapt. It’s what we do.

Even more important, with the ability to travel at such speeds we will find reasons to need to travel at such speeds. Much like the famously apocryphal Bill Gates quote about 640 kilobytes3 being enough for anybody4, there is so much we want to do, and so much we want to do that we don’t even know we want to do it, that we always fill the available technology.

Critically, the ability to travel routinely at 70 miles per hour is one reason why grandmothers are healthier and look younger than their 1900s counterparts. Health care is more readily available. Food is more readily available. Travel, and exercise, is more readily available. Experts have a tendency to panic because they don’t see the second-order consequences, the third-order consequences, of innovation. Consequences that save lives and make living better.

One of the biggest flaws of the social reformers of any time isn’t failing to recognize the restrictions of bureaucratic governance. It’s a blindness to any solution outside of more bureaucratic governance. I first ran across this phenomenon when I read David Goldhill’s otherwise very good Catastrophic Care. Goldhill spent an entire book explaining why our health care system is killing people—then prescribed more of the same, but harder. By everything that Goldhill had written in his book’s previous chapters, his solution is guaranteed to kill even more people. It’s a common failing of policy experts venturing into the political arena, and Toffler does the same thing. Toffler sees, correctly, that our recent technological change has been breathtaking:

…if the last 50,000 years of man’s existence were divided into lifetimes of approximately 62 years each, there have been about 800 such lifetimes. Of these 800, fully 650 were spent in caves. Only during the last 70 lifetimes has it been possible to communicate effectively from one lifetime to another—as writing made it possible to do. Only during the last six lifetimes did masses of men ever see a printed word. Only during the last four has it been possible to measure time with any precision. Only in the last two has anyone anywhere used an electric motor. And the overwhelming majority of all the material goods we use in daily life today have been developed within the present, the 800th, lifetime.

What he misses, however, is significant. He recognized, for example, the psychologization of the economy: not just that people would buy clothing and food for their psychological value, but that “Great, globe-girdling syndicates will create super-Disneylands of a variety, scale, scope, and emotional power that is hard for us to imagine.”

He predicted this partly from what he called the “psychic loading” that airlines used to entice business travelers to use their service over their competition’s, from meals to sexy stewardesses. From this, he extrapolated sexoticism and super-Disneylands, but what we got instead was… affordable air travel. Airline travel is no longer limited to businessmen, and so there is no longer any reason to target ads to travelers on expense accounts.

What he missed was that the reason the airlines appealed in the manner they did is that they were a heavily regulated industry that didn’t have much leeway outside of the bounds of the government bureaucracies that controlled them. He knew that those bureaucracies were dying; he didn’t put together that this meant airlines would be able to compete on getting from point A to point B—their core function—at lower prices, rather than on the extras that don’t get us where we want to go. Every once in a while someone nostalgic for the stewardesses and food of the old days tries to start up an old-style airline. And it fails, because what people want is to get where they’re going with money left over to enjoy it.

Critically, Toffler recognized what few recognized at the time: that the rise of computers and automation threatened not to crush us under a stifling bureaucracy but rather to overthrow stifling bureaucracy.

Complicated corruption

…bureaucracies are well suited to tasks that require masses of moderately educated men to perform routine operations, and, no doubt, some such operations will continue to be performed by men in the future. Yet it is precisely such tasks that the computer and automated equipment do far better than men. It is clear that in super-industrial society many such tasks will be performed by great self-regulating systems of machines, doing away with the need for bureaucratic organization. Far from fastening the grip of bureaucracy on civilization more tightly than before, automation leads to its overthrow.

But somehow he managed to turn that into a need for better forms of stifling bureaucracy.

I’m not using “stifling” as an exaggeration. It is literally what his new bureaucracies would do: stifle technological progress. The problem he saw was too much changing too fast. He predicted, from that, an epidemic of a PTSD-like ailment he called “future shock”. He didn’t see that we’d all be able to metaphorically drive 70 miles an hour once the technology let us do it.

It is the thesis of this book that there are discoverable limits to the amount of change that the human organism can absorb, and that by endlessly accelerating change without first determining these limits, we may submit masses of men to demands they simply cannot tolerate. We run the high risk of throwing them into that peculiar state that I have called future shock.

Toffler’s proposed solution was well in keeping with policy experts: another large bureaucracy. Toffler’s large bureaucracies would decide what skills will be needed fifty years ahead and make sure that this is what people train for. He recommended experimental skill factories, each run by a different bureaucracy. His system needed lots of different bureaucracies because he recognized that large bureaucracies tend to produce monocultures. A single bureaucracy is likely to decide on the wrong skill set, much like funding bureaucracies tend to do today when choosing what medical research to fund.

When he wrote about government funding, for example, he almost hit the issue on the head: government research funding overwhelms all other funding and is directed by bureaucrats rather than by what most people want their future to be. This is what results in self-driving cars that can’t tell humans from non-humans. High resolution television sets that can’t handle a car driving past your house. Dishwashers that take hours to complete their cycles. Hospitals that are forbidden to build more hospital beds.

His solution to the problem of one bureaucratic monoculture was lots of different bureaucratic monocultures. His solution to one bad bureaucracy that runs people’s lives was a plethora of smaller bureaucracies that run people’s lives.

Further, because technology is advancing too quickly for humans to adapt, we also need a large bureaucracy to decide what scientific and technological breakthroughs are allowed. We need a bureaucracy to manage technological advancement.

Segway lineup

The latest Segway lineup reminds me of the scene in Almost Famous where they deliver the t-shirts and discover that most of the band is fading into the background.

…we must also design creative new political institutions… for promoting or discouraging (perhaps even banning) certain proposed technologies. We may wish to debate its form; its need is beyond dispute.

He recognized that it is immoral to stifle technological and scientific progress. But in the face of a crisis this big, a little immorality was justified. Someone has to block the future equivalent of the seventy-mile-an-hour car and the transistor and clean clothes. These were his actual examples (minus the bit about 70 miles an hour).

This sort of recommendation is nothing new. Throughout modern human history5 we have had crisifiers yelling that if we don’t manage human expansion from the top down, and do so immediately, we are all going to die. That Toffler’s solutions at the end of his book go against just about everything he wrote in the rest of it is also not unique.

Instead of saying, “okay, government bureaucracies caused these problems, I recommend getting rid of the government bureaucracies,” his solution was yet another bureaucratic doubling-down. Every new technology must be approved by the bureaucracy, which must ask “how will a proposed new technology affect the value system of the society?”

He didn’t ask “how will such a dictatorial bureaucracy affect our value system?” Policy experts don’t, as a rule, ask that question. Toffler sidestepped this question in a passing reference, but never addressed it, and it’s the most important question that should be asked of such a solution. He became a parody of Chesterton’s characterization of policy experts in Heretics:

And the weakness of all Utopias is this, that they take the greatest difficulty of man and assume it to be overcome, and then give an elaborate account of the overcoming of the smaller ones. They first assume that no man will want more than his share, and then are very ingenious in explaining whether his share will be delivered by motor-car or balloon. — G.K. Chesterton (Heretics)

Just letting people buy what they want to buy was, in his mind, no solution to the problem of choosing which technologies succeed. The wrong technologies might succeed, and then grandmothers will be able to drive 70 miles an hour!

His giant bureaucracies would not operate entirely above the masses. He would implement vast games, vaguely similar to the role-playing games that would come a few years later, to draw the public, currently “political eunuchs”, into “a continuing plebiscite on the future”. He didn’t realize that even if politicians and bureaucrats were to listen to this advice, it would still result in dangerously retrograde decisions. The people who say that they want better-looking stewardesses but only pay for getting from point A to point B are not going to provide advice that gives us better and cheaper air travel.

Several years ago, but well after Future Shock, a personal transport device called the Segway hit the news. Public agencies and corporations thought the Segway would be “more important than the Internet”, would revolutionize cities, and would require that cities be redesigned to accommodate the masses of people using them.

Cyberwar

In 1996, the Armed Forces Communications and Electronics Association published this “anthology on risks, challenges and solutions in the information age”. Great cover.

But no masses of people bought Segways. Segway, Inc., now makes more e-bikes than Segways. Had Toffler’s proposal been implemented, some government-corporate partnership would most likely have redesigned cities for a revolution that didn’t happen.

Future Shock was very influential in the seventies. A quarter of a century later, Cyberwar didn’t have to be influential; it was collected by insiders, literally a product of the military-industrial complex. It’s a series of early-to-mid nineties essays about how computer technology would advance in general, how it would be used in the future, and how this would (or should) affect military strategy.

At first blush it sounds so clear. So non-violent. So intangible. Then you get to thinking about it: Information Warfare.

Sadly, for all the money it spends, the “complex” was unable to provide analysis beyond that sort of snark. Snark can lead to insight—but only if it’s followed to its logical conclusion. “By the year 2001,” wrote one of the Cyberwar futurists,

…there will be over 2 billion teenagers in the world, most of them living in Asia and Latin America. Imagine trying to get a telephone call through to someone’s home in Mexico City or Beijing when that happens!

The authors did not seriously imagine what they joked about imagining. If they had, they might have recognized that grandmothers would routinely be driving at 70 miles an hour. That is, they would have imagined how telephones and telephone use would change to accommodate all those teens tying up phone lines.

The whole idea of “tying up phone lines”, that because one person in the family is talking on a phone no one else in that family can talk on a phone, is now incredibly archaic. That it would become archaic would have been the natural extrapolation of that snark. By 1996 it was clear that telephones and data were merging, and that tying up data because someone was talking on the phone was untenable. It had to change.

This, in my opinion, is why hard science fiction often gets the future wrong. It assumes that current problems with current technology will only be exacerbated when that technology spreads to the whole population and/or becomes much faster. But the best analysis is almost always not “how will society change because this is only going to get worse as technology advances”. It is “how will society change when that problem is overcome”. The smart money is always on such problems being solved.

When grandma’s driving 70 miles an hour, 70 miles an hour will no longer be a crisis.

There may well come a time when we cannot fill the available memory, when the future’s version of 70 miles per hour is too fast for us. But so far, that’s not been the smart way to bet.

There may well come a time when we must submit to a bureaucracy in order to solve an impending existential crisis. But the smart bet has always been that such bureaucracies make the crisis worse. They take away our ability to advance past what makes it a crisis.

The smart way to bet is that any attempts to manage progress, to turn back the 70 mile per hour car, will only increase human suffering. All the gridlock predicted by experts past? One of the reasons we don’t have it is that people are not forbidden from getting where they’re going quickly. Much of the gridlock we still have is because people are forbidden from getting where they’re going quickly. Government bureaucrats still see freedom of movement as a problem that must be solved through bureaucratic solutions like Vision Zero or Vehicle to Grid.

The result? People can’t get where they want to go. Supplies can’t get where they need to be. Emergency vehicles get stuck in traffic, or can’t even navigate streets redesigned to block large vehicles.

Technically, Toffler’s hypothetical Segway bureaucracy might also have simply banned the Segway and forbidden its production rather than redesign cities. But stifling Segways would have provided the bureaucracy with less power than stifling automobiles, which they want to do anyway. Technocracies almost always back the wrong technology.6

We saw this in Texas last February. An oil-rich state froze in three very long days because the sun doesn’t shine at night, turbines freeze in ice storms, and bureaucrats accidentally shut down power to the natural resources that provide power.

A long time ago, I wrote a parable, King Ludd. The gist of it was that if we let experts choose our technology we will be left with technology that only an expert could love. The result is that the process—the technology—will replace the goal—in the case of King Ludd, ending war.

We can learn a lot from looking at futurist predictions in the past. Experts turn out to have a sort of tunnel vision that exaggerates the problems of the future while blinding them to the costs of their solutions. And so they create solutions, usually overly complex ones, to solve problems that don’t exist or would be solved anyway, without regard to the often massive costs of those solutions. In The Madness of Experts I wrote:

At its heart, conservatism, like science, is a belief in the fallibility of experts. That the wisdom of millions of individuals acting in their own self-interest exceeds the wisdom and probity of experts trying to discern someone else’s best interests.

The scientific method is designed to overcome the fallibility of people. The Constitution, and conservative philosophy, is designed to overcome the fallibility of politicians and bureaucrats.

Examining what such experts have recommended in the past helps us understand why that’s true. However, if a world where experts are always wrong frightens or demoralizes you, take heart! In the second installment I’m going to talk about John G. Kemeny, Norbert Wiener, and Vannevar Bush.

In response to The pseudo-scientific state and other evils: In 1922, following the first world war, G.K. Chesterton discovered to his dismay that the evils of the scientifically-managed state had not been killed by its application in Prussia. Unfortunately, it was also not killed by its applications in Nazi Germany.

  1. The book reads so out of date that I kept accidentally typing “eighties” instead of “nineties” in this post. For the record, Cyberwar was published in 1996.

  2. While horses could go faster than human walking speed, they usually didn’t; whether with a rider or a carriage, they tended to go three to four miles an hour. Ten to fifteen miles an hour was rushing. That bit in the movies where you’ve got the knight on a horse and the squire walking along beside him is historically accurate as far as speeds go. If you wanted speed, you needed trains. But trains didn’t have freedom of movement; they were restricted to specific paths, and if they deviated from that path it was automatically a disaster.

  3. It’s almost as difficult to look back at the past as it is to look forward to the future—possibly more so. When I originally typed this, I wrote “640 megabytes”, because the idea of a personal computer with only 640 kilobytes is incomprehensible today. And that’s despite my having lived through it. My first computer only had 16k.

  4. Gates almost certainly never said this—as he says, “no one involved in computers would ever say that a certain amount of memory is enough for all time.” Programmers and users both—and the dividing line between the two was not a sharp one—were always butting up against the limit of available memory.

  5. I suspect I could just as well say “throughout human history” and leave off the “modern”, but I can’t say that for sure.

  6. Many technocrats remain enamored of trains, huge lumbering vehicles that can only go where tracks were laid years or decades past, and only on timetables that fail more and more often.
