Mimsy Were the Borogoves

Mimsy Were the Technocrats: As long as we keep talking about it, it’s technology.

Innovation in a state of fear: the unintended? consequences of political correctness

Jerry Stratton, October 10, 2018

King Ludd: The Leader of the Luddites, Drawn from Life by an Officer. May 1812.

There is always a culture clash between those who understand how productive industry actually works and those who gape at it like savages, believing it to be some kind of Heap Big White Man Magic. And where there is Magic, there are Sorcerors and Demons; for most people, particularly those of the primitive mindset, the large cloud of Unknowns is filled in by their imaginations with malice, conspiracy, and deviltry.

There’s been a lot of talk lately about how the software built into self-driving cars is racist. But the problems we’re facing are not that the software is racist, nor that the programmers are racist. Most, if not all, of these problems would be solved long before the technology was placed in a car if it weren’t for two potentially huge problems in software development today. Self-driving cars are at the forefront of both: a top-down desire to computerize and control on the part of the left, and a growing fear among innovators of research and technologies that might draw the attention of social media mobs.

Discerning which shapes and colors in a sensor feed are persons and which are not is a problem with myriad applications. Under normal circumstances, that problem would be solved for less dangerous applications long before the technology was used in vehicles. Unfortunately, there is a growing fear in the technology industry of making gadgets that accidentally offend, resulting in a social media crusade against either the company or the individual programmers that made the gadget or software.1

Both of these are part of a bigger problem, which is that progressives for the most part despise progress. The only progress they support is toward more government power, which is usually a regression to barbarism, not progress toward civilization.

Anything that improves the human condition—abundant food, cheap energy, easy travel, water management—is an evil that must be stopped. Even to the point of regretting the invention of fire. A big example from recent memory is California’s water shortage after a relatively short drought. In sane times, California would never have had a crisis just because of normal cyclic changes in rainfall. They would have built the dams they needed, decades ago, to withstand an easily foreseen temporary reduction in rainfall.

Physical progress—dams, roads, pipelines, farms—can be stopped through regulatory warfare. But software is very different. Software doesn’t impact the environment2, it can be created by individuals and very small teams, and it can be quickly iterated through versions to provide a better human experience.

So progressives try to block software advancement with virtual equivalents of physical regulations, such as imposing net neutrality on the Internet. Net neutrality is an Orwellian way of saying that “things people want to do” can’t be prioritized over “things people don’t want to do.” Net neutrality works against the market forces that improve our lives—that, in fact, made the Internet the amazingly useful thing it is today. What net neutrality means is that companies can’t provide better service for their customers3 by prioritizing what their customers want. Nor can they prioritize based on what services their customers subscribe to. This is the essential feedback loop that gives us great things and great service.
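To put the word “prioritize” in concrete terms, here is a toy Python sketch of class-based traffic scheduling. The service classes, priorities, and packets are invented, and real traffic shaping is far more elaborate; the point is only that putting a subscriber’s video and voice traffic at the head of the queue is exactly the distinction a neutrality rule forbids.

```python
import heapq

# Toy sketch of traffic prioritization. Service classes, priorities, and packets
# are invented for illustration; real traffic shaping is far more elaborate.
PRIORITY = {"video_subscription": 0, "voip": 1, "bulk_download": 2}  # lower = sent sooner

def schedule(packets):
    """Return packets in the order a prioritizing provider would send them."""
    queue = [(PRIORITY.get(service, 9), arrival, service, payload)
             for arrival, (service, payload) in enumerate(packets)]
    heapq.heapify(queue)
    return [(service, payload)
            for _, _, service, payload in (heapq.heappop(queue) for _ in range(len(queue)))]

arriving = [("bulk_download", "iso chunk"), ("video_subscription", "frame 1"),
            ("voip", "call audio"), ("video_subscription", "frame 2")]
print(schedule(arriving))
# The subscribed video and the phone call go out first; the background download waits.
```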

But net neutrality is not the only delta smelt in the virtual world. Part of the reason for the extreme intolerance of the left when it comes to software bugs is that finger-nanny mobbing is one of the few means by which software developers can be scared off of making innovative software. Piling onto developers, requiring quotas rather than hiring on competency4, and imposing totalitarian-like codes of conduct on development teams encourage a culture of finger-nannying and make programmers think twice about moving into any new area, especially one that involves people.

Automobile software algorithms that have trouble discerning human from non-human have this problem partly because they haven’t been tested in lesser applications. The reason is that those lesser applications aren’t worth the trouble of facing the intolerant left. Google’s photo app that mistakenly labeled a black man as a gorilla didn’t kill or injure anyone as a car would. But using new, innovative software in a photo app isn’t worth the trouble that will inevitably arise when the bugs that all software contains offend someone on the left. That’s why, rather than perform the iterative fixes that would be necessary to improve their image recognition, or iteratively improve their testing processes, Google simply removed all gorillas from the app. No gorillas, no accidental misidentification. And no chance of fixing the underlying problem before the technology moves into more dangerous areas such as self-driving vehicles. But fixing the problem would mean iterating through solutions that lower the false identification rate without zeroing it. Google decided improving the software wasn’t worth the potential cost of facing social media mobs.
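As a rough illustration of the difference, here is a hypothetical Python sketch with invented labels and confidence scores. Suppressing a label is a one-line post-processing filter; actually improving the classifier means retraining and retesting, so the underlying misidentification is hidden rather than fixed.

```python
# Hypothetical sketch: labels and confidence scores are invented.
# Blocking a label only hides the classifier's mistake; it does not fix it.
BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey"}

def visible_predictions(predictions):
    """Drop blocked labels from a classifier's (label, confidence) output."""
    return [(label, score) for label, score in predictions
            if label.lower() not in BLOCKED_LABELS]

raw = [("gorilla", 0.62), ("person", 0.31)]   # the misclassification still happened
print(visible_predictions(raw))               # [('person', 0.31)] -- it just isn't shown
```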

The dangers are all the worse because the kinds of projects that will move forward in this environment of fear are big, top-down projects like electric or self-driving cars. Projects where the potential for harm is far greater simply because the products are used in far wider areas and put into play sooner than they should be. Untested, wide population, features by committee. It’s a recipe for disaster.

All software contains bugs. Some of the bugs are going to be embarrassing. To pillory developers when their software contains bugs is to discourage software, period. The best we can hope for is to use software in less-demanding applications before we use it in more-demanding ones, so that the bugs can be found before they become dangerous.

Some software is better tested than other software, but all software will exhibit unintended behavior unless it is never used. It is the nature of software. It is perfectly reasonable to be intolerant of software flaws after they have been reported and gone unfixed. It is completely unreasonable, and dangerous, to be intolerant of bugs when they are first discovered. In retrospect, it will always seem that a particular bug should have been obvious. But there are near-infinite potential bugs in all useful software; that kind of hindsight is useful only if you want to paralyze software development in the lesser applications where new ideas are best introduced.

Almost all of the software being tested in cars today should also be tested in other applications, so as to progress through iterations to the potentially amazing revolution of self-driving cars. But it isn’t worth braving the intolerance of progressives, who, of course, hate progress and so don’t see their intolerance as a problem.5 This is especially a problem for the smaller companies, less able to withstand a social media mob, that would normally be making innovative software for use in smaller applications with a smaller customer base.

Fear breeds paralysis. As the mobs get worse, more progress will, instead of iterating safely and slowly, happen in dangerous spurts—or not at all. The more successful the left is, the more dangerous their stranglehold on progress becomes.

One of the ways you can know that progressives prefer blocking progress is that their recommendations tend to be insane unless you look at them from that perspective. For example, when trying to flag criminals, the recommendation is not to improve the success rate until the predictions are correct, but rather to degrade the success rate until the predictions are fair:

For a machine-learning algorithm that exhibits this kind of discrimination, Hardt’s team suggested switching some of the program’s past decisions until each demographic gets erroneous outputs at the same rate. Then, that amount of output muddling, a sort of correction, could be applied to future verdicts to ensure continued even-handedness.

“Correction” is an Orwellian euphemism for telling the software it’s wrong when it’s right. Rather than make it less wrong for black defendants, make it more wrong for white and Asian defendants. The correct solution is to expect the humans using the AI systems to override the AI when it is wrong, not to make it wrong more often. Train the AI to make better decisions in the future. That’s the point of AI: it can interact with humans who can train it.

That, however, would expose those humans to social media mobs, which is part of why businesses want to offload such predictions away from humans in the first place.
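To see concretely what the quoted “correction” amounts to, here is a minimal Python sketch. It is not Hardt’s published algorithm, and the predictions, labels, and group names are invented; it simply flips some of the better-served group’s correct answers until every demographic is wrong at the same rate.

```python
import random

# Minimal, hypothetical sketch of the quoted "correction" -- not Hardt's actual
# method or code. Predictions are 0/1; groups, labels, and data are invented.
def error_rate(preds, labels):
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)

def equalize_error_rates(preds, labels, groups, seed=0):
    """Flip correct answers in the better-served groups until all error rates match."""
    rng = random.Random(seed)
    preds = list(preds)
    by_group = {g: [i for i, gg in enumerate(groups) if gg == g] for g in set(groups)}
    rates = {g: error_rate([preds[i] for i in idx], [labels[i] for i in idx])
             for g, idx in by_group.items()}
    target = max(rates.values())                        # match everyone to the worst group
    for g, idx in by_group.items():
        correct = [i for i in idx if preds[i] == labels[i]]
        rng.shuffle(correct)
        n_flip = round((target - rates[g]) * len(idx))  # how many right answers to ruin
        for i in correct[:n_flip]:
            preds[i] = 1 - preds[i]                     # deliberately introduce an error
    return preds
```

Nothing in this makes the worse-served group’s predictions any better; it only makes the other groups’ predictions equally bad, which is precisely the trade objected to above.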

The left’s tendency to use Orwellian terminology is a similar example. Recently people became outraged that a Google search on “are black people smart” brought up a racist diatribe claiming that “Blacks Are the Least Intelligent Race of All”. The reason for this was not anyone gaming the search engine, nor was it that Google’s algorithms were racist. The problem, again, was political correctness. Remember when Apple’s Siri didn’t show any links to Planned Parenthood when people asked for information about abortion? Planned Parenthood deliberately avoided using the term abortion even though that’s what they do. Ask Siri about family planning and it would show all sorts of Planned Parenthood links, even though Planned Parenthood doesn’t have anything to do with family planning, only prevention.

The automotive industry has long been afraid of standing out, but if Google’s reaction is any indication, the fear is spreading. And the same happens in genetics research. Any non-racist who researches the genetic basis for intelligence cannot discuss any racial factors, or they will no longer receive funding and may well lose their job for engaging in hate speech.

If the only people allowed to discuss a topic are those who are wrong, then only wrong results will show up in a search on that topic. Just as, if you’re only allowed to talk about something using a euphemism, it will only show up when that euphemism is used, and not under the real term.

But it gets worse: if a good researcher does manage to overcome the lack of funding, and somehow manages to find a publisher willing to face the mob, their work still isn’t going to show up in search results. Modern search engines rely on crowd intelligence, which works amazingly well—as long as the crowd isn’t threatened with the loss of jobs and friends. Not only will good research not be linked; when it is linked, it will be called racist. Which means the research showing that racists are wrong will only come up in searches for racist interpretations of intelligence, while the racists, who don’t consider themselves racist, will come up in normal searches.
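The “crowd intelligence” in question is largely link analysis. Below is a toy power-iteration sketch in the spirit of PageRank, with an invented link graph; real ranking systems are far more elaborate, but the core signal is the same: a page people are afraid to link to ranks poorly no matter how good it is.

```python
# Toy link-based ranking in the spirit of PageRank. The link graph is invented;
# real search ranking is far more elaborate, but the crowd signal is the same.
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for outs in links.values() for p in outs}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outs in links.items():
            for target in outs:
                new[target] += damping * rank[page] / len(outs)
        rank = new
    return rank

# "good_study" gets almost no inbound links because linking to it is risky;
# the widely linked takes dominate the results regardless of their quality.
toy_web = {
    "blog_a": ["popular_take", "popular_take_2"],
    "blog_b": ["popular_take"],
    "forum": ["popular_take_2", "good_study"],
    "popular_take": ["popular_take_2"],
    "popular_take_2": ["popular_take"],
    "good_study": [],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page:15} {score:.3f}")
```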

This is not a fault of the algorithms. It is a fault of political correctness—euphemisms and the Two Minutes Hate for important lines of research. Even if search engines adjust their results to account for political correctness, the research necessary to save lives still isn’t being done.

This has already happened: medical researchers have become afraid to say that men and women are biologically different, and that because of this it is critical to research men and women separately. But men and women are different regardless of what social media mobs say, and this lack of research has killed women both directly and by omission. Both cures and diseases affect men and women differently.6

If political correctness is allowed to advance a culture of fear in science, programming, and engineering, it will not only endanger us by holding back progress. It will mean that the new technologies which are developed will become more and more dangerous. Combined with the affinity of governments and the left for influencing policy by leveraging larger businesses and using top-down mandates, these dangers will affect larger and larger populations.

In response to The plexiglass highway: Government bureaucracies can cause anything to fail, even progress.

  1. I have a suspicion that the reason Duolingo’s error-reporting option is so vague is that they want to maintain plausible deniability.

  2. Though I do occasionally see trial balloons suggesting that it does, and thus that computer time ought to be rationed.

  3. The original title or description of this Verge article was, according to Techmeme, “AT&T mobile subscribers will be able to stream DirecTV Now without using their data, as the company doubles down on disregard for the ethos of net neutrality.” Whoever wrote that wanted AT&T mobile customers to pay more for streaming.

  4. A solution which, of course, severely disadvantages the smaller startups who compete with larger corporations.

  5. The reason they like self-driving cars is that they present a great opportunity to hold back progress by tying all individual travel into central control systems, even to the point of draining your car’s battery overnight.

  6. It is very weird, reading articles about why scientists tend not to acknowledge sex differences, to not see any reference to scientists, academics, and programmers who have lost their jobs for doing so. But imagine how research will be affected if the left can attack scientists they disagree with via accusations as vague as those recently made against Judge Kavanaugh.
