Being Wrong and Being Smart

“If a machine is expected to be infallible, it cannot also be intelligent.”
— Alan Turing

This quote got me thinking. I asked TK Coleman what he made of it and we had a pretty interesting discussion on the relationship between being right and being intelligent.

If you’re always wrong, you’re not smart. But if you’re never wrong, does that mean you’re smart? TK said no, and I think he’s onto something.

Imagine someone who plays Trivial Pursuit. Getting a lot of answers right is impressive. But if someone gets every single answer perfectly correct every single time, something’s up. They memorized all the correct answers. They’re unerring, but also kinda dumb.

Why is it dumb?

For one, it’s an odd use of time. Who would decide that getting every answer right in Trivial Pursuit was worth the time to memorize all the cards, versus everything else you could do with that time? Probably someone whose perspective is out of whack. Maybe they overvalue winning a meaningless board game. Maybe their opportunity cost is low.

Another problem is that it signals a misunderstanding of the point of Trivial Pursuit. It’s meant to be a challenge. It’s fun when you know some of the answers, but not all of them. It’s fun when you have to work to remember and make associations. To memorize all of them and never miss is to not play the game everyone else is playing. It shows a kind of social stupidity.

It might also imply fear or arrested development. If Trivial Pursuit cards can be memorized, why not apply that brainpower to a new, bigger challenge? Why stick with games you’re guaranteed to win? Engaging only in activities where you’re the schoolyard bully signals something missing in your motivation.

The analogy isn’t perfect, but you get the general idea.

So maybe what Turing and TK are getting at is that intelligence is more complicated than knowing stuff. Maybe it’s about the ability to learn. Maybe it’s about change and progress. Progress can’t happen without new challenges. New challenges are, by definition, full of unknowns. Unknowns mean you won’t know the right response every time. You’ll get stuff wrong. You need to incorporate that feedback into your worldview so you can alter your understanding, then get it right. The process is intelligent, even if the answers at individual steps are sometimes wrong.

Maybe to be infallible is to be immobile.

I’m not sure if this is what Turing meant, but there’s something in it.