ANDY KESSLER: Does AI Dream of Electric Sheep?

I had a weird dream last night. I’m sure you’ve said that many times. But why? From Sigmund Freud to Carl Jung to Calvin Hall, dreams have been researched ad nauseam and there’s still no good answer to what shapes them. Unconscious desires? Keeping neurons active? Memory consolidation? Psychosexual wish fulfillment? No one knows. Even Freud supposedly noted, “Sometimes a cigar is just a cigar.”

But clues are starting to emerge from artificial intelligence and its use of brain-mimicking neural networks. In 1968, Philip K. Dick published the novel “Do Androids Dream of Electric Sheep?”—on which the 1982 movie “Blade Runner” was based. Turns out the answer is yes. . . .

In 1970 Finnish student Seppo Linnainmaa proposed “back-propagation”—again oversimplifying here—sending errors backward through neural-network layers to adjust the weights and better find patterns. It wasn’t until a 1986 paper, “Learning representations by back-propagating errors”—whose co-authors included Geoffrey Hinton, a patriarch of neural-network research—that back-propagation research took off. It took another 20 years for the method to scale, which is why voice and facial recognition work well today.
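For readers who want to see the oversimplification made concrete, here is a minimal sketch of the idea, not Linnainmaa's or Hinton's actual formulation: a toy two-input network learns XOR by running each example forward, measuring the error at the output, and sending that error backward through the layers to nudge every weight. All names and parameters are illustrative.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A toy 2-2-1 network. Weights start random; back-propagation
# will send output errors backward to correct them.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
lr = 0.5  # learning rate

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_error()
for epoch in range(5000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)  # error signal at the output
        dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]  # error sent backward
        for j in range(2):
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh[j] * x[0]
            w1[j][1] -= lr * dh[j] * x[1]
            b1[j] -= lr * dh[j]
        b2 -= lr * dy
after = total_error()

print(after < before)  # error shrinks as corrections flow backward
```

The backward pass is the whole trick: the output error `dy` is pushed back through the hidden layer to compute `dh`, so weights two steps from the answer still learn their share of the blame.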

Now the current theories: In 2020 neuroscientist Erik Hoel, then a professor at Tufts University, postulated that human brains get stuck overfitting in the same way and need to generalize to overcome the problem. Mr. Hoel suggests the brain “does that by having wild, crazy experiences every night.” He hypothesized that dreams are “a form of purposefully corrupted input likely derived from noise injected into the hierarchical structure of the brain.” Sound familiar? Yes: maybe dreams are the brain’s back-propagation, injecting noise and errors to un-overfit, or generalize, our own pattern recognition. That would explain why our dreams are so weird.
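Mr. Hoel's noise-injection idea has a close cousin in machine learning called dropout: during training, a random fraction of a network's activations are silenced so the model can't memorize, or overfit, its inputs. A minimal sketch of that regularizer, offered as an analogy rather than as anything from Mr. Hoel's research (the names here are mine):

```python
import random

random.seed(42)

def dropout(activations, p):
    """Randomly silence a fraction p of units, scaling the survivors
    by 1/(1-p) so the expected total activity is unchanged."""
    return [0.0 if random.random() < p else a / (1 - p) for a in activations]

# 1,000 units, all firing at strength 1.0, corrupted at p = 0.5.
h = [1.0] * 1000
noisy = dropout(h, p=0.5)
silenced = sum(1 for a in noisy if a == 0.0)
print(silenced, "of", len(h), "units silenced")  # roughly half
```

On the analogy, a dreaming brain corrupting its own input each night would be doing something similar: deliberately scrambling the signal so yesterday's patterns don't harden into rules that fit only yesterday.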

Interesting.