Thoughts on the Neuralink Demo
9/25/20
I recently watched Elon Musk's demo of the Neuralink brain-computer interface. The event was, by its own description, a recruiting event, and much of the Internet yawned at the lack of a real announcement. The most interesting thing to come out of it was not from Musk but from AI researcher and podcaster Lex Fridman, in particular Fridman's extensive musings (podcast link) on the potential of the technology.
Lex does an excellent job of covering the big areas for this technology: alleviating suffering, understanding consciousness, human augmentation, virtual reality, variations on telepathy, immortality, and implications for AI. He spends less time on the technology's dystopian potential, which is enormous (and of which, to his credit, Musk is very aware).
I then listened to a writing podcast where the hosts assumed, given the "demo", that Neuralink's technology was nearing reality. For most of the items on Lex's list, it is quite far away.
I was reminded of the term “macromyopia”: the human tendency to overestimate the short-term impact of a technology and underestimate its long-term effects. I first heard the term from Mitch Kapor in 2006, when I was working on virtual worlds and virtual economies. At the time, our startup was completely guilty of overestimating the impact of virtual worlds. We were way too early and thus died (years later, it's still too early for mainstream virtual worlds). Needless to say, the term made an impression on me.
One of the most exciting, and most dangerous, aspects of this brain-computer research is that in order to get anywhere, we need to understand how thought and memory actually work. When I was doing research for Becoming Monday, the scientific consensus seemed to be that memories are really simulations. In other words, unlike a computer, which stores exact details and facts, our minds store sketches, which we fill back in as needed (in the case of my brain, usually quite poorly).
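To make the contrast concrete, here is a toy sketch in Python. It is my own illustration, not a model of any actual neuroscience: the names and data are invented. A computer recalls by exact lookup, while a sketch-based memory keeps only a gist and reconstructs the details at recall time, sometimes incorrectly.

```python
# Toy illustration only: exact, computer-style storage vs. "store a sketch,
# fill it in later" recall. Everything here is invented for the example.

exact_store = {}
gist_store = {}

def remember_exact(event, details):
    """Computer-style memory: every detail is kept verbatim."""
    exact_store[event] = list(details)

def remember_gist(event, details):
    """Sketch-style memory: keep only a couple of salient details."""
    gist_store[event] = details[:2]  # the rest is simply lost

def recall_exact(event):
    return exact_store[event]  # perfect playback

def recall_gist(event, fill_in):
    """Reconstruct at recall time: the stored gist plus whatever the
    fill-in step invents to complete the picture."""
    gist = gist_store[event]
    return gist + fill_in(gist)

details = ["red front door", "jazz band", "chocolate cake", "met Sam"]
remember_exact("party", details)
remember_gist("party", details)

print(recall_exact("party"))
# ['red front door', 'jazz band', 'chocolate cake', 'met Sam']
print(recall_gist("party", lambda gist: ["some kind of cake?"]))
# ['red front door', 'jazz band', 'some kind of cake?'] -- plausible, not faithful
```

The point of the analogy is simply that recall is generative: if memory works this way, an interface that "reads" a memory is reading a reconstruction process, not a stored file.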
We don't understand the true purpose of a simulation-based approach to memory, or really how it works, and I don't see how one could create an interface without that understanding. It would be like trying to translate hieroglyphics without a Rosetta Stone. There are some very talented people working on this, but it's going to take a breakthrough. This is why, even though Musk says we'll see this brain technology "coming a long way away" (his most recent interview on Kara Swisher's Sway podcast), I suspect this technology will be like many hard R&D problems: it will appear to be moving really, really slowly, until all of a sudden it moves fast.
And of course, if we are able to create an effective interface to our thinking, you can guarantee that all of the sci-fi nightmares about brainwashing will play out at the hands of unscrupulous parties (and probably even as side effects of well-intentioned ones).
This technical frontier offers huge opportunities to do amazing things for people, and huge opportunities to do evil.
The tech industry, which I've spent almost three decades in, tends to over-focus on the opportunities and under-invest in dealing with the dangers. It's a downstream effect of our obsession with, and rewarding of, growth. Thankfully, we have science fiction writers (and academics) to imagine the dark stuff for us all.