The learning rate of machines is likely to explode once strong AI arrives, while humans will remain, well, humans.
AlphaGo's victory over Lee Sedol is a significant milestone in AI research. Computers are done competing with us at board games. They won. It's time for them to enter the real world.
I believe that:
- Quantum computing is likely to happen before strong AI. We've made concrete, consistent progress toward quantum computing for decades. The NSA is already moving to quantum-resistant algorithms and recently issued an advisory urging the world to start preparing for the impact of quantum computers. China has successfully tested quantum communication and is launching a satellite for further testing.
- Progress in weak AI doesn't mean progress toward strong AI. The two problems can be completely independent. By analogy, faster classical computers don't bring us any closer to quantum computers.
- We don't fully comprehend the dangers of strong AI. There is a lot of excitement around AI advances and simply not enough talk about the potential dangers. If biologists, instead of computer scientists, were working on creating a new hyper-intelligent life form, we'd be having a completely different conversation.
It's great to see engineers and investors excited about (weak) AI. Quantum computing, in comparison, is not getting enough attention from the private sector, and I'd like to see that change.
More importantly, we need to start preparing to co-exist with intelligent machines and invent ways to protect ourselves from malicious AI. Strong AI resembles an alien invasion more than it resembles Amazon's Alexa in your living room: we need to start preparing for that invasion.
Part I of this post was published earlier. These posts are meant to start a conversation. I'd love to hear your thoughts.