This week, I wrote a piece on 3 Quarks Daily titled "Between Golem and God: The Future of AI". It deals with the relationship between flexibility and autonomy in AI, and whether this could pose an existential hazard for humanity.
I have tried to develop some detailed ideas in the article, but here's the gist of my argument:
Part I:
- True general intelligence requires a system to be fully autonomous, embodied (physically or virtually), and capable of self-motivated learning through autonomous exploration.
- An autonomous system with its own motivations will develop its own values.
- An embodied system with its own values and motivations will not always do what we want. It will not be a servant.
- We cannot have it both ways by asking for systems that are fully intelligent and fully under our control.
Part II:
- We will gradually become dependent on more and more intelligent systems.
- This will inevitably push us to build even more intelligent systems with greater autonomy.
- The more autonomy intelligent systems have, the harder it is for us to know their real capacities and their potential for growth.
- Thus, it will be impossible for us to determine the boundary where we must stop adding autonomy.
- Even if we could identify that boundary, it would be impossible to stop there, because someone somewhere will not. Many already think that there is no reason to stop.
- The growth to full autonomy is thus inevitable – unless it really turns out to be impossible (as suggested by Fjelland).
- Fully autonomous machines will, in principle, be capable of rapid growth in unpredictable directions.
- Thus, super-intelligent machines are inevitable – unless they are impossible.
- We have no real idea whether they are impossible, and no way to know.
Part III:
- The argument that a super-intelligent machine will be too wise to enslave humans is just hopeful speculation.
- The idea (e.g., from Hawkins) that we can make super-intelligent machines benign by controlling what motivations we put into them is absurd. They will develop their own motivations and values.
- We have no idea what values and purposes fully autonomous, super-intelligent machines will have. We may not even have a conceptual framework for them.
Part IV:
- As machines become more intelligent and autonomous, they will be more and more out of our control.
- As machines become more intelligent and autonomous, we will be more and more dependent on them.
- By the time they are intelligent and autonomous enough to think of controlling us, we will already be virtually under their control.
- Whether an active Skynet/Ultron-style takeover by machines will occur in the distant future is completely unknowable.
- Whether it's already underway is also completely unknowable.