In artificial intelligence and philosophy, the AI control problem is the problem of how to build a superintelligent agent that will aid its creators while avoiding inadvertently building one that will harm them. Its study is motivated by the claim that the human race will have to get the control problem right "the first time": a misprogrammed superintelligence might rationally decide to "take over the world" and refuse to permit its programmers to modify it after launch.
Though this may be surmountable in our own world, for practical purposes it is not in the PL setting. The reasons for this are as follows:
- It would be fucking boring