There are two things that really get me about the whole “killer AI of the near future” scare.
The first is that I think the whole idea of a generalized AI able to defeat us at any task not specifically outlined for it is laughable. Robots have trouble walking - something we mastered when we were 2 years old. Robots require maintenance, power sources, etc. They have trouble determining whether a picture is of text or just random bits of data.
It’s true that there are examples that seem to skirt these rules, but that’s mostly because engineers spend enormous amounts of time and resources running the most elaborate rules we can think of on supercomputers to get around them, and even then we still have large error rates. “This could be Batman, or it could be one of 14 Batman look-alikes, 3 of which are Asian women, 1 of which is a box with a Batman picture on it”.
The second thing that strikes me about AI is that as soon as people say “well, of course we’re not talking about truly sentient computers that actually want to kill us”, then we water down the argument to something that’s obvious and also less dangerous. As long as drones aren’t making decisions on who to attack, and the owners of those drones take full responsibility, then drone warfare is actually a good idea for the simple reason that it gives your side a better chance of winning.
No one creates a drone that can intentionally kill its own creator. A creator who does that is stupid, and they lose the war anyway, because that drone is going to be shot out of the sky, knocked out with anti-robot weapons, run out of fuel, etc. There is nothing to fear from this aside from the standard horrors of war. It’s not going to destroy the human race, no matter what Elon Musk says.
But what if we weaken this one more time, and just say “it gets hacked and controlled by a rogue agent”. That is indeed bad. It’s also not a problem of killer robots, AI, or anything else even remotely similar. And it’s also something that is solved by the motivating factors of the military entities that control these robots. We don’t need international law for that, we just need a healthy dose of paranoia and security when designing these things. Don’t use SSL 1.0 over WEP wireless to control them.
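The “don’t use deprecated crypto” point is already standard practice in modern TLS libraries. As a minimal sketch (not anything specific to drones), here is how Python’s standard `ssl` module lets you build a client context that simply refuses old protocol versions, so a control link can’t be negotiated down to something broken:

```python
import ssl

# Create a context with sane defaults: certificate verification
# and hostname checking are already enabled.
context = ssl.create_default_context()

# Refuse anything older than TLS 1.2; an attacker can't downgrade
# the handshake to SSLv3 or TLS 1.0.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# The defaults mean a hijacked link presenting a forged certificate
# fails the handshake before any commands are exchanged.
print(context.verify_mode == ssl.CERT_REQUIRED)   # True
print(context.check_hostname)                     # True
```

The point is that refusing weak protocols is a one-line configuration decision, not a hard research problem - exactly the kind of ordinary security paranoia the paragraph above is asking for.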