AI and our scary killer robot dystopian future

Hi, just saw this:

Apart from the rather alarming bit that states “Those in favour of killer robots …” (err, what? There are people in favour of rogue, remorseless killing machines?), what do we reckon?

Will mankind Terminate itself with its own creation?

3 Likes

I think we are some way away from killer robots in the sense shown in the picture above: human-like robots are difficult to achieve as a technology, and AI is still having difficulties with apparently basic tasks like walking. That said, visual recognition seemed an almost impossible problem just a few years ago.

I do see an application in removing the human from the equation where the weapon today is a machine such as a plane or a tank. You can see the appeal for the military: you get to impose death and fear without risking your own personnel. This already exists to an extent, in that surveillance and weaponised drones don’t carry a pilot but are controlled remotely. These already have the potential benefit that, as well as protecting the pilot from being shot down, they can perform manoeuvres which a person would be unlikely to survive because the g-forces would be fatal.

Now let’s assume we can achieve this performance using AI. What does this mean?

  • When you lose a plane you only have to explain it to the accountants, not the pilot’s family. Because you aren’t risking your own country’s boys and girls, the moral threshold for going to war may be lowered in the public perception.
  • Good pilots cost a fortune and take a long time to train; once the AI has been developed it can be put into the missile, plane, etc., and you have a ready attack force.
  • The AI can withstand much higher forces, so attack weapons can complete more extreme manoeuvres.

Are these benefits? That’s for each of us to decide.

I would argue that you can’t achieve peace through fear, only through love, and the only way that is going to happen is by making the barriers to war higher, so that we instead turn to talking to each other, trying to understand our own needs and desires and how we can achieve those while recognising the needs and desires of others.

I was watching a DVD called Happyism by Adam Hills last night, in which he talks of meeting the Dalai Lama and attributes this quote to him:

Everybody talks about peace when they should instead be discussing non-violence.
You can fight for peace, but you can’t fight for non-violence.

Adam Hills is an Australian comedian who now lives in the UK. If you don’t know his work, you should check it out, especially ‘The Last Leg’, a programme he hosts on Channel 4.

But yes, I am concerned about anything that makes war less of a scary concept, even if the prospect of robots taking over humanity is as yet a very long way away.

1 Like

Interesting question.

I don’t think the risk is of machines becoming sentient and deciding to get rid of those pesky humans, but rather that a hostile actor (such as Kevin James… ba dum tish!) will hack the security systems surrounding a weapon and use it in an attack. This is particularly pertinent given the apparent growth in cybercrime and hacking.

I’m sure they would come with two years of security updates. :smile:

2 Likes

LOL. If they are powered by Android…maybe. :slight_smile:

Maybe there will be a community effort for support! :wink:

This raises an important question: if we are to develop killer robots, should the code be open source?

  • If it is open source, then more people will get their eyes on the code, so it is likely to be written in a more security-conscious way. On the other hand, a hostile hacker will have seen the code, so they know exactly what they are attacking and can test attacks on the system’s security offline before attracting the attention of the security services.

  • If it’s closed source, then nothing stops Evilcorp, or whoever else creates the software, from putting a backdoor in the software which they can activate at any time they choose.

I’m against the development of AI-assisted killing machines. Not because I think there is any immediate prospect of machines becoming sentient and deciding they would be better off without us (Terminator style).

My worry is that such technology makes it more likely that a state or other rogue group will attack.

2 Likes

I think we are just about there. Maybe not walking-around robots, but drones already run autonomously. Just throw an AI on top, let them make some decisions, and whammo! Winner, winner, chicken dinner!

I think the bigger worry right now is not them killing us, but us letting them make some sort of tragic mistake: killing a bunch of innocents or something of the like, escalating whatever situation they are a part of, or bringing another party into the fray due to our mistake.

I really don’t think we can stop someone from retaliating when our answer to things would be, “sorry, it was just a glitch, won’t happen again.”

Cheers,
Tim

Interesting article on the NSA’s role in targeting people for drone strikes: https://theintercept.com/2014/02/10/the-nsas-secret-role/

FYI: we will be discussing this on Bad Voltage 2x16 which is released on Thursday.

Thanks, @paulgault for the suggestion!

1 Like

@parzzix raised a really good point that I would be very interested in hearing discussed:

I’ve shared the concern, especially since Tay (like, really, robots gonna do what robots gonna do no matter what we think… prevention is key). Here’s a quick “backgrounder” on the ways those mistakes can happen, for your consideration.

@paulgault had a kickass idea for a show and I’m super excited now! :smiley:

There are two things that really get me about the whole “killer AI of the near future” scare.

The first is that I think the whole idea of generalized AI able to defeat us in any task not specifically outlined for it is laughable. Robots have trouble walking, something we mastered by the time we were two years old. Robots require maintenance, power sources, etc. They have trouble determining whether a picture is of text or just random bits of data.

It’s true that there are examples that seem to skirt these rules, but that’s mostly because engineers spend enormous amounts of time and resources running the most elaborate rules we can think of on supercomputers to get around them, and even then we still have large error rates: “This could be Batman, or it could be one of 14 Batman look-alikes, 3 of whom are Asian women, 1 of which is a box with a Batman picture on it”.

The second thing that strikes me about AI is that as soon as people say “well, of course we’re not talking about truly sentient computers that actually want to kill us”, we water down the argument to something that’s obvious and also less dangerous. As long as drones aren’t making decisions about whom to attack, and the owners of those drones take full responsibility, drone warfare is actually a good idea for the simple reason that it gives your side a better chance of winning.

No one creates a drone that can intentionally kill its own creator. If a creator does that, they’re stupid, and they lose the war anyway, because that drone is going to be shot out of the sky, knocked out with anti-robot weapons, run out of fuel, etc. There is nothing to fear from this aside from the standard horrors of war. It’s not going to destroy the human race, no matter what Elon Musk says.

But what if we weaken this one more time and just say “it gets hacked and controlled by a rogue agent”? That is indeed bad. It’s also not a problem of killer robots, AI, or anything else even remotely similar. And it’s also something that is solved by the motivating factors of the military entities that control these robots. We don’t need international law for that; we just need a healthy dose of paranoia and security when designing these things. Don’t use SSL 1.0 over WEP wireless to control these things.
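To make that last point concrete, here’s a minimal sketch (in Python) of what refusing legacy protocols on a control link might look like. The hostname, port, CA file and “STATUS” command are all made up for illustration; this is an assumption-laden sketch, not anyone’s actual design.

```python
import socket
import ssl

# Illustrative only: a client-side TLS context for a hypothetical
# drone control link that verifies the server certificate and
# refuses anything older than TLS 1.2 (so no SSL 3.0 or TLS 1.0).
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.load_verify_locations("ground-station-ca.pem")  # hypothetical CA file

with socket.create_connection(("drone.example", 8443)) as sock:
    with context.wrap_socket(sock, server_hostname="drone.example") as link:
        link.sendall(b"STATUS\n")  # hypothetical command protocol
        print(link.recv(1024))
```

The point isn’t the specific library; it’s that the baseline (authenticated, modern crypto on every control channel) is cheap compared to what a hijacked weapon costs.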

2 Likes

I think one problem is how to enforce this. I really do not want to see self-aiming machine guns. Also, why would these have to fly?

I mean, something as simple as a self-driving Toyota Hilux with a self-aiming machine gun on it could be a major change for warlords in Africa. The thing is, would this tracking AI have legitimate uses in, say, security cameras? This, I think, is quite scary.

1 Like