Sunday, December 02, 2007

Robots, AI, Ethics and Military Technology


Now, we rarely post articles (or links to them) from the US Armed Forces Journal on this blog, but I was sent the link to an interesting one, entitled 'Fast Forward to the Robot Dilemma'. It discusses the ethical issues involved in placing AI (Artificial Intelligence) systems in control of military technology...

He (Major David Bigelow) concludes: 'It is unethical to create a fully autonomous military robot endowed with the ability to make independent decisions unless it is designed to screen its decisions through a sound moral framework.'

Which sound moral framework, I wonder, would that be? I thought RPE blog readers might find his piece of interest...
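
For the technically curious, here's a minimal sketch (in Python; the rules, names and numbers are entirely my own invention, not Bigelow's) of what 'screening decisions through a sound moral framework' could look like as an architecture: a gate that vets every proposed action against a set of explicit rules before the robot is allowed to act.

```python
# A toy "ethical governor" sketch: every proposed action must pass
# every rule in the moral framework before it is executed.
# All rules and data structures here are illustrative, not from the article.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    description: str
    target_is_combatant: bool
    expected_civilian_harm: float  # 0.0 (none) to 1.0 (certain)

# A "rule" is just a predicate: True means the action is permitted.
Rule = Callable[[Action], bool]

def no_civilian_harm(action: Action) -> bool:
    return action.expected_civilian_harm == 0.0

def combatants_only(action: Action) -> bool:
    return action.target_is_combatant

class EthicalGovernor:
    def __init__(self, rules: List[Rule]):
        self.rules = rules

    def screen(self, action: Action) -> bool:
        """Permit the action only if every rule in the framework allows it."""
        return all(rule(action) for rule in self.rules)

governor = EthicalGovernor([no_civilian_harm, combatants_only])
strike = Action("engage vehicle", target_is_combatant=True,
                expected_civilian_harm=0.2)
print(governor.screen(strike))  # False: fails the no_civilian_harm rule
```

The hard part, of course, is the question above: who writes the rules, and could any finite rule set ever count as 'sound'?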
Dave

7 comments:

  1. Anonymous, 9:06 pm

    Can there ever be a sound moral framework? Surely every theory has too many flaws to trust a robot to make decisions? But aren't humans just as untrustworthy as a robot anyway? I have absolutely no idea - just random thoughts!

  2. Hmmm... could the same be said about people having children? Is it unethical to have children (fully autonomous) unless they are endowed with the ability to make independent decisions and able to screen those decisions through a sound moral framework?

    Joining in with the random thoughts here...

  3. Anonymous, 10:10 am

    This makes me think of Isaac Asimov's laws of robotics, but then if the robot was being used for military purposes it would be indirectly harming humans, which would go against the First Law (see the sketch after this thread). Then again, a 'sound moral framework' surely means something very different to members of the armed forces than it does to most other people, because of what it potentially requires you to be able to do.

  4. Anonymous, 2:15 am

    Jason, I read an article today about a blind couple having kids (7!) and they seem to cope alright. Wouldn't it be wrong to deny someone the chance to have a child because of a disability? Maybe if they had help?
    I'm sticking with my theory that there is no sound moral framework, although some may seem to work at first.
    A little off topic, and why I am doing this at 2am I don't know!
    More random stuff...

  5. The randomness is good, Charlie!

    I was only raising the point; I don't think people should be denied the chance to have children if they are disabled. In fact, I was thinking of parents unable for mental reasons rather than physical ones when I posted, but I can see why it could read either way.

  6. Would moral robot error be the same as moral human error? I think A.I. would learn a lot more from its mistakes than we do. Perhaps we would discover the hidden intentions of our own moral framework. There seems to be a lot of 'The Terminator' fear around, about humans getting bumped off their worldly throne, but there are two factors to consider. First, technology has been killing people for years and is getting quite advanced (UAVs, cruise missiles): is the hand that wields a sword more morally to blame than the hand that types the software for 'killer A.I.'? Second, wouldn't it be more of a steady merger with A.I. and robotics? Humans already spend a huge chunk of time in cyberspace.
    (http://news.bbc.co.uk/1/hi/technology/4968314.stm)
    I welcome the day when robots are killed instead of humans, or when my washing gets done again by a 'mum' robot.

  7. That looks suspiciously like a T1. I think AI should never be allowed to make decisions regarding human life. If the singularity happens, then we can kiss our asses goodbye.

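
For those following the Asimov tangent in comment 3, here is a toy sketch (Python again; the encoding is purely illustrative, since Asimov's Laws are fiction rather than a specification) of why a military tasking fails at the First Law before the Second Law (obey human orders) is even consulted:

```python
# Toy encoding of Asimov's first two Laws as an ordered check.
# Purely illustrative: Asimov's Laws are fiction, not a real spec.

def first_law_permits(action: dict) -> bool:
    # "A robot may not injure a human being or, through inaction,
    #  allow a human being to come to harm."
    return not action["harms_human"] and not action["allows_harm_by_inaction"]

def screen_order(action: dict) -> str:
    # The Second Law (obey human orders) only applies where the
    # order does not conflict with the First Law.
    if not first_law_permits(action):
        return "refused: violates First Law"
    if action["ordered_by_human"]:
        return "obeyed: permitted by First Law, required by Second Law"
    return "permitted"

# A military use of the robot harms humans at least indirectly,
# so it fails at the very first check:
strike_order = {
    "harms_human": True,
    "allows_harm_by_inaction": False,
    "ordered_by_human": True,
}
print(screen_order(strike_order))  # refused: violates First Law
```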