Artificial Morality Questions

Who decides what is ideal?  What is rational?  Aristotle's conception of rationality for mankind may be "threatened when humans begin to behave irrationally when their interests are threatened and they begin to have to deal with beings/entities they perceive as different from themselves" (Anderson, 2007).

Artificial morality is impossible because humans will never be completely ethical.  Anderson (2007) clarifies why Isaac Asimov's "three laws of robotics" — first stated in his 1942 story "Runaround" and restated in his 1976 novelette "The Bicentennial Man" — are an unsatisfactory approach to the new ethical challenges facing humans and machines.  Asimov offered the following rules for artificial intelligence: "A robot may not injure a human being, or, through inaction, allow a human being to come to harm.  A robot must obey the orders given to it by human beings except where such orders would conflict with the first law.  A robot must protect its own existence as long as such protection does not conflict with the first or second law" (p. 478).  Anderson (2007) explains why it would be problematic for humans to program machines that adhere to Asimov's rules.  In addition, it would be a hard sell to convince humans to let machines advise them ethically (Anderson, 2007, pp. 477–478).
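The hierarchy in Asimov's laws can be read as a lexicographic preference: any First Law violation outweighs every lower-law consideration, and so on down. The toy Python sketch below (all names and attributes are invented for illustration, not drawn from Anderson's paper) shows that it is the strict ordering, as much as the individual rules, that does the ethical work — which is part of why hard-coding such a hierarchy is contentious.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action, flagged by which law it would violate."""
    name: str
    harms_human: bool = False      # would violate the First Law
    disobeys_order: bool = False   # would violate the Second Law
    endangers_self: bool = False   # would violate the Third Law

def choose(actions):
    # Lexicographic tuple comparison encodes the strict hierarchy:
    # avoiding a First Law violation outranks everything below it.
    return min(actions, key=lambda a: (a.harms_human,
                                       a.disobeys_order,
                                       a.endangers_self))

# A robot ordered into danger must comply: the Second Law outranks the Third.
options = [
    Action("refuse the order", disobeys_order=True),
    Action("comply, risking itself", endangers_self=True),
]
# choose(options) selects "comply, risking itself"
```

Of course, the hard part that this sketch hides — and that Anderson emphasizes — is deciding what counts as "harm" in the first place; the Boolean flags assume away exactly the judgments that make machine ethics difficult.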

Who decides how the supreme prophet bot interprets past experiences?  To address these issues, we must first agree on a universal ethical theory and apply it consistently.  As the rapid progression of new technologies continues, societies face huge concerns and challenges with regard to artificial morality.  Will future humans happily abide robotic intelligence, or will the threat of an intelligence higher than our own force mankind to destroy it?  Will robots be created in our own image: imperfect, dangerous, and unpredictable?

Anderson, S. L. (2007). Asimov’s “three laws of robotics” and machine metaethics. AI & Society, 22(4), 477–493. doi:10.1007/s00146-007-0094-5

Coeckelbergh, M. (2011). Can we trust robots? Ethics and Information Technology, 14(1), 53–60. doi:10.1007/s10676-011-9279-1

Dodig Crnkovic, G., & Çürüklü, B. (2011). Robots: ethical by design. Ethics and Information Technology, 14(1), 61–71. doi:10.1007/s10676-011-9278-2



About instructionaltechnologist101

Instructional Technologist 1:1, avid change agent, Mac enthusiast, implemented a K–12 1:1 program, managed an offsite curriculum center in a community museum, learner, PhD student in Educational Technology at University of North Texas. The future is now!

Posted on November 21, 2012, in Uncategorized.
