Did you ever play that game Lifeboat, where you have one lifeboat and you have to decide who gets to jump into it? You know, priest vs. pregnant mother vs. sailor vs. rich banker vs. yadda, yadda, yadda?
There’s a similar thing called the trolley problem in situational ethics. Well, MIT is working on a database to help self-driving cars play their own version of the who-to-save game. It makes one wonder how we’ll program (teach?) all sorts of machines we will come to rely on, and how those machines in turn will have to program (teach?) ethics to that which they create. On and on… Our ethical codes are handed down to us via our myths and stories, philosophies and laws, traditions and taboos. However, the idealism they aspire toward is often left unexercised in everyday practice. Will AI face that same conundrum? The Economist’s article on the project tells us that, sadly, cats don’t do so well in this process. Not sure about kittens.
Hitchcock made a movie called Lifeboat based on a script by John Steinbeck and featuring the lovely Tallulah Bankhead. Maybe we need a new version of Lifeboat now that we have ruined the planet. A movie featuring the Terminator, C-3PO, The Jetsons’ Rosie, a Dalek and the Lost in Space robot. Maybe they should decide if we humans are worthy of shooting into the stars.