January 19, 2008

Robots lie. What about software ethics?

In a recent article, Discover reports that in an experiment combining genetic algorithms and robots, researchers in Lausanne were able to produce robots with the ability to lie. The robots, faced with a "food or poison" choice, learned to signal poison as food to their "brothers" and then eat the real food themselves while the other robots were poisoned.
While the article is missing technical details and the paper isn't available on Dario Floreano's website, we can easily guess that it all comes down to the fitness function. A fitness function is (citing Wikipedia) a particular type of objective function that quantifies the optimality of a solution.
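
To make that concrete, here's a minimal sketch of a genetic algorithm in Python (a toy bit-string problem of my own, nothing to do with the actual robot controllers): the fitness function is the only place where we say what "good" means, and everything else just chases it.

```python
import random

# Toy objective: a genome is a list of bits, and fitness is simply
# how many of them are 1. Evolution pushes the population toward
# all-ones genomes, because that is what we chose to reward.
def fitness(genome):
    return sum(genome)

def evolve(pop_size=20, genome_len=10, generations=50):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitness function alone decides who survives.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Reproduction: each survivor spawns a mutated child.
        children = [[bit if random.random() > 0.1 else 1 - bit
                     for bit in parent]
                    for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # typically converges to [1, 1, ..., 1]
```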

The way we choose to measure fitness decides how an individual in a genetic algorithm behaves. It might be The Selfish Gene, or not: it's up to the fitness function. In the same experiment the scientists also found heroes, robots embracing sacrifice to save the others.
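
Here's a hedged sketch of that point (the payoff numbers and the "kin-weighted" fitness are my own inventions, not from the paper): the very same signaling behavior evolves toward lying or toward honesty depending only on which fitness function we hand to the selection step.

```python
import random

# A toy signaling model: each robot carries one gene, the probability
# that it signals honestly when it finds food. All payoffs are invented.
def simulate(honesty, rounds=100):
    """Return (own_food, colony_food) earned by a robot with this gene."""
    own, colony = 0.0, 0.0
    for _ in range(rounds):
        if random.random() < honesty:
            # Honest signal: the colony shares the food source.
            own += 1
            colony += 5
        else:
            # Deceptive signal: the liar eats alone, the others are misled.
            own += 3
            colony -= 2
    return own, colony

# Two candidate fitness functions over the same behavior:
def selfish_fitness(honesty):
    own, _ = simulate(honesty)
    return own                     # "The Selfish Gene": my food only

def kin_fitness(honesty):
    own, colony = simulate(honesty)
    return own + 0.5 * colony      # my food plus the colony's welfare

def best_gene(fitness):
    genes = [random.random() for _ in range(200)]
    return max(genes, key=fitness)

print("selfish fitness favors honesty =", round(best_gene(selfish_fitness), 2))
print("kin fitness favors honesty    =", round(best_gene(kin_fitness), 2))
```

Run it and the selfish fitness selects liars (honesty near 0) while the kin-weighted one selects "heroes" (honesty near 1). Swap one line, the fitness function, and the liars become heroes.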

Why is this so interesting? Because we are going to see more and more genetic software in the enterprise, especially in decision support systems. Whatever we choose as a fitness function will be reflected in the output: will these systems fire someone, or hire more women?
Software is going to be less "objective" in the future. Complexity is a factor: nowadays we can't tell "why" a neural network works the way it does; we can see the output, but we can't really be sure of the reasoning behind it.
We'll have to rethink the way we interact with software and understand it. Maybe we should start thinking about ethics, not in the Asimov way but in a new, business-oriented way?