Are we sleepwalking into an artificial intelligence moral maze?

By David

I am concerned that society is in danger of making the same errors over the role of AI systems as we have made over climate change. The moral issues surrounding the deployment of AI systems are enormous, and science needs to start engaging with society now so that the public can make informed decisions about when and where they want AI systems to be part of their lives. The dilemma of the self-driving car and the decisions its AI makes is just the tip of the iceberg. From weapons systems deciding whether to pull the trigger to surveillance that knows your every move, the moral issues around AI need to be formulated, articulated and debated now.

As a society we are starting to rely on the power of algorithms to manage our lives. Unfortunately, the public has very little understanding of these systems, and there is little quality control over the data the algorithms are trained on. This trend has the power to undermine democracy: groups without any accountability already have, and will continue to have, the ability to exploit society's lack of knowledge and manipulate it to their own ends.
We already know how an early bias in the development of car seat belts, designed and tested around the average male body, has resulted in the deaths of many more women, and that happened without the aid of modern algorithms. The problem with the data that algorithms are trained on is that very few people have the ability to spot biases developing in an algorithm's decisions, let alone to detect where biases have been deliberately introduced to produce results that benefit a knowledgeable few.
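
To make that concrete, here is a minimal sketch of the kind of audit very few people currently run: comparing a system's rate of favourable decisions across demographic groups. The data, the group labels and the single disparity number are all illustrative assumptions, not a description of any real system.

```python
# A minimal sketch of a group-disparity check on a model's decisions.
# The (group, decision) pairs below are hypothetical.

from collections import defaultdict

def approval_rates(decisions):
    """Return the fraction of positive decisions per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += 1 if approved else 0
    return {g: positives[g] / totals[g] for g in totals}

def disparity(rates):
    """Gap between the most- and least-favoured groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs: (group, decision) pairs.
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

rates = approval_rates(decisions)
print(rates)             # {'A': 0.75, 'B': 0.25}
print(disparity(rates))  # 0.5 -- a gap this large warrants investigation
```

A check like this only surfaces a symptom, of course; explaining why the gap exists, and whether it is justified, still requires exactly the scrutiny of the training data argued for above.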

It is vitally important that we identify the areas of society where the implementation of algorithm-based systems needs to be managed. In these areas, AI systems should be certified as fit for purpose. Naturally, the expertise to do this will have to be developed over time, but that should not stop us from identifying such AI systems now and making them, and the data they are trained on, available for scrutiny.
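
As one hedged illustration of what "certified as fit for purpose" could mean in practice, the sketch below gates deployment on a disparity check like the one above. The 0.1 threshold, the group data and the pass/fail rule are purely hypothetical, not a real standard.

```python
# A hypothetical "fit for purpose" gate a certifying body might apply
# before an algorithmic system is deployed. Threshold and data are
# assumptions for illustration only.

MAX_DISPARITY = 0.1  # hypothetical limit on the gap in positive-decision rates

def positive_rate(outcomes):
    """Fraction of positive decisions in a list of booleans."""
    return sum(outcomes) / len(outcomes)

def certify(decisions_by_group):
    """Pass only if no group's rate differs too much from another's."""
    rates = [positive_rate(v) for v in decisions_by_group.values()]
    gap = max(rates) - min(rates)
    if gap > MAX_DISPARITY:
        raise ValueError(f"not fit for purpose: disparity {gap:.2f} > {MAX_DISPARITY}")
    return True

# Hypothetical audit data: decisions grouped by demographic.
certify({"A": [True, True, False], "B": [True, True, False]})  # passes
```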
