Track Talk, T8

Impending Sense of Loom – The Past and Future of Automation

Ester Greenwood

11:30 - 12:15 CEST, Tuesday 16th June

AI is no longer science fiction. It’s science fact. Many people are talking about AI tools that are capable of testing, but there is far less discussion around testing AI itself. There is also a lot of talk about ethics, but what does that mean? Ethics is just a set of ‘morals’ grouped into a framework.

The problem here is that morals differ between people, communities and countries, so by definition, ethics can vary too. Understanding the different ethical frameworks, such as deontology, utilitarianism and virtue ethics, and how they vary between people helps us to grasp the complexity of our new world.

This new understanding leads to a need to rethink our role as testers. As AI systems move from tools and processes to decision-makers, the traditional boundaries of testing are no longer enough.

We can’t simply validate inputs and outputs now that algorithms are shaping outcomes with ethical, cultural, and psychological implications. Drawing on insights from my book Psychology of AI Decision Making (BCS Publishing), this talk shows why testers must go beyond technical skills and actively train in psychology.

That means understanding not just bias and decision-making shortcuts, but also how sociocentric issues, cultural values, and sociotechnical dynamics influence AI behaviour. Testers must also recognise the risks of anthropomorphism, where people unconsciously attribute human motives to systems that are only running code.

Testing AI is no longer just about functionality and checking inputs and outputs. It’s now about creating the training data that sets up systems in the first place, anticipating unintended consequences, surfacing hidden nudges and accounting for the butterfly effect. There also needs to be a ‘human-in-the-loop’ process that provides trust through constant review and resetting of internal patterns to align with human values.