The pace at which “artificial intelligence” (AI) is being incorporated into software testing products and services creates immense ethical and technological challenges for an IT industry so far out in front of regulation that the two don’t even seem to be playing the same sport.
It’s difficult to keep up with the shifting sands of AI in testing right now as vendors search for a viable product to sell. Most testing clients I speak to these days haven’t begun incorporating an AI element into their test approach, and frankly, the distorted signal coming from the testing business hasn’t helped. What I’m hearing from clients are big concerns around data privacy and security, transparency about models and good evidence, and the ethical issues of using AI in testing.
I’ve spent a good part of my public career in testing talking about risk, how to communicate it to leadership, and what good testing contributes to that process by helping identify threats to your business. So I’m not here to tell you “No” to AI in testing, but to talk about how KPMG is trying to manage through the current mania and what we think are the big rocks we need to move to get there with care and at pace.
KPMG Trusted AI Framework
As AI continues to transform the world in which we live, impacting many aspects of everyday life, business and society, KPMG has taken the position of helping organizations utilise the transformative power of AI, including its ethical and responsible use.
We’ve recognized that adopting AI can introduce complexity and risks that should be addressed clearly and responsibly. We are also committed to upholding ethical standards for AI solutions that align with our values and professional standards, and that foster the trust of people, communities, and regulators.

To achieve this, we’ve developed the KPMG Trusted AI framework as our strategic approach to designing, building, deploying and using AI strategies and solutions in a responsible and ethical manner, so we can accelerate value with confidence.
Our approach to Trusted AI also includes foundational principles that guide our aspirations in this space and demonstrate our commitment to using it responsibly and ethically:
Values-driven
We implement AI as guided by our Values. They are our differentiator and shape a culture that is open, inclusive and operates to the highest ethical standards. Our Values inform our day-to-day behaviours and help us navigate emerging opportunities and challenges.
Human-centric
We prioritize human impact as we deploy AI and recognize the needs of our clients and our people. We are embracing this technology to empower and augment human capabilities — to unleash creativity and improve productivity in a way that allows people to reimagine how they spend their days.
Trustworthy
We will adhere to our principles and the ethical pillars that guide how and why we use AI across its lifecycle. We will strive to ensure our data acquisition, governance and usage practices uphold ethical standards and comply with applicable privacy and data protection regulations, as well as any confidentiality requirements.
KPMG GenAI Testing Framework
The KPMG UK Quality Engineering and Testing practice has adopted the Trusted AI principles as the underpinning model for our work in AI and testing. We are focusing our initial GenAI Testing Framework on specific activities that extend the reach of testers while keeping risk management insight-led and governance human-centric. This is accomplished by incorporating our principles into the architecture, including the following components (sketched below):
Tester Centric Design
The web-hosted front end is where testers can securely upload documents, manage prompts, and access AI-generated test assets to use or modify. Testers can create and modify rules, allowing consistent application and greater control of models and responses.
Transparent Orchestration
The orchestration layer sits at the heart of the system and manages the flow of data between the different components, ensuring seamless execution while providing transparency about the models being deployed.
Secure Services
The Knowledgebase contains the fundamental services powering the AI solution, storing input documents, test assets, and reporting data, as well as the domain- and context-specific information you design.
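To make these three components a little more concrete, here is a minimal sketch in Python of how a tester’s upload might flow through an orchestration layer and into a knowledgebase, with the model name recorded for transparency. Every name here (UploadRequest, Knowledgebase, orchestrate, the stubbed generator and “example-model-v1”) is a hypothetical illustration of the pattern, not KPMG’s actual implementation.

```python
# Illustrative sketch of the three components described above: a front-end
# request, an orchestration layer that records which model handled it, and a
# knowledgebase that stores inputs and generated test assets.
# All names and the stubbed model call are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable


@dataclass
class UploadRequest:
    """What the tester submits through the front end: a document plus rules."""
    tester_id: str
    document_text: str
    rules: list[str]  # tester-defined rules applied to every prompt


@dataclass
class Knowledgebase:
    """Secure services layer: stores inputs, generated assets and audit data."""
    documents: list[str] = field(default_factory=list)
    test_assets: list[str] = field(default_factory=list)
    audit_log: list[dict] = field(default_factory=list)


def orchestrate(request: UploadRequest,
                model_name: str,
                generate: Callable[[str], str],
                kb: Knowledgebase) -> str:
    """Orchestration layer: builds the prompt from the tester's rules, calls
    the injected model, and records which model produced the output."""
    kb.documents.append(request.document_text)

    prompt = "\n".join(request.rules) + "\n\n" + request.document_text
    test_asset = generate(prompt)  # model call is injected, not hard-coded

    kb.test_assets.append(test_asset)
    kb.audit_log.append({
        "tester": request.tester_id,
        "model": model_name,  # visible to governance reviewers
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return test_asset


if __name__ == "__main__":
    # Stub generator standing in for a real GenAI call.
    stub = lambda prompt: "Test case: verify login rejects an expired password."

    kb = Knowledgebase()
    asset = orchestrate(
        UploadRequest(
            tester_id="t-001",
            document_text="Requirement: users must re-authenticate after 30 days.",
            rules=["Write one test case per requirement.",
                   "Flag any ambiguous requirement for human review."],
        ),
        model_name="example-model-v1",
        generate=stub,
        kb=kb,
    )
    print(asset)
    print(kb.audit_log[0]["model"])
```

The design point worth noting is that the model call is injected and its name is logged alongside every generated asset, which is what lets governance trace which model produced which test output.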
Software testing is essentially a function of risk management, and integrating AI into your test approach presents multiple challenges for your test team as well as implications for programme governance. Model accuracy, intellectual property rights and IP leaks, and data quality issues such as inaccuracy, drift, or loss are all real internal risks to your operations and to ensuring you are testing the right things at the right time. Externally, your governance can run into copyright infringement or privacy violations, both of which have implications for your brand, let alone the potential harm done to vulnerable communities through model bias, which makes using an ethical framework for designing and implementing AI in testing even more important.
There remains a great deal to be worked out regarding AI in software testing, and we are just at the discovery phase of what it can, and should, do for system quality. Whatever the future holds, your strategy has to be grounded in principles and values that reflect an ethical approach: putting the tester at the centre of the process, being transparent about models and data, and making safety and security your primary objectives.

Keith Klain
Keith Klain is a Director of Quality Engineering and Testing at KPMG UK and a frequent writer and speaker on the software testing industry.
He leads software quality, automation, process improvement, risk management, and digital transformation initiatives for retail banking and capital markets clients. With extensive international experience in software quality management and testing, he has built and managed teams for global financial services and consulting firms in the US, UK, and Asia Pacific.
He is passionate about increasing the value of technology by aligning test strategies to business objectives through process efficiency, effective reporting, and better software testing. He is also an advocate for workforce development and social impact, having designed and delivered technology training curriculum for non-profits to create technology delivery centres in disadvantaged communities. He has served as the Executive Vice President of the Association for Software Testing and has received multiple awards and recognition for his contributions to the software testing profession and diversity in tech.
KPMG are exhibitors at this year’s EuroSTAR Conference EXPO. Join us in Edinburgh, 3-6 June 2025.