GenAI in Testing: Acceleration Without Abdication
In many software teams today, generative AI is becoming part of everyday engineering work.
Within testing, it is being used to turn requirements into draft test outlines, generate realistic data variations, structure API and automation tests, refactor brittle scripts, and make sense of complex logs. What began as curiosity is now influencing how testing work is approached.
Industry research reflects this shift. The Stack Overflow Developer Survey 2025 reports widespread use of AI tools across development workflows (https://survey.stackoverflow.co/2025/).
At the same time, the Capgemini World Quality Report highlights AI-driven testing and intelligent automation as priorities for quality leaders (https://www.capgemini.com/insights/research-library/world-quality-report/).
AI is shaping both daily testing practice and broader quality strategy. The important question is not whether we use these tools. It is whether we understand their limits.
AI as Support, Not Authority
Generative AI can suggest alternative test conditions, propose data sets, draft automation code, explain unfamiliar system behaviour, and assist with log analysis. Used carefully, it reduces friction and expands exploration.
But it does not understand context.
It does not evaluate risk.
It does not decide what matters most.
Testers do.
AI can produce output. Testers determine whether that output addresses genuine system risk. Are edge cases meaningful? Is automation maintainable? Does the proposed coverage reflect realistic user behaviour? Has bias crept into assumptions?
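To make that distinction concrete, consider a small hypothetical example (the function and tests below are illustrative, not drawn from any real project). An AI-drafted test often exercises the obvious path; a tester thinking about risk adds the boundaries and the questions the tool cannot answer.

```python
# Hypothetical example: validating a simple discount calculation.
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price, rounded to 2 decimals."""
    return round(price * (1 - percent / 100), 2)

# An AI-drafted test might cover only the happy path:
assert apply_discount(100.0, 10) == 90.0

# A tester, reasoning about risk, adds boundary cases:
assert apply_discount(100.0, 0) == 100.0    # no discount
assert apply_discount(100.0, 100) == 0.0    # full discount
assert apply_discount(0.0, 50) == 0.0       # free item stays free

# Negative or >100% discounts are business-rule questions the AI
# cannot settle for you: should they raise, clamp, or pass through?
# Deciding that is the tester's judgement, not the tool's.
```

The point is not the code itself but the coverage decision: every added assertion reflects a question about realistic behaviour and risk that only someone who understands the system's context can ask.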
The fundamentals of testing remain unchanged: risk-based thinking, analytical depth, system modelling and critical reasoning. Working alongside AI does not replace these skills. It makes the gaps more visible when they are missing.

Testing AI Systems Is a Different Discipline
The shift is not only about using AI in testing. Many teams are also validating AI-enabled systems.
These systems behave differently from traditional deterministic software. Outputs may be probabilistic. Behaviour can evolve as training data changes. Bias may appear in subtle ways. Explaining outcomes can be more complex than reproducing them.
Testing in this environment requires additional competence. Understanding data quality, model behaviour, bias detection and the implications of non-deterministic results is increasingly part of the tester’s remit.
The certification schemes developed by the ISTQB reflect this evolution. Specialist pathways now address AI and machine learning testing alongside established foundations in structured test design and risk analysis.
Independent examination for these certifications is delivered globally by organisations such as iSQI GmbH. Further information on certification pathways can be found at https://www.isqi.org.
Certification does not replace experience. It does not manufacture judgement. What it offers is structured knowledge, shared language and a clearer route for professional development in an expanding discipline.
A Balanced View of Certification
Certification has long been debated in the testing community. That debate is appropriate.
Testing is contextual. It involves thinking, adapting and responding to nuance. No syllabus can capture every scenario encountered in practice.
However, structure has value. Shared terminology reduces misunderstanding. Defined knowledge areas help organisations benchmark capability. Independent assessment provides transparency in a global profession where skills must be portable.
The aim is not to standardise judgement. It is to support professional growth.

Speed Is Not Insight
AI unquestionably increases speed. Draft artefacts can be produced quickly. Data sets can be generated at scale. Logs can be summarised in seconds.
The danger lies in mistaking output for understanding.
Generating hundreds of test cases is easy. Determining whether they address real risk is harder. Automation can execute rapidly. Interpreting results in context still requires experience.
When strong testing fundamentals are combined with informed AI use, productivity improves without compromising quality. When fundamentals are weak, AI amplifies the weaknesses.
The testing profession has navigated similar shifts before. From manual execution to automation. From sequential delivery to agile development. From monolithic systems to distributed architectures. Each transition required adaptation while preserving core principles.
Generative AI is another transition.
The tools will continue to evolve. The need for informed human judgement will not.
Quality has always depended on professionals who understand risk, context and consequence. That remains true.
The real conversation is not whether AI belongs in testing. It is whether we are developing testers who can use it with discipline and intent.
Author

Debbie Archer
Debbie Archer is Managing Director of iSQI Limited and Vice President for Business Development at iSQI Group. With more than 20 years of experience in learning and development, she has held senior positions, including Director of Global Channel Partners at the BCS, The Chartered Institute for IT. She is actively involved with ISTQB® (International Software Testing Qualifications Board), supporting the development of globally recognised software testing qualifications.
iSQI is participating in the EuroSTAR Conference 2026 as an exhibitor. Join us at the EuroSTAR Conference EXPO in Oslo, 15-18 June 2026.