Disclaimer: this article is 100% human effort, no LLMs were leveraged while writing it.
Since the end of 2022, people have been using LLMs, first for fun, and then for work-related activities. 2023 was the year of doubts, with many calling this period the “AI-hype Era”; plenty of people were afraid to even try these LLMs. Then in 2024 we started hearing about successful AI adoption programs and saw AI-native solutions providing valuable assistance, primarily in coding tasks.
In 2025, the scenery changed again. We started hearing more about the pitfalls of GenAI transformation projects, the emerging risks and challenges, and how one could potentially bridge these gaps and avoid hurting their business. Most people were still cautious, but they were also curious.
Moving on from simply using chatbots, the natural next step was leveraging a code assistant inside an IDE. This is a great way to boost your output in test automation, but without the right context (proper test data and agentic knowledge of your enterprise systems), the code produced can turn out generic rather than tailored to your needs.
The first answer to that problem was RAG (Retrieval-Augmented Generation), followed more recently by MCP (Model Context Protocol). The former lets you bring in additional data, such as custom embeddings and datasets, effectively expanding what your LLM can access. The latter lets LLMs and agentic systems communicate with external systems such as project or test management tools.
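To make the RAG idea concrete, here is a minimal, illustrative sketch. Every function name here is hypothetical, and the word-overlap scorer is a toy stand-in: a real system would use an embedding model and a vector store for retrieval.

```python
# Toy retrieval-augmented generation (RAG) flow: retrieve the most
# relevant snippet from a small knowledge base, then build a prompt
# that grounds the LLM in that context. A production system would use
# an embedding model and a vector database instead of word overlap.

def score(query: str, doc: str) -> float:
    """Crude relevance score: fraction of query words found in the doc."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / len(q_words)

def retrieve(query: str, knowledge_base: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k most relevant documents for the query."""
    ranked = sorted(knowledge_base, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Combine retrieved context with the user's question."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Use only this context:\n{ctx}\n\nQuestion: {query}"

if __name__ == "__main__":
    kb = [
        "Checkout service test data lives in the staging_orders table.",
        "The payments API rate limit is 100 requests per minute.",
    ]
    question = "Where does checkout test data live?"
    print(build_prompt(question, retrieve(question, kb)))
```

The point is the shape of the flow, not the scoring: retrieval narrows the model's context to your own data before the prompt is ever sent.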
Although the tooling is important, the human aspect needs to be considered too, and in
fact, is the biggest factor in successful AI transformation programs and adoption.
Now in 2026 we have an even clearer picture of implementing AI-native business and
operational solutions across almost every industry. The first thing to highlight is that the number one discipline where we see success in AI adoption is:
Testing!
This may come as a surprise to you, my fellow Testers, as Testing often seems to be an afterthought while Dev and DevOps prioritize coding and delivering value fast. So how come Testers are the forerunners now?
The recipe is simple: Testers have a critical, methodical, and investigative mindset. They
provide unfiltered feedback and really care about the products they are working on. Testers also have a deep understanding of software and are comfortable with new, possibly unfamiliar technologies, both as users and as technical professionals.
Let’s break down how that aligns with GenAI:
- Prompt Engineering became a globally recognized role in 2023, and it is now a mandatory skill for teams working in an Agentic SDLC
- For prompts, you need to be descriptive and have a thorough understanding of what you need the LLM to accomplish or provide
- Testers already have the analytical mindset due to requirements analysis
- Testers already understand what the end-users and the business are looking for
- Testers already work closely together with developers
- Test automation code needs to be integrated in CI/CD pipelines, and quality gates need to be defined at different stages of the delivery
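As an illustration of that last point, a quality gate can be as simple as a script the pipeline runs after a test stage, failing the build when results drop below agreed thresholds. The thresholds and the result format below are invented for the sketch; real gates would read your test runner's actual report.

```python
# Illustrative CI/CD quality gate: fail the pipeline stage when the
# test pass rate or code coverage falls below agreed thresholds.
# The thresholds and the results-dictionary shape are assumptions
# for this example, not any specific tool's format.

def quality_gate(results: dict, min_pass_rate: float = 0.95,
                 min_coverage: float = 0.80) -> tuple[bool, list[str]]:
    """Return (passed, reasons) for one stage's test results."""
    reasons = []
    pass_rate = results["passed"] / max(results["total"], 1)
    if pass_rate < min_pass_rate:
        reasons.append(f"pass rate {pass_rate:.0%} below {min_pass_rate:.0%}")
    if results["coverage"] < min_coverage:
        reasons.append(f"coverage {results['coverage']:.0%} below {min_coverage:.0%}")
    return (not reasons, reasons)

if __name__ == "__main__":
    ok, why = quality_gate({"passed": 180, "total": 200, "coverage": 0.85})
    print("gate passed" if ok else "gate failed: " + "; ".join(why))
```

Different thresholds at different stages (stricter before production, looser on feature branches) is one common way to phase the gates across the delivery.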
EPAM realized that Testers are the Swiss Army Knives of the Software world. Testers make the perfect Prompt Engineers, as they possess all the required prerequisites to pick up the necessary new skillsets fast to excel in this AI world. And then, they can be the perfect catalysts and support system for pursuing broader AI adoption across an organization.

Example Agentic SDLC phases, AI assistants and benefits of using AI
Agentic SDLC is all about bringing AI-assistance to every stage of development, QA and operations, be it requirements analysis, user story creation, developer’s review of user stories, test case definition, code change impact analysis, test orchestration, or vibe
coding of product code and test automation code.
For each of those tasks, a pipeline of AI agents can provide task-level productivity gains. The more use cases you identify to augment with AI, the more overall team productivity gains can be realized. For that, you need to look at the applicable disciplines holistically and ensure that each team member engages with the implemented AI solution while developing mastery. (Note: a number of AI orchestration and collaboration platforms, like EPAM’s EliteA, have built-in tools to help managers track adoption and skill growth.) That’s when adoption can accelerate, and your teams can together ensure an impactful return on investment (ROI) on GenAI adoption programs.
That’s when QA people come into the picture again: we like to set up QA metrics to spot trends and course-correct when the ship is heading in the wrong direction. AI solutions are software solutions as well. Their usage needs to be carefully observed, and course-corrected at times. Testers know how to do that and can help teams and organizations avoid waste through proactive, predictive, and preemptive monitoring.

Example agentic ecosystem leveraging MCP and ELITEA’s system connectors
To provide better insight, let us share numbers from one of our clients, an insurance payment platform provider. We measured up to 90% task-level productivity gains on performance test results analysis and on requirements analysis. Test case generation and orchestration provided 75% gains, while user story and user guide creation provided 67%. Agent-assisted vibe coding enabled developers and test automation engineers to spend around 40-45% less time on coding.
These numbers may look high, but don’t forget that these were task-level gains. The team-level gains were between 27.8% and 31.8%, as not all the tasks of business analysts, developers, and testers were AI-assisted. As highlighted above, the more use cases you augment with AI, and the more disciplines adopting those solutions, the higher the overall productivity gains.
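To see how task-level gains dilute into team-level gains, you can weight each task's gain by the share of total team time that task consumes; unassisted work contributes zero. The time shares below are hypothetical for illustration, not the client's actual split:

```python
# Rolling task-level productivity gains up to a team-level gain:
# weight each task's gain by the fraction of total team time it takes.
# Tasks without AI assistance contribute a gain of zero.
# The time shares below are invented for this example.

def team_level_gain(tasks: list[tuple[float, float]]) -> float:
    """tasks: list of (time_share, task_level_gain); shares sum to 1.0."""
    return sum(share * gain for share, gain in tasks)

if __name__ == "__main__":
    tasks = [
        (0.05, 0.90),  # performance test results analysis: 90% gain
        (0.10, 0.75),  # test case generation & orchestration: 75% gain
        (0.10, 0.67),  # user story / user guide creation: 67% gain
        (0.25, 0.42),  # agent-assisted coding: ~42% gain
        (0.50, 0.00),  # everything else: not AI-assisted
    ]
    print(f"team-level gain: {team_level_gain(tasks):.1%}")
    # With these assumed shares the result lands near 29%, i.e. inside
    # the 27.8%-31.8% team-level range reported above.
```

The same Amdahl-style logic explains the article's gap: a 90% gain on a task that is only a few percent of the team's time moves the overall number very little.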
Overall, this new era shines an incredibly positive light on our beloved discipline and offers exciting opportunities. But it’s important that you, as a Tester, start adapting to and working with this new style of delivery, or you risk being left behind. If you are unsure where or how to start, reach out to us; we are always happy to help.
Visit EPAM at booth 15 at the EuroSTAR conference. Come on over and say hello, and let’s seize these new AI opportunities together!
https://www.epam.com/services/engineering/quality-engineering
Author

Péter Földházi Quality Architect, AI & Game QA Consulting, North America
Péter was first involved with QA as a beta tester of DOTA in 2006. Since joining EPAM in 2012, he moved towards test automation and is currently working in the USA as a Quality Architect.
He is leading Game Testing Consulting and GenAI adoptions in the Americas.
Péter has authored two ISTQB syllabi: Test Automation Engineering & Test Automation Strategy. He also invented two test automation methodologies: the Flow Model Pattern and the Tri-Layer Testing Architecture, the latter published as a white paper by the PNSQC. Péter has been one of the review board members of the HUSTEF since 2015.
Péter is a regular keynote and tutorial speaker at conferences such as STARWEST, STAREAST, and SauceCon. He has been a guest lecturer at three Budapest-based universities: Óbuda, Pázmány, and ELTE. Brewing beer and planting chilis are among his hobbies.
Editor: Ted Weil – Marketing Manager, TestIO & EPAM Testing Practice
EPAM is an Exhibitor at EuroSTAR 2026. Join us at the EuroSTAR Conference in Oslo, 15-18 June 2026.