
How Gen AI is Empowering Testers to Work Smarter, Not Harder

May 5, 2025 by Aishling Warde

With software becoming more complex and release cycles getting shorter, traditional testing methods are struggling to keep up. That’s where Generative AI (Gen AI) comes in. Instead of spending hours writing test cases or fixing broken scripts, teams can now use AI-powered tools to create tests, adapt to changes, and catch issues earlier—all with less manual effort.

Gartner predicts more than 80% of enterprises will have used generative AI APIs or deployed generative AI-enabled applications by 2026.

But this isn’t about replacing testers. It’s about making their lives easier, helping them focus on what matters: delivering better software, faster. Let’s look at how AI-driven testing is changing the game and what it means for QA teams today.

The Problems with Traditional QA

QA testing has long been a challenge for software development teams. While manual testing provides detailed human insight, it is slow, labor-intensive, and prone to human error. Although automated testing helps speed things up, it comes with its own set of issues, particularly in terms of script maintenance and adaptability.

  • Time-consuming test creation and execution – Writing test cases from scratch is slow and requires significant effort. Running tests across different environments and devices adds further delays.
  • Frequent script failures – Automated test scripts often break when applications undergo minor UI or functionality changes, leading to high maintenance efforts.
  • Lack of scalability – As applications grow in complexity, maintaining comprehensive test coverage becomes difficult. Manual testing struggles to scale, and automated testing requires extensive upkeep.
  • Reactive bug detection – Traditional testing often identifies defects late in the development cycle, leading to costly fixes and delays.
  • High operational costs – The need for large QA teams, expensive testing tools, and ongoing maintenance increases the overall cost of software development.
  • Limited test coverage – Manual and traditional automation approaches often miss edge cases or complex user interactions, increasing the risk of undetected bugs in production.

How Gen AI Transforms QA Testing

The limitations of traditional testing often result in slower development cycles, higher costs, and an increased likelihood of defects reaching end users. To stay competitive, organizations need smarter, more adaptive testing solutions—this is where Gen AI makes a difference.

  • Automated Test Generation
    Gen AI can analyze user stories, requirements, and past test data to automatically create test cases. This reduces the time testers spend writing scripts and ensures comprehensive test coverage. AI-generated tests can even include edge cases that might be overlooked manually.
  • Self-Healing Test Scripts
    One of the biggest pain points in automated testing is script maintenance. When an application’s UI changes, traditional automation scripts break. AI-powered tools detect these changes and automatically update test scripts, minimizing manual intervention (a simple rule-based sketch of this idea follows the list).
  • Smarter Defect Detection
    AI doesn’t just run tests—it learns from past failures. By analyzing historical test data, Gen AI can predict where bugs are likely to occur, helping teams focus their efforts on high-risk areas. This means catching issues before they reach production.
  • Natural Language Test Execution
    With AI test agents that leverage natural language processing (NLP), testers can write test cases in plain English instead of coding them. The AI converts these descriptions into automated test scripts, making test automation more accessible to non-technical team members.
  • Faster Regression Testing
    Automating regression testing is crucial for agile teams. Gen AI enables continuous testing by quickly running thousands of test cases, providing real-time insights and reducing release cycles.
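
To make the self-healing idea concrete, here is a minimal, rule-based sketch in Python using Playwright: if the primary selector breaks after a UI change, the test falls back to alternative locators instead of failing outright. Real AI-powered tools infer replacement locators rather than relying on a hand-written list; the selectors and URL below are hypothetical.

# A rule-based sketch of "self-healing": try a list of candidate locators
# instead of hard-failing on the first broken selector.
from playwright.sync_api import sync_playwright

LOGIN_BUTTON_CANDIDATES = [
    "#login-btn",                 # original selector
    "button[data-test='login']",  # fallback: stable test attribute
    "text=Log in",                # fallback: visible label
]

def click_with_healing(page, candidates):
    for selector in candidates:
        locator = page.locator(selector)
        if locator.count() > 0:
            locator.first.click()
            return selector  # report which locator worked, for later repair
    raise AssertionError(f"No candidate selector matched: {candidates}")

with sync_playwright() as p:
    page = p.chromium.launch().new_page()
    page.goto("https://staging.example.com/login")  # hypothetical URL
    used = click_with_healing(page, LOGIN_BUTTON_CANDIDATES)
    print(f"Clicked login via: {used}")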

The Future of AI in Software Testing

AI-driven testing is evolving rapidly, and its adoption is expected to grow significantly in the coming years. According to market.us, the global AI in software testing market size is expected to be worth around USD 10.6 Billion by 2033, from USD 1.9 Billion in 2023, growing at a CAGR of 18.70% during the forecast period from 2024 to 2033.

Here’s what we can expect in the future:

  • More advanced AI-powered test agents – AI test bots with enhanced NLP capabilities will allow even non-technical users to create and execute automated tests with minimal effort.
  • AI-driven predictive testing – AI will analyze historical defects, system logs, and code changes to anticipate where bugs are likely to occur, allowing teams to focus their testing efforts more effectively.
  • Increased adoption of self-healing tests – Self-healing scripts can automatically adapt to UI changes, minimizing the need for maintenance efforts and manual intervention.
  • Seamless AI integration with DevOps pipelines – AI-driven testing will become a standard component of CI/CD workflows, accelerating software releases while maintaining high quality.
  • Hyperautomation in QA – Combining AI with robotic process automation (RPA) and machine learning will create highly efficient, fully automated testing ecosystems.

As AI continues to improve, software testing will become more autonomous, intelligent, and efficient. Testers will shift their focus from repetitive execution to strategic decision-making, ensuring that AI complements human expertise rather than replacing it.

Gen AI Isn’t Here to Replace Testers—It’s Here to Empower Them

One of the biggest concerns surrounding AI-driven testing is the fear of job displacement. However, the reality is quite the opposite. AI is designed to amplify human capabilities, not replace them. Testers play a critical role in quality assurance, and AI is simply a tool to help them work smarter.

Instead of spending hours on repetitive test execution and debugging broken scripts, testers can now focus on exploratory testing, usability evaluation, and strategic test design. AI helps remove bottlenecks, speeds up the testing process, and allows teams to shift their efforts toward more valuable tasks.
Testers are no longer just bug finders; they are quality enablers. AI allows them to do more in less time, ensuring that software is not only functional but also user-friendly, accessible, and secure.

The Next Step in AI-Powered Testing

For teams looking to embrace AI-driven testing, tools like Kane AI offer a game-changing approach. As the world’s first AI-native QA Agent-as-a-Service platform, Kane AI simplifies test generation, automation, and debugging through natural language. By integrating seamlessly with existing workflows, it helps teams create resilient, scalable tests with minimal manual effort—empowering testers to focus on quality rather than maintenance.

The future of testing belongs to teams that adopt AI as a collaborative partner, leveraging its strengths while focusing on delivering high-quality and user-centric software. As Karim Lakhani said: “AI won’t replace you. But someone using AI will.” The key is to adapt, innovate, and lead with AI—because the future of testing isn’t just automated, it’s intelligent.

Author

Mudit Singh

VP of Growth & Product, LambdaTest



LambdaTest are Exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: Test Automation Tagged With: software testing conference

The Hidden Crisis in Software Quality: Why Unit Tests Aren’t Enough (And What We’re Learning From 100+ Companies)

April 30, 2025 by Aishling Warde

Traditional quality assurance is failing. Despite companies investing millions in testing infrastructure and achieving impressive unit test coverage – often exceeding 90% – critical production issues persist. Why? We’ve been solving the wrong problem.

The Evolution of a Crisis

Ten years ago, unit testing seemed like the silver bullet. Companies built extensive test suites, hired specialized QA teams, and celebrated high coverage metrics. With tools like GitHub Copilot, achieving near-100% unit test coverage is easier than ever. Yet paradoxically, as test coverage increased, so did production incidents.

The Real-World Testing Gap

Here’s what we discovered at SaolaAI after analyzing over 100 companies’ testing practices:

  1. Unit tests create a false sense of security. Teams mock dependencies and test isolated functions, but real-world failures occur at system boundaries (see the example after this list).
  2. Microservice architectures exponentially increase complexity. A single user action might traverse 20+ services, creating millions of potential failure combinations.
  3. The “No QA” movement, while promoting developer ownership, has inadvertently reduced comprehensive testing.
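
A small, hypothetical example of the first point: the mocked unit test below stays green forever, because nothing in it exercises the real system boundary.

# The unit test passes, yet says nothing about the live gateway's contract.
from unittest import mock

def charge(customer_id: str, amount: float, gateway) -> str:
    if amount <= 0:
        raise ValueError("amount must be positive")
    return gateway.charge(customer_id, amount)

def test_charge_unit():
    gateway = mock.Mock()                  # the real service is mocked away
    gateway.charge.return_value = "ok"
    assert charge("c-1", 9.99, gateway) == "ok"

if __name__ == "__main__":
    test_charge_unit()
    # If the live gateway now expects integer cents (999, not 9.99), this
    # test still passes while production fails at the boundary.
    print("unit test passed")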

The E2E Testing Paradox

End-to-end testing is essential for verifying that complex systems function seamlessly, yet companies struggle with major obstacles. Setting up E2E environments can take months, while maintaining test data often turns into a full-time job. Integrating these tests into CI/CD pipelines requires specialized expertise, adding another layer of complexity.

On the technical side, flakiness remains a persistent issue, with failure rates reaching 30-40%. Browser updates frequently break test suites, while asynchronous operations and timing inconsistencies introduce further instability. These challenges make E2E testing notoriously difficult to manage.
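
Much of that flakiness traces back to fixed sleeps racing against asynchronous operations. One common mitigation, sketched below with Playwright (the selectors are hypothetical), is to wait on an observable condition rather than a timer.

from playwright.sync_api import Page, expect

def submit_and_verify(page: Page) -> None:
    # Flaky pattern: page.click("#submit"); time.sleep(3); assert "Done" ...
    # The sleep guesses how long the async operation takes and loses the
    # race under load. Instead, wait (up to a timeout) for the app's own
    # signal of completion:
    page.click("#submit")
    expect(page.locator("#status")).to_have_text("Done", timeout=10_000)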

Beyond the technical barriers, cultural resistance slows adoption. Developers often see E2E testing as solely QA’s responsibility, while product teams prioritize feature development over test reliability. When test suites fail, they are frequently ignored or abandoned rather than fixed, leading to gaps in test coverage and overall software quality.

The AI-Driven Future

Fortunately, modern solutions are emerging that leverage AI to revolutionize testing: from automated test generation based on user behavior and self-healing tests that adapt to UI changes, to intelligent test selection that reduces runtime. The future of AI-driven testing looks bright.

The Way Forward

Quality isn’t just about test coverage – it’s about understanding how systems behave in production:

  1. Shift from code coverage to interaction coverage
  2. Integrate observability with testing
  3. Use ML to predict failure scenarios (a toy sketch follows this list)
  4. Automate maintenance of test suites
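
As a toy illustration of point 3 (the features and data here are invented), a classifier trained on historical test-run data can rank which tests are most likely to fail, so the riskiest run first:

from sklearn.ensemble import RandomForestClassifier

# Features per test: [lines changed in touched files, services crossed,
# failures in the last 30 runs]; label 1 means the test failed on the
# next run.
X = [[120, 4, 3], [5, 1, 0], [300, 9, 7], [12, 2, 1], [80, 5, 4], [2, 1, 0]]
y = [1, 0, 1, 0, 1, 0]

model = RandomForestClassifier(random_state=0).fit(X, y)
risk = model.predict_proba([[150, 6, 2]])[0][1]
print(f"predicted failure risk: {risk:.2f}")  # schedule high-risk tests first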

For too long, we’ve treated quality as a coding problem. It’s time to recognize it as a data problem. By combining AI, machine learning, and traditional testing approaches, we can finally bridge the gap between unit test success and production reliability.

The next evolution in software quality isn’t about writing more tests; it’s about making testing intelligent enough to match the complexity of modern applications.

This is the challenge that inspired SaolaAI: making quality as sophisticated as the systems we’re building. The question isn’t whether AI will transform testing, but how quickly companies will adapt to this new paradigm.

Author

Arkady Fukzon

CEO and Co-Founder, SaolaAI

Saola are exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: Quality Assurance Tagged With: EuroSTAR Conference

Your biggest load testing challenge is adoption—this is where your ROI comes from

April 28, 2025 by Aishling Warde

Load testing is not a technical challenge. It’s not about having the right methodology. At least, not at first. The real challenge? Adoption.

Even if you have the best expertise, you won’t see a major ROI unless enough people in your organization are committed to performance testing.

Adoption beats everything else

Think of load testing like preparing for a marathon. Which training plan would you trust more?

  • Option 1: Intensive training on your own, two weeks before the race.
  • Option 2: Small, consistent team training runs, six months in advance.

Of course, the second option wins. Yet, many organizations fail to spread adoption of load testing because they get stuck on:

  • Lack of time
  • Lack of skills
  • Lack of awareness
  • Lack of prioritization
  • Lack of budget

And if you’ve ever tried to solve these one by one, you already know: it doesn’t work. Because adoption is not a tactical problem—it’s a cultural shift.

3 key moves to drive load testing adoption

To turn load testing into a company-wide practice, focus on three steps:

  1. Shift left: Make it possible to test at any time
  2. Scale vertically: Start small, but build reusable components
  3. Scale horizontally: Make load testing everyone’s job

Let’s dive in.

Step 1: Shift left—make it possible to test anytime

Here’s a hard truth: Load testing is no one’s full-time job.
That means it’s usually the first thing to get cut when deadlines are tight.
The best way to fight this? Make it possible to test at any time, not just at the end of a project. This is what’s called “shift left”—running tests early in development, not just before release.

When choosing a load testing tool, ask yourself:

  • Does it integrate well into our CI/CD pipeline? (Jenkins, GitLab, CircleCI, Travis CI, Azure DevOps, etc.)
  • Does it connect with our project management tools? (Jira, etc.)
  • Does it work inside our development tools? (IDEs, build tools, etc.)

Don’t worry about perfect testing environments yet. Your first goal is simply making testing easy and accessible—the rest will follow.

Step 2: Scale vertically—start small, but build reusable components

A common mistake in load testing is trying to do everything at once:

  • 100% coverage
  • Anonymized production data
  • A testing environment identical to production
  • Simulating massive traffic spikes from day 1

These sound great on paper, but in reality: they are expensive, they take months to implement, and they may not even be necessary.

Instead, start small but smart:

  • Focus on key areas first: some parts of your app are more critical than others.
  • Accept partial coverage: sometimes limited tests give you 90% of the insights.

  • Prioritize real bottlenecks: for example, recreating MFA login in a test suite can take weeks. Is that really where your performance bottleneck is?

Once you’ve secured an early ROI, focus on long-term success. The key? Reusability.

When teams can reuse components, load testing adoption skyrockets:

  • Developers onboard faster
  • Tests require less maintenance
  • Others will create reusable components as well and help you craft more and more complex tests

Load-test-as-code can help here. Storing tests in version control enables reusability, collaboration and scalability. At this stage, you’re close to full adoption—but not quite there yet. For that, you need the final step.
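
As a minimal illustration of the load-test-as-code idea, the sketch below uses plain Python purely for readability (Gatling itself supports Java, Kotlin, JavaScript, and TypeScript). The scenario is an importable, reusable component, and the latency assertion lets a CI pipeline fail the build; the URL and threshold are hypothetical.

import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "https://staging.example.com"  # hypothetical target

def visit_home() -> float:
    # A reusable scenario component that other tests can import.
    start = time.perf_counter()
    urllib.request.urlopen(f"{BASE_URL}/health", timeout=10).read()
    return time.perf_counter() - start

def run(users: int = 20) -> None:
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(lambda _: visit_home(), range(users)))
    p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile
    print(f"p95 latency: {p95:.3f}s")
    assert p95 < 1.0, "p95 latency budget exceeded"  # fails the CI build

if __name__ == "__main__":
    run()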

Step 3: Scale horizontally—make load testing everyone’s job

To spread adoption, you need a structured push to ensure all teams experience load testing at every level, for a limited time.

Here’s how you can kickstart company-wide adoption:

  • Tie your first load testing campaign to a business event to convince your top management to make it a priority: Black Friday, cloud migration, new product launch, etc.
  • Create internal SLAs for all your development teams: define clear ownership across teams.
  • Hold regular performance meetings: make people talk to each other throughout the whole process.
  • Share high-level reports: help leadership understand the business impact of performance and think about long-term business requirements for performance.

Once you achieve this, you’ve made it! Load testing is now everyone’s job. Year after year, your organization will fine-tune its performance strategy, with more and more stakeholders, requirements, and impact!

How Gatling helps you scale horizontally

At Gatling, we’ve spent years refining strategies to help organizations expand adoption across all teams. Here are three key ways we tackle this challenge:

Speaking the developer’s language

If you want developers to adopt load testing, it has to feel natural.

That’s why Gatling evolved from Scala-only to a polyglot solution—supporting Java, Kotlin, JavaScript, and TypeScript.

Lowering the entry barrier with no-code

A no-code approach allows testers, product managers, and non-technical teams to create tests fast.

But no-code should never create silos—it should be a stepping stone to more advanced testing. This is why our no-code builder is also a code generator.

Bridging the gap between functional & load testing

Instead of reinventing the wheel, we asked: how can teams reuse functional tests for load testing?

That’s why we introduced Postman collections as load testing scenarios—allowing teams to repurpose existing functional API tests instantly.

Final thoughts: The key to load testing ROI is adoption

Load testing success isn’t about tools or methodology—it’s about adoption.

When you shift left, build reusable components, and make testing everyone’s job, you create a culture of performance—where load testing isn’t just a last-minute checkbox, but a strategic advantage.

Because once adoption happens, the ROI comes naturally.

Learn more about Gatling

Author

Paul-Henri Pillet

CEO & Co-founder of Gatling, the open-source load testing solution. Together with my business partner, Stéphane Landelle (creator of Gatling OSS), we built a business and tech duo to help organizations scale their applications—so they can scale their business. Today, Gatling supports 300,000 organizations running load tests daily across 100+ countries.



Gatling are Exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: Performance Testing Tagged With: EuroSTAR Conference, software testing conference

Scaling Human Evaluation of AI-Infused Applications

April 22, 2025 by Aishling Warde

It seems like everyone in the quality engineering community is talking about AI. After all, advances in generative AI have been rapidly transforming our tools, frameworks, platforms, processes, and ways of working. AI-assisted software lifecycle activities are becoming more widespread and generally accepted.

However, while everyone has been busy applying AI to testing, the team at Test IO and I have been focused on harnessing the collective intelligence of humans to validate and verify several different types of AI systems. Our approach is currently being utilized to test some of the most sophisticated AI models, assistants, co-pilots and agents at scale.

This article shares our approach to testing what we refer to as AI-infused applications. It describes the grand challenge with testing these types of applications and then discusses the need for human evaluation. After outlining a number of practical techniques, it provides some lessons learned on how to effectively scale the human evaluation of AI/ML.

AI-Infused Applications (AIIA)

Just as with any application, there are many different ways that an AI-based system can be implemented. The particular method used generally depends on the problem being solved, desired capabilities, and any constraints on its development and operation.

Some common approaches to developing AI systems are:

  • Rules-Based AI. Encodes domain expertise into conditional (if-then-else) statements, heuristics, or expert systems.
  • Classical Machine Learning. Training a supervised or unsupervised learning model from scratch using structured datasets and algorithms like random forests, support vector machines, or gradient boosting.
  • Integrating a Pre-Trained Model. Leverages APIs from providers like OpenAI, Google, Anthropic, or Hugging Face to integrate an AI-powered component into an application with minimal development effort.
  • Retrieval Augmented Generation. Combines LLMs with a vector search database to fetch relevant information before generating a response (a toy sketch follows this list).
  • Fine-Tuning a Pre-Trained Model. Uses transfer learning to adapt a pre-trained model to a specialized dataset to improve performance on specific tasks.
  • Agent or Multi-Agent Based AI. Builds autonomous agents using an LLM backend, goal-setting mechanisms, and tools for action execution, e.g., APIs, databases, browser automation. These types of systems may include reinforcement learning (RL) or emergent behavior to allow multiple AI agents to interact, collaborate, or compete in an environment.
  • AI-Orchestrated Workflows. Integrates multiple AI components using workflow orchestration tools such as LangChain, Haystack, or Airflow.
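
As a toy illustration of the retrieval augmented generation pattern above (all data invented): rank documents against the query, prepend the top matches to the prompt, then generate. A real system would use an embedding model and a vector database; here naive word overlap stands in for vector search, and the LLM call is stubbed.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by word overlap with the query (vector-search stand-in).
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:k]

def generate(prompt: str) -> str:
    # Placeholder for a call to an LLM provider (OpenAI, Anthropic, etc.).
    return f"[LLM response grounded in a prompt of {len(prompt)} chars]"

documents = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Password resets require a verified email address.",
]
question = "How long do refunds take?"
context = "\n".join(retrieve(question, documents))
answer = generate(f"Context:\n{context}\n\nQuestion: {question}")
print(answer)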

Each of these approaches has trade-offs in terms of accuracy, efficiency, cost, and interpretability, among others. However, regardless of the approach used, as long as an application leverages AI or ML models or services as part of its logic, we consider it to be an AI-infused application.

Grand Testing Challenge

The rapid pace of growth in the AI space makes it particularly difficult to keep track of all of the different ways these types of systems can be implemented. However, irrespective of the development method, AI-infused applications present a grand challenge for software testing practitioners primarily due to their highly dynamic nature.

Dynamism

AI-infused applications exhibit different levels of dynamism depending on their purpose and capabilities. For example, there are dynamic aspects of predictive, adaptive, and generative AI systems which make them unpredictable, non-deterministic and, as a result, very difficult to test.

  • Predictive AI analyzes historical data to identify patterns and make forecasts about future events or outcomes. These types of systems evolve with data. In other words, the accuracy of predictions depends on continuously updated data and therefore, as new data arrives, retraining or fine-tuning the model helps improve its forecasts. Some predictive systems like stock trading algorithms process real-time data streams, modifying forecasts as conditions change.
  • Adaptive AI continuously learns from new experiences and environmental changes to modify its behavior and improve performance over time. Unlike traditional models, it evolves without requiring explicit reprogramming. For example, self-learning chatbots will personalize their responses over time. Systems like these are context aware and dynamically adjust based on real-world conditions. Adaptive AI can autonomously tweak its internal models and strategies to improve accuracy and efficiency over extended use.
  • Generative AI creates new content such as text, images, code, and music based on learned patterns from vast datasets. The same user prompt sent to a model can generate different responses. Generative models can refine outputs based on user feedback and style preferences, leading to evolving content quality. Model knowledge can be augmented with external sources via retrieval augmented generation, making the overall system highly flexible.

Adequately testing AI systems may involve a combination of pre- and post-release testing, continuous monitoring, automated pipelines, adversarial testing, and human evaluation. These approaches help address quality challenges with AI systems including model drift, bias, fairness, uncertainty, output variability, explainability, hallucinations, and more.

The Importance of Human Evaluation

While automated testing methods for AI systems help to monitor performance, human evaluation is essential to ensure AI aligns with real-world expectations. Here’s why human evaluation is critical and some techniques that can be applied in practice.

Why It Matters

In classical ML systems, automated accuracy metrics such as F1 scores don’t necessarily capture the real-world impact of predictions. Bias and fairness issues often require domain experts to identify potential harms and when it comes to explainability, although some tools provide insights, human judgement is generally needed to interpret them meaningfully. If an AI-infused application is going to interact directly with users, the system must be assessed for user experience and usability.

Adaptive AI systems can potentially start optimizing on the wrong objectives. For example, a common problem with recommendation systems is that they tend to reinforce their own biases. Here’s how (the short simulation after this list makes the loop concrete):

  • The recommendation system suggests content based on past user behavior.
  • The user engages more with that type of content (e.g., specific movie genres, political articles).
  • System interprets this as a strong preference, leading it to narrow future recommendations to similar content.
  • Over time, diversity in recommendations decreases and users are less likely to be exposed to alternative perspectives.
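
A toy simulation of this loop (all numbers invented): items are recommended in proportion to accumulated engagement, and whatever gets shown accumulates more engagement, so the distribution steadily narrows.

import random

genres = ["drama", "comedy", "sci-fi", "documentary"]
engagement = {g: 1.0 for g in genres}  # start from a uniform prior

random.seed(42)
for _ in range(1000):
    total = sum(engagement.values())
    weights = [engagement[g] / total for g in genres]
    shown = random.choices(genres, weights=weights)[0]
    engagement[shown] += 1.0  # being shown more earns more engagement

shares = {g: round(engagement[g] / sum(engagement.values()), 2) for g in genres}
print(shares)  # one genre tends to dominate; diversity has collapsed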

Lastly, AI-generated content is often ambiguous, misleading, or biased, requiring human judgement to assess quality. Automated checks like toxicity filters generally can’t fully capture nuances like sarcasm, cultural sensitivities, and ethical concerns.

Practical Techniques

  • User-Centric Testing. Real users provide feedback on how well AI adapts to changing needs and preferences.
  • Fact-Checking Panels. Subject matter experts verify AI-generated claims for accuracy and credibility.
  • Bias and Harm Assessment. Diverse human reviewers assess content for potential ethical issues and unintended harm.
  • Human Scoring and Annotation. Evaluators rate AI outputs on quality criteria such as coherence, creativity, appropriateness, practicality, among others.

Effectively Scaling Human Evaluation of AIIAs

Over the past 14 months, the team at Test IO has been diligently focused on human evaluation of AI-infused applications for a variety of large enterprise clients. The AI-infused applications under test range from independent chat and voice bots, to code and cloud assistants integrated into software development environments and cloud platforms. So how do you make human evaluation scalable, structured, and reliable? Here are some of the key lessons we’ve learned along the way.

Establish Clear Evaluation Criteria

This involves defining structured rubrics for human reviewers to ensure consistency. Figure 1 shows a sample deliverable including a cross-section of quality criteria.

Figure 1: Quality Criteria Example Showing the Results of Human Evaluation of AIIAs

Leverage Internal and External Communities

Diverse human expertise may be crowdsourced internally from your own pool of people, or externally via user testing communities. As shown in Figure 2, we capture a diverse set of perspectives by using the Test IO crowdsourcing platform to run test cycles using internal employees or external freelancers, or a combination of the two.

Figure 2: Access to Diverse Set of Human Evaluators including Internal Experts and External Freelancers

Combine Human and AI Judges

Automated tools, or another LLM, can be used for initial screening, followed by human reviewers for deeper analysis. Not only can this technique be applied to accelerate the evaluation activities, but it also facilitates comparing human versus automated evaluation. The confusion matrix in Figure 3 illustrates the correlation between human ground truths and labels generated by GPT-4. Such an artifact can be used to flag cases where the LLM assigns “irrelevant” to something that the human assigned as highly relevant, and vice-versa.

Figure 3: Confusion Matrix of Human Scores versus LLM Scores
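
For readers who want to reproduce this kind of artifact, here is a minimal sketch, assuming scikit-learn and a three-level relevance scale (the labels below are invented):

from sklearn.metrics import confusion_matrix

labels = ["irrelevant", "partially relevant", "highly relevant"]
human = ["highly relevant", "irrelevant", "partially relevant",
         "highly relevant", "irrelevant", "highly relevant"]
llm = ["highly relevant", "irrelevant", "irrelevant",
       "partially relevant", "irrelevant", "highly relevant"]

matrix = confusion_matrix(human, llm, labels=labels)
print(matrix)  # off-diagonal cells flag human/LLM disagreements to review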

Continuously Incorporate Human Feedback

Crowd-sourced human evaluation of AI-infused applications is applicable to several dimensions of testing. Test cycles can be exploratory, focusing on the early stages of app development or on new features. When issues are discovered, user feedback can be fed back into the model via approaches like reinforcement learning from human feedback. After the given model is updated or re-trained, test cycles can be executed as a form of regression using humans, automated tools, or a combination of both. Figure 4 provides a side-by-side comparison of these two general modes of conducting AIIA testing at scale using crowd sourcing.

Figure 4: Crowd-Sourced Exploratory Testing and Regression Testing of AIIA

Conclusion

For now, AI systems are too complex and dynamic to be tested solely through automation. Human evaluation is indispensable for detecting biases, verifying real-world applicability, and ensuring an ethical and engaging user experience. By integrating structured human oversight and deploying it using a scalable outcome-based model, we can look towards a future where AI systems are not only technically robust, but also aligned with societal values and user expectations.

Author

Tariq King, CEO and Head of Test IO

Tariq King is a recognized thought-leader in software testing, engineering, DevOps, and AI/ML. He is currently the CEO and Head of Test IO, an EPAM company. Tariq has over fifteen years’ professional experience in the software industry, and has formerly held positions including VP of Product-Services, Chief Scientist, Head of Quality, Quality Engineering Director, Software Engineering Manager, and Principal Architect. He holds Ph.D. and M.S. degrees in Computer Science from Florida International University, and a B.S. in Computer Science from Florida Tech. He has published over 40 research articles in peer-reviewed IEEE and ACM journals, conferences, and workshops, and has written book chapters and technical reports for Springer, O’Reilly, Capgemini, Sogeti, IGI Global, and more. Tariq has been an international keynote speaker and trainer at leading software conferences in industry and academia, and serves on multiple conference boards and program committees.

Outside of work, Tariq is an electric car enthusiast who enjoys playing video games and traveling the world with his wife and kids.



EPAM are Exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: Software Testing Tagged With: software testing conference

Taking Your First Steps with GenAI in Quality Engineering

April 21, 2025 by Aishling Warde

Generative AI (GenAI) is increasingly being recognized for its potential to enhance quality engineering by generating content from existing information to achieve known outcomes. However, determining where to begin can be challenging.

Key Activities for GenAI Implementation

  • Reviewing Requirements/User Stories – Use GenAI to analyze and refine requirements, ensuring clarity and completeness.
  • Generating Test Cases/Scenarios – GenAI can quickly generate diverse scenarios, reducing the manual effort involved.
  • Generating Test Scripts – Generate test scripts, or agile session sheets, that can be used for manual testing.
  • Generating Automation Code – Focus on small functions rather than entire frameworks to incrementally enhance your automation suite.
  • Generating Bug Reports – GenAI can help standardize bug reports, making them more useful for developers and saving you time during the execution process.

Choosing the Right Starting Point

When choosing where to start, identify the activity that causes the most pain or time loss and poses the least risk if the outcome isn’t perfect. This strategic choice will free up time to invest in other areas of the lifecycle.

Start Small and Scale Gradually

Remember, the key to successful GenAI implementation is to start small. Master one activity before scaling to others. By doing so, you can gradually build confidence and expertise, ultimately enhancing your quality engineering processes with GenAI.

Which GenAI tool to use?

There are many free or low-cost GenAI tools available today. Here are some you can consider:

  • ChatGPT
  • Gemini
  • Claude
  • Microsoft Copilot

In most cases you can use the free versions of these tools, but be aware that there may be limits on transaction rates, and your data may be used for training. Speaking of which…

Security & Privacy

Data security and privacy are critical concerns for GenAI usage. Some vendors offer paid versions that provide greater data security and privacy features.

Evaluate the data privacy policies of the associated services to ensure they align with your organizational requirements.

NEVER Put Sensitive or PII Information Into GenAI Tools!

Quick Start guide to Prompt Engineering

So you’ve identified the activity you want to automate and selected your preferred GenAI tool. Now to write your first prompt!

Use a robust prompt engineering pattern like R.I.S.E. (Role, Input, Steps, Expected Output) to help guide GenAI to produce the desired results. The following is an example prompt for creating high-level test scenarios:

# Role: You are a software tester
 
# Input: I will provide a requirement
 
# Steps: I want you to generate high-level test scenarios. You should ensure that you generate positive and negative test cases, using equivalence partitioning and boundary analysis.
 
# Expected Output: The response should be in a table with the following headings: 
- TS-ID: A unique test scenario identifier starting with "TS-" 
- Description: A description of the test scenario 
- Type: A value of "Positive" or "Negative" that indicates if the test case is a positive or negative scenario. 
- Expected Outcome: The expected outcome from the test scenario

R.I.S.E. is just one pattern, so try experimenting with different ones to see what works best for your input and output.
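
Once a prompt works well interactively, you can also call it programmatically. Below is a minimal sketch assuming the OpenAI Python client (v1+); any provider with a chat endpoint works much the same way, and the model name and requirement are placeholders. As above, never include sensitive or PII data in the requirement text.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requirement = "Users can reset their password via a link that expires in 24 hours."
prompt = (
    "# Role: You are a software tester\n"
    f"# Input: {requirement}\n"
    "# Steps: Generate high-level positive and negative test scenarios, "
    "using equivalence partitioning and boundary analysis.\n"
    "# Expected Output: A table with TS-ID, Description, Type, Expected Outcome."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)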

Experiment and Have Fun!

Author

Jarrod Plant

Jarrod Plant is a seasoned professional with 20 years of experience in software testing consulting, providing him with a diverse range of skills and knowledge in various industries, tools, and corporate cultures. He has a technical background, with experience in both Automation & Performance testing, that is balanced by his experience in customer solutions driving a value-focused delivery.

Driven by his passion for the potential of Artificial Intelligence, Jarrod currently serves as the product owner of Planit’s Quality Engineering-centric Generative AI platform. This role provided him with invaluable firsthand experience in building, managing, and testing a generative AI platform. He believes that we are at the precipice of an evolution in software delivery and testing, brought on by the power and accessibility of AI.

Planit are Exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: Test Automation Tagged With: 2025, EuroSTAR Conference

The Role of “Digital Tester” in Quality Engineering

April 16, 2025 by Aishling Warde

In today’s world, there is a profound transformation happening in how we compete, create and capture value. With the speed at which technologies like Generative AI and Agentic AI are adopted widely, the whole relationship between humans and machines is getting redefined.

Quality engineering is no exception. The adoption of AI into testing tools, and the development and testing of AI applications, are both on the rise, with new tools and strategies emerging daily.

Enterprises now compete on how they run their entire quality engineering operation. Your competitor is running their QE operations with a team one-third the size of yours, without compromising scale or accuracy. In fact, they are growing twice as fast as you are. How?

While most enterprises are still deploying the likes of ChatGPT for generating content and creating chatbots, very few are fundamentally reimagining quality engineering with AI. They are deploying the “Digital Tester”: a digital teammate or digital colleague implemented using Agentic AI.

While these digital colleagues can drive quality engineering at incredible speed and scale, they also have their own unique characteristics and limitations. Understanding these characteristics not only guides us in what tasks to delegate to these agents, but also helps build a strong relationship that maximizes the potential of both humans and machines.

As these digital testers evolve from simple automation tools into complex autonomous agents, it is important to select the right digital tester based on factors such as your use cases, technical complexity, and implementation costs. It is very similar to onboarding a new team member and integrating them into your existing team.

Although it may look like a simple choice, i.e. selecting the most autonomous digital tester to reduce manual dependency and improve speed, it is wiser to opt for a digital tester that supports the entire continuum from automation to autonomy. This allows the flexibility to apply the different digital tester skills below according to your testing needs.

  • Predictable and consistent behavior of digital testers with pre-defined rules; no learning or adaptation
  • Digital tester leveraging LLMs for constraint awareness, but behavior is validated against predefined rules
  • Digital tester with reasoning and action; multi-step workflows are broken down into smaller actionable paths
  • Digital tester’s reasoning and action combined with RAG for external knowledge sources
  • Digital tester integrated with multiple tools for leveraging APIs and other software
  • Self-reflecting/analyzing digital tester using feedback loops
  • Digital tester that recalls relevant past experiences and preferences, and uses this context for reasoning
  • Digital testers that actively manipulate and control digital/physical environments in real time
  • Digital testers that improve themselves over time, learning from interactions, adapting to new environments, and evolving

In its simplest terms, the testing needs of any enterprise can be broadly categorized into the “What”, “How” and “When” of a software feature, and digital testers with the above skills can help with all three. AI-assisted testing addresses the “What”: for example, AI pattern recognition helps testers know which parts of the application are likely problematic, based on analysis of past test cases and historical data. AI-powered testing addresses the “How”: for example, AI-driven self-healing ensures test cases remain valid when changes occur, without manual intervention. And AI agents address the “When”: for example, self-learning AI can spot unusual behaviour in the application by learning from each test case it executes, or independently explore the application to discover unexpected issues.

Another important consideration when selecting a digital tester is its ability to test AI/ML systems. As AI and ML become more prevalent in our lives, it is crucial to ensure these systems are thoroughly tested to work as intended.

When selecting your digital tester, make sure it can overcome challenges such as non-determinism, a lack of adequate and accurate training data, testing for bias, interpretability, and sustained testing, and that it supports the critical aspects of AI systems testing: data curation and validation, algorithm testing, performance and security testing, and regulatory compliance, e.g. compliance with the relevant country’s AI act.

Summing Up

The rise of AI and Generative AI marks one of the most transformative shifts in our lives. Over the past decade, advancements in machine learning, deep learning and neural networks have taken artificial intelligence from theoretical concepts into real-world applications. This evolution has revolutionized quality engineering, where AI has become an integral part of traditional testing platforms and tools. These platforms are no longer mere tools; they have evolved into complete Digital Testers that can join your quality engineering team and collaborate with humans to deliver exceptional results. As businesses increasingly use AI to build systems and applications, these Digital Testers are now, in turn, used to test those AI applications.

AI testing approaches, procedures and platforms will continue to evolve and improve over the next few years, with Digital Testers eventually reaching maturity and standardization across the quality engineering landscape.

Authors

Keval Hutheesing, Chief Executive Officer, Cygnet.One

Keval Hutheesing, Chief Executive Officer of Cygnet.One, spearheads the organization’s strategic evolution toward scalable, high-performance technology solutions with quality engineering at its core. His visionary leadership integrates quality throughout the development lifecycle—driving automation, compliance, and operational excellence.
Keval positions quality engineering as the strategic foundation that accelerates business outcomes, ensuring consistent delivery, proactive risk mitigation, and exceptional customer experiences. Through his implementation of a comprehensive quality framework, he propels Cygnet.One’s transformation into a sophisticated platform-driven ecosystem where excellence is intrinsically woven into every aspect of operations.



Shivangi Dubey – AVP & Head of Quality Engineering, Cygnet One

With a rich background steeped in over 15 years of expertise in Quality Engineering and Product Management, Shivangi is a seasoned leader in driving transformative journeys and cost optimization through innovative approaches. She excels in securing new business, executing successful Testing Automation projects, and implementing comprehensive testing strategies.
Renowned for her problem-solving prowess and visionary leadership, she collaborates with customers to expand testing footprints and drive innovation. Experienced in strategic consulting, business development, and process standardization, achieving excellence is her way of life. As the Head of Quality Engineering at Cygnet.One, she brings a stellar track record and an unwavering commitment to excellence.

Cygnet One are Exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.



Filed Under: Test Automation Tagged With: EuroSTAR Conference, Test Automation

Unlock the Fun with Passport Around the EXPO at EuroSTAR Conference!

April 15, 2025 by Aishling Warde

At the EuroSTAR Conference EXPO, we’re all about creating engaging, interactive experiences for our delegates. The Passport Around the EXPO is a fun and rewarding challenge for attendees, and a fantastic way for exhibitors to connect with more visitors.

What is Passport Around the EXPO?

The Passport Around the EXPO is an exciting delegate challenge designed to get attendees actively exploring the EXPO floor. Every delegate will receive a ‘passport’ in their conference bag displaying each opted-in exhibitor’s logo, and must visit partner stands to have their ‘passport’ card stamped.

This initiative serves as a great icebreaker and provides an incentive for delegates to stop by and engage with your booth, while also offering a fun and memorable experience at the conference. It’s a simple, effective way to increase foot traffic to your stand and raise awareness about your company, product, or service.

Why Gamification Works at Booths

Gamification — using game-like elements in non-game contexts — has proven to be an effective strategy in boosting booth engagement. Here are some key statistics showing why this approach works:

  • 85% of attendees are more likely to remember a brand that incorporates a gamified experience at an event.
  • 78% of exhibitors report that gamification increases foot traffic to their booths.
  • 70% of booth visitors are more engaged when interactive activities, like games or challenges, are involved.

Gamified experiences can increase booth engagement by 30% or more compared to traditional static displays.

By participating in the Passport Around the EXPO, your booth becomes part of an engaging experience, increasing the likelihood of attendees stopping by, interacting with your team, and learning about your offerings.

How Does It Work?

Step 1: Participants receive a passport card upon arrival in their EuroSTAR Conference Swag Bag.

Step 2: Delegates visit partner stands throughout the EXPO and get their passport stamped.

Step 3: The challenge is completed once delegates collect all stamps.

The more stamps delegates collect, the more they’ll immerse themselves in the excitement and fun of the conference—and increase their chances of winning the very first ticket to next year’s EuroSTAR Conference!

Why Should You Participate?

Passport Around the EXPO isn’t just a way to engage delegates; it’s also a fantastic opportunity for your booth to stand out. By participating, you’ll be:

  • Maximising Brand Exposure: Participating in the Passport Around the EXPO puts your logo directly into the hands of every conference attendee. It’s a high-impact way to boost brand visibility and ensure your company is top-of-mind as delegates navigate the EXPO Hall.
  • Building Connections: It offers a great reason for delegates to stop and chat with you, providing an opening for meaningful conversations and relationship-building.
  • Creating a Fun, Interactive Experience: It makes your booth more interactive, turning a simple visit into an engaging experience that delegates will remember.

How Can You Get Involved?

The best part? Participation is completely free and optional. All you need to do is sign up, and we’ll take care of the rest — including providing the Passport cards and stamps. There’s no need for you to supply a prize; EuroSTAR has that covered. All you need to do is be ready to stamp passports and connect with attendees who stop by your stand!

This initiative is a fantastic way to bring a bit of excitement and fun to your EuroSTAR experience, while also boosting your visibility and creating new connections.

We Can’t Wait to See You!

So, if you’re exhibiting at the EuroSTAR Conference, don’t miss out on the Passport Around the EXPO initiative! It’s a fun and easy way to engage with delegates and make the most of your time at one of the largest and most prestigious testing conferences in Europe.

We look forward to seeing you at the conference!

Author

Clare Burke

EXPO Team, EuroSTAR Conferences


With years of experience and a passion for all things EuroSTAR, Clare has been a driving force behind the success of our EXPO. She’s the wizard behind the EXPO scenes, connecting with exhibitors, soaking up the latest trends, and forging relationships that make the EuroSTAR EXPO a vibrant hub of knowledge and innovation.


t: +353 91 416 001
e: clare@eurostarconferences.com

Filed Under: EuroSTAR Expo, Uncategorized Tagged With: 2025, EuroSTAR Conference

Agentic testing for the enterprise: Ushering in a new era of software testing

April 14, 2025 by Aishling Warde

In today’s fast-paced world of software development, the need for effective testing has never been more important. Conventional approaches to testing are often challenged by rapid release cycles and complicated integrations, leading to slow delivery, high costs, and low software quality. Agentic testing is an innovative way to address these challenges. This transformative approach to software testing empowers testers with advanced AI capabilities, allowing them to automate much broader, more non-deterministic, and non-linear efforts in testing. AI agents take on the tedious and time-consuming tasks, providing a level of productivity that is not possible with conventional testing methods.

So how can you get started with agentic testing? We’re bringing agentic testing to life with the launch of UiPath Test Cloud, now generally available. UiPath Test Cloud—the next evolution of UiPath Test Suite—equips software testing teams with a fully featured testing solution that accelerates and streamlines testing for over 190 applications and technologies, including SAP®, Salesforce, ServiceNow, Workday, Oracle, and EPIC. Let’s take a closer look.

Comprehensive testing capabilities for the enterprise

Test Cloud is an environment where software testers feel at home. It’s your solution for bringing agentic testing to life—augmenting you with AI agents across the entire testing lifecycle. Zooming in, Test Cloud is a fully featured platform designed to serve all your testing needs. Whether it’s functional or performance testing, Test Cloud empowers you with open, flexible, and responsible AI across every stage, from test design and test automation, to test execution and test management.

And it’s built for scale—with everything you need to handle the largest and most complex testing projects. It helps you design smarter tests with capabilities like change impact analysis and test gap analysis, ensuring a risk-based, data-driven approach to testing. It gives you the flexibility to automate tests the way you want, whether it’s low-code or coded user interface (UI) and API automation, across platforms.

Plus, with continuous integration and continuous delivery (CI/CD) integrations and distributed test execution, Test Cloud seamlessly fits into your ecosystem while accelerating your testing to keep up with rapid development cycles. And when it comes to test management, Test Cloud has you covered with 50+ application lifecycle management (ALM) integrations, as well as a rich set of test data management and reporting capabilities.

Unlock built-in and customizable AI for the entire testing lifecycle with UiPath Autopilot™ for Testers

What makes agentic testing truly agentic? AI agents. With UiPath Autopilot for Testers, our first-party AI agent available in Test Cloud, you’re equipped with built-in, customizable AI that accelerates every phase of the testing lifecycle.

Leverage Autopilot to enhance the test design phase through capabilities such as:

  • Quality-checking requirements
  • Generating tests for requirements
  • Generating tests for SAP transactions
  • Identifying tests requiring updates
  • Detecting obsolete tests

Then, use Autopilot to take your test automation to the next level through capabilities such as:

  • Generating low-code test automation
  • Generating coded user interface (UI) and API automation
  • Generating synthetic test data
  • Performing fuzzy verifications
  • Generating expressions
  • Refactoring coded test automation
  • Fixing validation errors in test automation
  • Self-healing test automation

And enhance test management with Autopilot capabilities such as:

  • Generating test insights reports
  • Importing manual test cases
  • Searching projects in natural language

Any type of tester—from a developer tester to a technical tester to a business tester—can use Autopilot to build resilient automations more quickly, unlock new use cases, and improve accuracy and time to value. Organizations are already yielding tangible benefits from this versatility and efficiency, as showcased by Cisco’s experience with Autopilot in accelerating their testing processes.

“At Cisco, our GenAI testing roadmap centers on leveraging UiPath LLM capabilities throughout the entire testing lifecycle, from creating user stories to generating test cases to reporting, while ensuring seamless integration with code repositories,” said Rajesh Gopinath, Senior Leader, Software Engineering at Cisco. “With the power of Autopilot, we’re equipped to eliminate manual testing by 50%, reduce the tools used in our testing framework, and reduce dependency on production data for testing.”

Build your own AI agents tailored specifically to your unique testing needs with Agent Builder

Now, let’s meet the toolkit for building AI agents tailored to your testing needs: UiPath Agent Builder. Leverage a prebuilt agent from the Agent Catalog, or build your own agent using the following components:

  • Prompts: define natural language prompts with goals, roles, variables, and constraints
  • Context: use active and long-term memory to inform the plan with context grounding
  • Tools: define UI/API automations and/or other agents that are invoked based on a prompt
  • Escalations: ask people for guidance with UiPath Action Center or UiPath Apps
  • Evaluations: ensure the agent meets your desired objectives and behaves reliably in various scenarios

Looking for inspiration to jumpstart your first attempt at building an agent? Here are some recommendations for agents that you can build to help accelerate your testing:

  • Data Retriever: helps find test data for exploratory testing sessions in databases
  • Bug Consolidator: identifies distinct bugs behind failed test cases after nightly test runs
  • Compliance Checker: finds test cases that do not adhere to best practices
  • Stability Inspector: identifies flaky tests, repeatedly failed tests, and false positives

These are just a few agents that augment your expertise throughout the testing lifecycle. Join the Agent Builder waitlist to be the first in line to try your hand at building one.

Open, flexible, and responsible

Beyond AI agents, what does Test Cloud offer that helps you engage in agentic testing?

With UiPath Test Cloud, you can harness the power of an open and flexible architecture that seamlessly integrates with your existing tools, including connections with your CI/CD pipelines, ALM tools, and version control systems, as well as webhooks that keep you informed in real time. This flexibility ensures that Test Cloud adapts to your unique enterprise needs.

When it comes to responsible AI, you benefit from the UiPath AI Trust Layer, which provides you with explainable AI, bias detection, and robust privacy protection. You can confidently meet regulatory requirements and internal governance standards thanks to comprehensive auditability features. By embracing the open architecture and responsible AI capabilities of Test Cloud, you’re not just streamlining your testing process–you’re future-proofing your software quality with intelligent, efficient, and trustworthy technology that grows with your team’s needs.

Resilient end-to-end automation

With UiPath Test Cloud, you can unlock the power of resilient end-to-end automation that will enhance your testing processes. Experience seamless automation capabilities for any UI or API, giving you unparalleled flexibility in your testing approach. Whether you’re working with home-grown web and mobile applications or complex enterprise systems like SAP, Oracle, Workday, Salesforce, and ServiceNow, you can engage in automated testing that covers all aspects of your software ecosystem. By leveraging powerful end-to-end automation, you’ll not only improve the efficiency of your testing processes but also gain greater confidence in the quality and reliability of your software releases. Customers like Dart Container, Quidelortho, Orange Spain, and Cushman and Wakefield have achieved 90% automation rates, 30-40% cost reduction, 6X faster release speeds, and other significant benefits through using UiPath automated testing capabilities.

Production-grade architecture and governance

You and your team may face the challenge of maintaining a secure, scalable, and compliant testing infrastructure that can keep up with your agile development processes. With Test Cloud, you’re equipped with a production-grade architecture and robust governance features that will transform your agentic testing experience.

Benefit from Veracode certification, ensuring your testing environment meets the highest security standards and giving you peace of mind. Comprehensive auditing capabilities provide you with detailed insights into all testing activities, enabling you to maintain full transparency and easily demonstrate compliance. You also have granular role management features, allowing you to precisely control access and permissions, ensuring that the right people have the right level of access at all times. With centralized credential management, you can streamline security processes and reduce the risk of unauthorized access, making it easier than ever to manage and protect sensitive testing data.

Powered by the UiPath Platform

When you choose UiPath Test Cloud, you’re not just getting a standalone testing solution–you’re tapping into the power of the entire UiPath Platform™. This opens up a world of possibilities for streamlining your testing processes and boosting your overall automation efforts. You’ll benefit from shared and reusable components across teams, allowing you to leverage expertise and reduce duplication of effort. EDF Renewables, for example, achieved 75% component reuse by leveraging testing capabilities within the UiPath Platform. With access to the UiPath Marketplace, you’ll have a wealth of prebuilt solutions at your fingertips, accelerating your testing initiatives. Access to snippets and libraries empowers you to create modular, reusable code that can be easily shared and maintained across your organization. Plus, you can leverage centralized object repositories, which simplify test maintenance and improve consistency across your automation projects. Additionally, the robust asset management capabilities ensure that you can efficiently organize, version, and deploy your automation assets enterprise-wide, maximizing the value of your organization’s investment in the UiPath Platform™.

The benefits of agentic testing with UiPath Test Cloud

No matter your role at your organization, you can start reaping the benefits of Test Cloud for agentic testing right away. As a CIO, you’ll experience increased efficiency, reduced costs, and better resource utilization, ultimately leading to faster time-to-market and enterprise-wide automation. Testing team leads will benefit from improved consistency and reliability, increased productivity, and better defect detection, while standardizing testing processes and achieving unprecedented scalability. For testers, Test Cloud offers increased accuracy and efficiency, enhanced test coverage, and faster feedback loops, resulting in higher job satisfaction. The tangible benefits are clear: based on an in-depth study conducted by IDC, customers using UiPath for testing have achieved $4M average annual savings per customer, 529% three-year return on investment, and 6 months payback on investment.

With agentic testing powered by Test Cloud, all roles will enjoy accelerated test cycles, deeper test coverage, and reduced risk, all while realizing significant cost savings and resource optimization. This comprehensive and adaptive testing approach will empower your organization to deliver high-quality software faster than ever before, accelerating your time to value and giving you a competitive edge in today’s fast-paced software landscape. This vision of AI-augmented testing is not just theoretical; forward-thinking organizations like State Street are already anticipating how it will transform their testing processes.

The future of agentic testing

Test Cloud isn’t just built for the testing you know today—it’s built for where testing is going. With Test Cloud, you’re not just keeping up with increasing testing demands—you’re staying ahead. Get started with UiPath Test Cloud by signing up for the trial today.

Author

Sophie Gustafson

Product Marketing Manager, Test Cloud, UiPath

Filed Under: EuroSTAR Conference Tagged With: 2025, EuroSTAR2025
