
Agentic testing for the enterprise: Ushering in a new era of software testing

April 14, 2025 by Aishling Warde

In today’s fast-paced world of software development, effective testing has never been more important. Conventional approaches to testing are often challenged by rapid release cycles and complicated integrations, leading to slow delivery, high costs, and low software quality. Agentic testing is an innovative way to address these challenges. This transformative approach to software testing empowers testers with advanced AI capabilities, allowing them to automate much broader, non-deterministic, and non-linear testing efforts. AI agents take on the tedious and time-consuming tasks, providing a level of productivity that is not possible with conventional testing methods.

So how can you get started with agentic testing? We’re bringing agentic testing to life with the launch of UiPath Test Cloud, now generally available. UiPath Test Cloud—the next evolution of UiPath Test Suite—equips software testing teams with a fully featured testing solution that accelerates and streamlines testing for over 190 applications and technologies, including SAP®, Salesforce, ServiceNow, Workday, Oracle, and EPIC. Let’s take a closer look.

Comprehensive testing capabilities for the enterprise

Test Cloud is an environment where software testers feel at home. It’s your solution for bringing agentic testing to life—augmenting you with AI agents across the entire testing lifecycle. Zooming in, Test Cloud is a fully featured platform designed to serve all your testing needs. Whether it’s functional or performance testing, Test Cloud empowers you with open, flexible, and responsible AI across every stage, from test design and test automation to test execution and test management. And it’s built for scale—with everything you need to handle the largest and most complex testing projects.

It helps you design smarter tests with capabilities like change impact analysis and test gap analysis, ensuring a risk-based, data-driven approach to testing. It gives you the flexibility to automate tests the way you want, whether it’s low-code or coded user interface (UI) and API automation, across platforms. Plus, with continuous integration and continuous delivery (CI/CD) integrations and distributed test execution, Test Cloud seamlessly fits into your ecosystem while accelerating your testing to keep up with rapid development cycles. And when it comes to test management, Test Cloud has you covered with 50+ application lifecycle management (ALM) integrations, as well as a rich set of test data management and reporting capabilities.

Unlock built-in and customizable AI for the entire testing lifecycle with UiPath Autopilot™ for Testers

What makes agentic testing truly agentic? AI agents. With UiPath Autopilot for Testers, our first-party AI agent available in Test Cloud, you’re equipped with built-in, customizable AI that accelerates every phase of the testing lifecycle.

Leverage Autopilot to enhance the test design phase through capabilities such as:

  • Quality-checking requirements
  • Generating tests for requirements
  • Generating tests for SAP transactions
  • Identifying tests requiring updates
  • Detecting obsolete tests

Then, use Autopilot to take your test automation to the next level through capabilities such as:

  • Generating low-code test automation
  • Generating coded user interface (UI) and API automation
  • Generating synthetic test data
  • Performing fuzzy verifications
  • Generating expressions
  • Refactoring coded test automation
  • Fixing validation errors in test automation
  • Self-healing test automation

And enhance test management with Autopilot capabilities such as:

  • Generating test insights reports
  • Importing manual test cases
  • Searching projects in natural language

Any type of tester—from a developer tester to a technical tester to a business tester—can use Autopilot to build resilient automations more quickly, unlock new use cases, and improve accuracy and time to value. Organizations are already yielding tangible benefits from this versatility and efficiency, as showcased by Cisco’s experience with Autopilot in accelerating their testing processes.

“At Cisco, our GenAI testing roadmap centers on leveraging UiPath LLM capabilities throughout the entire testing lifecycle, from creating user stories to generating test cases to reporting, while ensuring seamless integration with code repositories,” said Rajesh Gopinath, Senior Leader, Software Engineering at Cisco. “With the power of Autopilot, we’re equipped to eliminate manual testing by 50%, reduce the tools used in our testing framework, and reduce dependency on production data for testing.”

Build your own AI agents tailored specifically to your unique testing needs with Agent Builder

Now, let’s meet the toolkit for building AI agents tailored to your testing needs: UiPath Agent Builder. Leverage a prebuilt agent from the Agent Catalog, or build your own agent using the following components:

  • Prompts: define natural language prompts with goals, roles, variables, and constraints
  • Context: use active and long-term memory to inform the plan with context grounding
  • Tools: define UI/API automations and/or other agents that are invoked based on a prompt
  • Escalations: ask people for guidance with UiPath Action Center or UiPath Apps
  • Evaluations: ensure the agent meets your desired objectives and behaves reliably in various scenarios
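
To make these components concrete, here is a minimal sketch of how the five pieces might fit together, written as plain Python rather than the actual Agent Builder interface (which this post does not show); every class, field, and value below is purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class AgentDefinition:
    """Hypothetical container for the five component types listed above."""
    prompt: str                     # goal, role, variables, and constraints
    context_sources: list[str]      # active/long-term memory used for grounding
    tools: list[str]                # automations or other agents it may invoke
    escalation_channel: str | None  # where a human is asked for guidance
    evaluations: list[str]          # checks that it behaves reliably

stability_inspector = AgentDefinition(
    prompt=("You are a test-stability analyst. Review last night's test runs "
            "and flag tests that failed intermittently without a code change."),
    context_sources=["recent_test_runs", "flaky_test_history"],
    tools=["fetch_test_results", "open_defect_report"],
    escalation_channel="qa-lead-review-queue",
    evaluations=["flags_known_flaky_test", "ignores_genuine_regressions"],
)
```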

Looking for inspiration to jumpstart your first attempt at building an agent? Here are some recommendations for agents that you can build to help accelerate your testing:

  • Data Retriever: helps find test data for exploratory testing sessions in databases
  • Bug Consolidator: identifies distinct bugs behind failed test cases after nightly test runs
  • Compliance Checker: finds test cases that do not adhere to best practice
  • Stability Inspector: identifies flaky tests, repeatedly failed tests, and false positives

These are just a few agents that augment your expertise throughout the testing lifecycle. Join the Agent Builder waitlist to be the first in line to try your hand at building one.
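
For a taste of what one of these agents does under the hood, here is a minimal sketch of the idea behind the Bug Consolidator: group failed tests by a normalized error signature so that one underlying bug surfaces once instead of as dozens of separate failures. This is illustrative Python, not UiPath code, and every name in it is invented:

```python
import re
from collections import defaultdict

def normalize(error_message: str) -> str:
    """Collapse volatile details (addresses, timestamps, numbers) so that
    failures caused by the same bug share a single signature."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "<addr>", error_message)
    sig = re.sub(r"\d{4}-\d{2}-\d{2}[T ]?[\d:.]*", "<timestamp>", sig)
    return re.sub(r"\d+", "<n>", sig).strip()

def consolidate(failures: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Map each distinct error signature to the tests that hit it."""
    buckets: dict[str, list[str]] = defaultdict(list)
    for test_name, error in failures:
        buckets[normalize(error)].append(test_name)
    return buckets

nightly = [
    ("test_login", "Timeout after 30s at 2025-04-01T02:14:07"),
    ("test_checkout", "Timeout after 31s at 2025-04-01T02:19:55"),
    ("test_export", "NullReference at 0x7f3a2c"),
]
for signature, tests in consolidate(nightly).items():
    print(f"{len(tests)} test(s) share: {signature}")
```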

Open, flexible, and responsible

Beyond AI agents, what does Test Cloud offer that helps you engage in agentic testing?

With UiPath Test Cloud, you can harness the power of an open and flexible architecture that seamlessly integrates with your existing tools, including connections with your CI/CD pipelines, ALM tools, and version control systems, as well as webhooks that keep you informed in real time. This flexibility ensures that Test Cloud adapts to your unique enterprise needs.

When it comes to responsible AI, you benefit from the UiPath AI Trust Layer, which provides you with explainable AI, bias detection, and robust privacy protection. You can confidently meet regulatory requirements and internal governance standards thanks to comprehensive auditability features. By embracing the open architecture and responsible AI capabilities of Test Cloud, you’re not just streamlining your testing process–you’re future-proofing your software quality with intelligent, efficient, and trustworthy technology that grows with your team’s needs.

Resilient end-to-end automation

With UiPath Test Cloud, you can unlock the power of resilient end-to-end automation that will enhance your testing processes. Experience seamless automation capabilities for any UI or API, giving you unparalleled flexibility in your testing approach. Whether you’re working with home-grown web and mobile applications or complex enterprise systems like SAP, Oracle, Workday, Salesforce, and ServiceNow, you can engage in automated testing that covers all aspects of your software ecosystem. By leveraging powerful end-to-end automation, you’ll not only improve the efficiency of your testing processes but also gain greater confidence in the quality and reliability of your software releases. Customers like Dart Container, Quidelortho, Orange Spain, and Cushman and Wakefield have achieved 90% automation rates, 30-40% cost reduction, 6X faster release speeds, and other significant benefits through using UiPath automated testing capabilities.

Production-grade architecture and governance

You and your team may face the challenge of maintaining a secure, scalable, and compliant testing infrastructure that can keep up with your agile development processes. With Test Cloud, you’re equipped with a production-grade architecture and robust governance features that will transform your agentic testing experience.

Benefit from Veracode certification, ensuring your testing environment meets the highest security standards and giving you peace of mind. Comprehensive auditing capabilities provide you with detailed insights into all testing activities, enabling you to maintain full transparency and easily demonstrate compliance. You also have granular role management features, allowing you to precisely control access and permissions, ensuring that the right people have the right level of access at all times. With centralized credential management, you can streamline security processes and reduce the risk of unauthorized access, making it easier than ever to manage and protect sensitive testing data.

Powered by the UiPath Platform

When you choose UiPath Test Cloud, you’re not just getting a standalone testing solution–you’re tapping into the power of the entire UiPath Platform™. This opens up a world of possibilities for streamlining your testing processes and boosting your overall automation efforts. You’ll benefit from shared and reusable components across teams, allowing you to leverage expertise and reduce duplication of effort. EDF Renewables, for example, achieved 75% component reuse by leveraging testing capabilities within the UiPath Platform. With access to the UiPath Marketplace, you’ll have a wealth of prebuilt solutions at your fingertips, accelerating your testing initiatives. Access to snippets and libraries empowers you to create modular, reusable code that can be easily shared and maintained across your organization. Plus, you can leverage centralized object repositories, which simplify test maintenance and improve consistency across your automation projects. Additionally, the robust asset management capabilities ensure that you can efficiently organize, version, and deploy your automation assets enterprise-wide, maximizing the value of your organization’s investment in the UiPath Platform™.

The benefits of agentic testing with UiPath Test Cloud

No matter your role or rank at your organization, you can start reaping the benefits of Test Cloud for agentic testing right away. As a CIO, you’ll experience increased efficiency, reduced costs, and better resource utilization, ultimately leading to faster time-to-market and enterprise-wide automation. Testing team leads will benefit from improved consistency and reliability, increased productivity, and better defect detection, while standardizing testing processes and achieving unprecedented scalability. For testers, Test Cloud offers increased accuracy and efficiency, enhanced test coverage, and faster feedback loops, resulting in higher job satisfaction. The tangible benefits are clear: based on an in-depth study conducted by IDC, customers using UiPath for testing have achieved $4M average annual savings per customer, 529% three-year return on investment, and 6 months payback on investment.

With agentic testing powered by Test Cloud, all roles will enjoy accelerated test cycles, deeper test coverage, and reduced risk, all while realizing significant cost savings and resource optimization. This comprehensive and adaptive testing approach will empower your organization to deliver high-quality software faster than ever before, accelerating your time to value and giving you a competitive edge in today’s fast-paced software landscape. This vision of AI-augmented testing is not just theoretical; forward-thinking organizations like State Street are already anticipating how it will transform their testing processes.

The future of agentic testing

Test Cloud isn’t just built for the testing you know today—it’s built for where testing is going. With Test Cloud, you’re not just keeping up with increasing testing demands—you’re staying ahead. Get started with UiPath Test Cloud by signing up for the trial today.

Author

Sophie Gustafson

Product Marketing Manager, Test Cloud, UiPath

Filed Under: EuroSTAR Conference Tagged With: 2025, EuroSTAR2025

Real-World Data vs. Fake Data: Choosing the right strategy for effective testing

April 1, 2025 by Aishling Warde

Testing environments play a critical role in software development, ensuring applications function correctly before release. To achieve this, having test data that simulates real-world scenarios is essential. However, the choice between “fake data” and “real-world data” sparks an interesting debate, as each approach offers distinct benefits and challenges.

In this article, we will explore the key differences between these two types of data, analyze their benefits and challenges, and ultimately highlight how a strategic combination of both can optimize the testing process, ensuring accuracy, security, and efficiency in development environments.

What is real-world data?

Anonymized real-world data is derived from production environments, ensuring it does not contain personally identifiable information while complying with regulations such as GDPR, CCPA, LPDP, and others.

These datasets offer a high degree of realism, as they preserve referential integrity, maintain the natural complexity of real-world scenarios, and accurately reflect user behavior, system interactions, and business logic. Additionally, real-world data naturally exhibits aging, reflecting how information changes over time and capturing historical trends and patterns that influence system behavior.

By leveraging real-world data, organizations can test applications under conditions that closely resemble actual usage, improving the reliability and effectiveness of their testing processes.

What benefits do real-world data offer?

Using real-world data provides significant advantages for your organization:

  • Captures the complexity of real-world behavior, including intricate patterns, sudden fluctuations, and inherent biases while ensuring data privacy.
  • Maintains appropriate statistical distribution and frequency.
  • Preserves relationships and interdependencies between elements, allowing comprehensive “end-to-end” testing.
  • Reduces the gap between development, testing, and production environments.
  • Facilitates integration testing with other systems under production-like conditions.
  • Provides immediate availability and reusability.

Challenges of using real-world data

Working with anonymized real-world data is not without difficulties. Identifying the right data for each test case, anonymizing it effectively, and delivering it on demand to the testing environment are key challenges, especially in complex and costly environments with large volumes of data. Managing real-world data requires robust tools to ensure that no sensitive information is exposed and that masking processes remain effective, as well as addressing other critical challenges in test data management.
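
One common technique behind such tooling is deterministic masking: the same real value always maps to the same pseudonym, so foreign-key relationships survive anonymization. A minimal sketch of the idea on an invented two-table example (not any vendor's actual implementation; note the salt must be kept secret to resist re-identification):

```python
import hashlib

def pseudonymize(value: str, salt: str = "per-project-secret") -> str:
    """Deterministically replace a sensitive value: the same input always
    yields the same token, so joins across tables still line up."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:8]
    return f"user_{digest}"

customers = [{"id": "alice@example.com", "plan": "gold"}]
orders = [{"customer_id": "alice@example.com", "total": 42.0}]

masked_customers = [{**c, "id": pseudonymize(c["id"])} for c in customers]
masked_orders = [{**o, "customer_id": pseudonymize(o["customer_id"])} for o in orders]

# Referential integrity is preserved: both tables hold the same token.
assert masked_customers[0]["id"] == masked_orders[0]["customer_id"]
```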

Synthetic data

The term “fake data” or “synthetic data” is widely used across industries but lacks a universally accepted definition. Different sectors and vendors interpret this concept in various ways depending on their testing needs and available technologies. While some consider synthetic data as manually created datasets, others define it as AI-generated data, or even simply masked real data. As these variations can create confusion, understanding the most common approaches provides greater clarity about what synthetic data really means.

Some of the most common definitions include:

  • Traditionally created data: Data generated manually or with traditional tools such as spreadsheets, scripts, or business APIs. While quick to produce, it often lacks complexity, is prone to errors, and becomes costly over time.
  • AI-Generated data: Data created by AI models trained on real-world patterns. Although it can mimic realistic behaviors, its reliability remains limited for mission-critical applications. For the time being, there is no evidence of successfully using this approach for testing business support systems.

Synthetic data limitations

These approaches to synthetic data generation often fall short when it comes to accurately simulating production environments, facing critical limitations such as:

  • Lack of aging: No representation of time-based changes.
  • Limited complexity: Misses intricate, real-world dependencies.
  • Absence of rare scenarios: Struggles to simulate edge cases.
  • No technical debt: Fails to reflect legacy patterns and old system quirks.
  • Unrealistic data: Lacks inconsistencies found in production.
  • Reduced data richness: Missing the diversity of real-world interactions.
  • Insufficient volume: Smaller datasets than real production environments.
  • Inaccurate data distribution: Does not replicate real-world patterns.

These gaps make these synthetic data approaches unreliable for testing environments that aim to mimic production conditions accurately.

How does icaria Technology generate high-quality synthetic data?

To overcome these limitations, icaria Technology has developed a model-based synthetic data approach that ensures realistic, secure, and scalable datasets for high-quality testing environments. This approach allows us to create high-quality test data that mirrors real-world conditions without compromising security, compliance, or performance.
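
icaria Technology's model-based approach is proprietary, but the general idea can be illustrated: declare (or derive from production) a model of value domains, frequencies, and business rules, then sample new rows from that model rather than copying real records. A deliberately tiny sketch with an invented schema:

```python
import random

# A hand-written "model" of the data: value domains, observed frequencies,
# and one business rule, standing in for what a tool would derive.
MODEL = {
    "country": (["ES", "FR", "DE"], [0.60, 0.25, 0.15]),
    "plan": (["basic", "premium"], [0.80, 0.20]),
}

def synthesize_row() -> dict:
    row = {
        field: random.choices(values, weights)[0]
        for field, (values, weights) in MODEL.items()
    }
    # Business rule preserved by the model: premium implies a credit limit.
    row["credit_limit"] = 5000 if row["plan"] == "premium" else 0
    return row

dataset = [synthesize_row() for _ in range(1000)]
print(dataset[:3])
```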

Advantages of icaria Technology’s synthetic data

Our approach to synthetic data offers significant advantages for software testing environments. By replicating the structure, patterns, and complexity of real-world data while ensuring the exclusion of sensitive information, this method strikes a balance between realism, scalability, and security. Here are some key benefits of using our synthetic data:

  • Realistic test scenarios with no privacy risks
    Maintains relationships, distributions, and behaviors from real-world datasets without exposing PII. By generating this data from pre-existing models, we ensure that test environments mirror production scenarios.
  • Consistency across testing stages
    Ensures smooth transitions between development, staging, and production phases by preserving referential integrity and data relationships.
  • Scalability and flexibility
    Generates large volumes of test data tailored to specific needs, supporting extensive performance and scalability tests.
  • Customizable for testing requirements
    Allows the generation of datasets designed for edge cases, rare scenarios, or new application features.
  • Cost efficiency
    Reduces manual effort and minimizes rework costs through automated processes, ultimately saving resources during the testing lifecycle.

When to use real-world data and when to use synthetic data?

After reviewing what real-world data is and our definition of synthetic data, the question arises: which one should we use in testing?

Real-world data is the best option for testing due to its richness and complexity, accurately reflecting system behavior and user interactions. Since this data already exists, it is often more efficient to use it rather than generating new datasets, which can introduce additional challenges and complexities.

However, this does not mean synthetic data has no place in a robust testing strategy. In certain situations, our synthetic data approach can be particularly useful, such as:

  • When testing requires data that is not yet available in existing application environments. For instance, during new developments involving changes to the application’s data model, there will be no existing data for the new model, necessitating synthetic data generation.
  • When specific datasets are rare but essential for testing. Some scenarios occur infrequently, meaning only one or two real-world examples exist. In these cases, synthetic data can generate additional instances, ensuring all testers and developers have access to the necessary data.

The perfect combination for reliable testing with icaria TDM

In the high-complexity environments managed by icaria Technology, particularly in icaria TDM, the reality is significantly more complex. These applications function in mission-critical domains where the margin for error is nonexistent.

By combining real-world data with synthetic data, organizations can create a balanced and efficient approach to test data management that ensures accuracy, compliance, and scalability.

Choosing the right type of data for each scenario, or combining both, helps companies improve test quality, comply with regulations, and optimize resources. With icaria TDM, achieving this balance has never been easier. This approach not only enhances testing efficiency but also strengthens confidence in systems, ensuring applications meet the highest quality standards before deployment.

Author

Enrique Almohalla

Enrique Almohalla, leading icaria Technology as CEO, brings a wealth of experience in TDM methodologies, cultivated through over twenty years of directing software development and testing projects. His significant involvement in Test Data Management, marked by continuous innovation and application, underscores his deep understanding of the field. Additionally, his position as an associate professor at IE Business School in Operations and Technology melds his hands-on experience with academic insights, offering a comprehensive perspective on business management.



icaria Technology are Exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: Big Data, EuroSTAR Conference Tagged With: EuroSTAR Conference

Understanding Model-Based Testing: Benefits, Challenges, and Use Cases

March 17, 2025 by Aishling Warde

For test engineers seeking a systematic and organized approach to testing, model-based testing offers a powerful toolset. This method involves working with models that guide the testing process.

Besides creating models of tests, you can model, for example, application behavior, application structure, data, and environment. In this article, our core focus will be on testing – so, thinking about what aspects to test and how to do that drives the modeling.

Let’s delve deeper into what model-based testing entails, its benefits, challenges, and scenarios where it is most effective.

What is Model-Based Testing?

Model-based testing is a testing approach that revolves around the use of models. Unlike traditional testing, which involves scrutinizing every intricate detail, model-based testing takes a more general approach. It allows you to concentrate on the core functionalities without being bogged down by all the little details.

Let’s take an example – say that you’re testing an address book application. In this case, you could model the following actions:

  • Start the application
  • Create a new file
  • Add contacts
  • Remove contacts
  • Save the file
  • Open the file
  • Quit the application

The idea is not to model the whole application, as a developer would, but rather to get a grasp of the test cases you need to prioritize. This helps you organize your test cases and, ultimately, your test scripts, which can then be used to automate the test cases.
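
To make this concrete, the address-book actions above can be expressed as a small state-transition model and walked to generate abstract test cases. A minimal Python sketch (the states, transitions, and step limit are illustrative, not Squish-specific):

```python
import random

# Transitions of the address-book model: state -> {action: next state}
MODEL = {
    "closed": {"start the application": "no file"},
    "no file": {"create a new file": "file open",
                "open the file": "file open",
                "quit the application": "closed"},
    "file open": {"add contacts": "file open",
                  "remove contacts": "file open",
                  "save the file": "file open",
                  "quit the application": "closed"},
}

def generate_test(max_steps: int = 6, seed: int = 0) -> list[str]:
    """Random walk over the model, yielding one abstract test case."""
    rng = random.Random(seed)
    state, steps = "closed", []
    for _ in range(max_steps):
        action = rng.choice(list(MODEL[state]))
        steps.append(action)
        state = MODEL[state][action]
        if state == "closed":          # the walk ends when the app quits
            break
    return steps

for i in range(3):
    print(f"Test {i + 1}:", " -> ".join(generate_test(seed=i)))
```

Each generated sequence can then be mapped onto concrete test scripts, which is where test automation tools come in.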

Benefits of Model-Based Testing

  1. Helps focus on the things that matter
    By focusing on high-level abstractions, model-based testing helps you avoid getting lost in the details. This strategic approach allows you to skip unnecessary test cases, optimizing testing efforts and resources.

Ultimately, this leads to higher-quality tests that accurately represent critical functionalities.

  2. Makes communication easier
    Models help in finding a common understanding of the requirements and detecting potential misunderstandings. They make it easier to convey testing needs to both internal and external stakeholders.

For example, with models, you could show the management what your test process looks like and why additional resources are needed. Or you could explain to the development team how you’re currently testing and discuss why something is not working as it should.

The visual aid that models offer is often more effective than discussing the problems verbally or looking at abstract test scripts.

Better communication in the early stages of the development process also leads to early detection of bugs – our benefit number 3.

  3. Avoid defects in the early stages of the product
    In the traditional development process, the steps of requirements, design, and testing are performed sequentially using a variety of tools. As testing is the final stage, most defects – accumulated throughout the previous stages – are caught quite late in the process. This makes fixing them time-consuming and costly.

Model-based testing is one methodology that further enables so-called shift-left testing. This refers to a shift in the timeline: testing can begin as early as the requirements phase.


Models can be shared with project stakeholders before implementation to verify requirements and to identify gaps within them. Not being able to model something might also reveal a problem area.

As a result, defects are caught and removed earlier, lowering the total cost of development. According to MathWorks, the savings can range from 20 to 60% when compared with traditional testing methods.

  4. Effort reduction in implementation and maintenance

While modeling requires initial effort, it significantly reduces the effort needed for implementation and maintenance.

Model-based testing utilizes the modularization of test cases. In the case of traditional testing, when some element of your application changes, you might have to change every individual test case. With model-based testing, you can use the building blocks, like Legos, and fixing one single block will bring all your test cases up to date.
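
A minimal sketch of this building-block effect, with a hypothetical driver object standing in for a real Selenium or Playwright wrapper: the shared login block is fixed in one place, and every test that reuses it is immediately up to date.

```python
class FakeApp:
    """Stand-in driver so the sketch runs; a real suite would pass in a
    Selenium or Playwright wrapper instead."""
    def goto(self, path): print("goto", path)
    def fill(self, field, value): print("fill", field)
    def click(self, target): print("click", target)

def login(app, user="tester", password="secret"):
    """Shared building block: if the login page changes, fix it only here."""
    app.goto("/login")
    app.fill("username", user)
    app.fill("password", password)
    app.click("submit")

def test_create_contact(app):
    login(app)                      # reused block, not copy-pasted steps
    app.click("new-contact")
    app.fill("name", "Ada Lovelace")
    app.click("save")

test_create_contact(FakeApp())
```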

Also, there are time-saving benefits as you learn to operate in a more organized way. You can detect the highest priority tests – and avoid any redundant work.

Challenges of Model-Based Testing

  1. Mindset transition

Transitioning from a traditional testing process to model-based testing requires a period of adjustment and learning.

  2. Specific skill set required

Not all test engineers may be proficient in abstract modeling. Creating effective models demands skills such as abstract thinking and generalization. To succeed, you need to keep a bird’s eye view of the whole testing process.

  3. Abstraction level challenge

Selecting the right level of abstraction is crucial. Too abstract, and tests may become less useful; too detailed, and the model may be challenging to work with.

However, abstraction inherently involves simplification and can lead to the loss of critical details, potentially overlooking important aspects.

When to Choose Model-Based Testing?

While model-based testing is a powerful tool, it may not be suitable for every scenario. If you’re dealing with a straightforward application, it may be overkill, potentially leading to over-engineering.

However, for complex software systems and teams capable of working at abstract modeling levels, model-based testing proves invaluable.

Conclusion

Model-based testing is a powerful approach that empowers test engineers to focus on testing the critical aspects of the application under test. By leveraging models as high-level abstractions, teams can enhance test quality, reduce effort, and improve communication.

While it requires a shift in mindset and specific skills, the benefits far outweigh the challenges, particularly in complex software environments. As with any testing methodology, the key lies in thoughtful application and adaptation to suit specific project needs.

In the second part of this article, we dive into model-based testing best practices and testing tools. There you will find a real-world example of how to achieve model-based testing in Squish.

Author

Sebastian Polzin

Sebastian Polzin, Product Marketing Manager,
Qt Group, Software Quality Solutions



Qt Group are Gold Partners in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: EuroSTAR Conference, Gold, Software Testing Tagged With: EuroSTAR Conference, software testing tools

How GenAI is Shaping the Future of Software Testing

March 12, 2025 by Aishling Warde

As software development accelerates, Quality Assurance teams are facing unprecedented pressure to deliver both speed and quality. Despite exponential innovation in software development over the past couple of decades, QA teams are still grappling with the fact that testing is not happening at the same speed as development. A recent Testsigma webinar surveying 300 QA professionals revealed that 57% consider time management—building, executing, and maintaining tests—their biggest challenge.

The Challenges Holding Us Back

Modern QA teams face several persistent challenges that slow down testing processes:


Slow Testing Cycles
Testing remains bottlenecked by manual-heavy processes and an overemphasis on building perfect automation frameworks. This traditional approach cannot match the speed of modern development cycles.


High Maintenance Overhead
Traditional test automation requires constant script updates, which create significant maintenance costs and technical debt and divert resources from actual testing activities.


Insufficient Test Coverage
Human-defined test cases often fail to anticipate all edge cases, leading to undetected defects. This limitation becomes more pronounced as software complexity increases.

Why Do These Challenges Exist?

A couple of key factors contribute to this misalignment:

  • Manual testing – Don’t get me wrong: manual testing is not a bad word at all, as some vendors make it out to be. One can never hope to have 100% automation, and anyone claiming otherwise is a snake oil salesperson. That said, automation is one of the best things that has happened to the software testing world. The idea is to use automation as a tool to solve everyday tasks at scale.
  • Automation as the silver bullet – Related to the point above, automation is often treated as the one solution to all software testing problems. The industry’s push toward automation has had unexpected consequences. Many skilled testers were forced to become programmers or rely heavily on development teams for test automation. This shift often came at the expense of core testing competencies: domain knowledge and user behavior understanding.

The lure of code-driven testing

The world has come a long way since the first line of code was written. Code has truly overhauled how we live and work, and the possibilities it has brought about are endless. Yet the heavy focus on code-driven automation has overshadowed essential testing skills, and the industry has drifted away from business-driven and user-behavior-driven testing approaches.

The shift to business-driven testing

A promising trend we are observing at Testsigma is the shift to business-driven testing and technology usage as a means to an end. With the advent of codeless technologies and GenAI, it is becoming increasingly easy for software testing teams to automate without having to build a test automation framework that takes months, and without writing code to actually script test cases. This allows testers to focus on their core strengths: domain expertise and user understanding.

GenAI-powered testing

Generative AI, combined with truly codeless test automation, offers new possibilities for rapid testing without coding requirements. Powerful tools like Testsigma Copilot allow testers to:

  • Focus on core understanding of the business and user, and not necessarily on learning the technology used to build frameworks.
  • Use prompt engineering to provide business context, to make the testing itself better
  • Guide AI systems to think from a user’s perspective, and use AI to uncover edge cases that might otherwise get missed

While using GenAI to generate test cases and test scenarios wins half the battle, the magic lies in being able to automate them codelessly as well. And that’s where truly codeless test automation platforms like Testsigma help, as one can perform end-to-end test automation at scale without writing a single word of code. With agentic execution, the deal gets sweeter as AI optimizes for the right test cases to be executed, and across the right resources, while self-healing tests to account for any changes that might have been shipped.
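
To illustrate the prompt-engineering point above in the simplest possible terms (this is not Testsigma's API; `llm_complete` is a placeholder for whichever LLM provider you wire in, and the business context is invented):

```python
def llm_complete(prompt: str) -> str:
    """Placeholder for any LLM completion call."""
    raise NotImplementedError("connect your LLM provider here")

BUSINESS_CONTEXT = (
    "Domain: online pharmacy. Regulatory rule: prescription items cannot be "
    "added to a guest checkout. Peak usage is on mobile, in the evenings."
)

def draft_test_scenarios(user_story: str) -> str:
    prompt = (
        f"{BUSINESS_CONTEXT}\n\n"
        f"User story: {user_story}\n\n"
        "Acting as an experienced tester, list test scenarios from the "
        "user's perspective, including edge cases a happy-path suite would "
        "miss. One scenario per line."
    )
    return llm_complete(prompt)
```

The tester's contribution here is the business context and the user perspective, not the plumbing, which is exactly the shift back to business-driven testing described above.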

Pitfalls in GenAI-driven test automation

It is crucial to remember that, just like any piece of technology, GenAI is a tool. And as the saying goes, a fool with a tool is still a fool. The ones that are able to get the tool to work for them are the ones that win. Mastering the tool itself is not the goal, but how to make it work for the business is.

The other risk in GenAI-driven test automation is that the quality completely relies on the inputs we provide. Again, this is a golden opportunity for software testers, as it forces us to think like users, which is the basic trait of any software tester anyway.

Looking ahead

The future of software testing lies not in writing more automation scripts, but in leveraging AI to handle complexity while humans focus on customer needs and business value. This shift represents an opportunity for software testing teams to:

  • Return to business-centric and customer-centric testing approaches
  • Reduce technical barriers to effectively democratize testing and make the industry inclusive
  • Enable faster, smarter, and more comprehensive quality assurance, to empower software engineering teams to release quality software with confidence!

As software complexity continues to grow, the industry must embrace solutions that streamline testing while keeping quality control in the hands of those who best understand the product and its users.

Author

Narain Muralidharan

Narain Muralidharan is the Director of Product Marketing at Testsigma. Prior to Testsigma, Narain led various marketing teams at SaaS unicorns like BrowserStack and Freshworks.

Testsigma are Gold Partners in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: EuroSTAR Conference, Software Testing Tagged With: EuroSTAR Conference, software testing conference

How to Measure the Value of Software Testing EXPO Trade Show Participation

September 11, 2024 by Lauren Payne

Participating in trade shows like the EuroSTAR Software Testing EXPO can be a significant investment of time, money, and resources. However, the potential benefits, including increased brand visibility, lead generation, and networking opportunities, can be substantial. To ensure that your participation is worthwhile, it is crucial to measure the value derived from the event. This blog post outlines Clare’s recommended key metrics and strategies to help you evaluate the return on investment (ROI) from participating in the EuroSTAR Software Testing EXPO.

Set Clear Objectives

Before delving into metrics, it’s essential to define your goals for participating in the trade show. These goals should align with your overall marketing strategy. Common objectives include:

  • Lead Generation: Capturing contact information of potential clients.
  • Brand Awareness: Increasing visibility and recognition of your brand.
  • Networking: Building relationships with industry peers and potential partners.

Setting clear objectives when participating in an EXPO helps you focus your efforts, measure success, allocate resources effectively, and enhance overall strategic alignment with business goals. Clear objectives also help us guide you toward the EXPO package that best suits your business goals.

Pre-Event Preparations

To measure the value effectively, start tracking metrics before the event:

  • Budget Allocation: Document all expenses related to the event, including booth costs, travel, accommodation, marketing materials, and promotional items.
  • Marketing Reach: Assess your pre-event marketing efforts, such as email campaigns, social media promotions, and blog posts. Depending on which EXPO package you have opted for, there may be pre-conference marketing activations that will also help boost your brand’s attendance and visibility.

Lead Generation

One of the primary reasons for attending the EXPO is to generate leads. Collecting leads is a very important part of exhibiting, as is forging connections and networking. Key metrics to track include:

  • Number of Leads Collected: Count the total number of leads gathered during the event.
  • Lead Quality: Evaluate the quality of leads based on criteria such as job titles, company size, and level of interest.
  • Lead Conversion Rate: Track how many leads convert into actual sales or follow-up meetings; this depends on the actions taken post-event.

Brand Awareness & Engagement

Increasing brand awareness and engagement is another crucial objective. Things to consider:

  • Booth Traffic: Monitor the volume of visitors to your booth each day. Try different initiatives to encourage footfall, and get involved in the EuroSTAR EXPO Passport around the EXPO. Other ideas include running a competition or hosting a testing challenge.
  • Social Media Engagement: Track mentions, shares, likes, and comments on your social media posts related to the event. Always be sure to share your participation in the event, as this helps build awareness and visibility for your brand.
  • Media Coverage: Monitor any press coverage or mentions in industry publications resulting from your participation.
  • Swag: Keep your brand at the forefront of attendees’ minds with a cool piece of conference swag for them to take home. Our attendees love conference swag.

Market Research & Networking

Understanding the software testing industry trends and building relationships can be invaluable. Things to be conscious of:

  • Competitive Analysis: Mingle with other exhibitors in the EXPO. Having conversations helps you gain insights into other companies’ products and services, challenges, successes, pricing, and strategies.
  • Partnership Opportunities: Count the number of potential partnership discussions initiated.
  • Feedback and Insights: Collect feedback from conversations with attendees and industry experts to identify trends and areas for improvement.

Post-Event Follow-Up

Effective follow-up is critical to maximising the value of EXPO participation:

  • Timely Follow-Up: Ensure that leads are contacted promptly after the event.
  • Nurturing Campaigns: Implement nurturing campaigns to keep leads engaged and move them through the sales funnel.
  • Feedback Surveys: Conduct surveys to gather feedback from attendees and improve future participation.

Conclusion

Measuring the value of your participation in the EuroSTAR Software Testing EXPO requires a systematic approach and a focus on relevant metrics. By setting clear objectives from the beginning, tracking key metrics, and continuously improving your strategy, you can ensure that your investment in the EuroSTAR Software Testing EXPO will deliver substantial returns. With diligent measurement and follow-up, you can leverage EXPO participation to boost your brand, generate quality leads, and drive business growth.

To find out how you can achieve your marketing goals and more at a EuroSTAR Conferences EXPO, speak with Clare.

Clare Burke

EXPO Team, EuroSTAR Conferences

With years of experience and a passion for all things EuroSTAR, Clare has been a driving force behind the success of our EXPO. She’s the wizard behind the EXPO scenes, connecting with exhibitors, soaking up the latest trends, and forging relationships that make the EuroSTAR EXPO a vibrant hub of knowledge and innovation. 

t: +353 91 416 001 
e: clare@eurostarconferences.com 

Filed Under: EuroSTAR Conference, EuroSTAR Expo, Software Testing, Sponsor Tagged With: EuroSTAR Conference, Expo

Top 10 Quality Issues to Solve at EuroSTAR 2024

April 23, 2024 by Lauren Payne

As we approach another EuroSTAR in Stockholm, many of us in IT and testing are reflecting on how we can improve our processes and strategies. It will be halfway through 2024, a time of year when doubts and concerns can creep in about our testing goals and improvements. 

As you review your software quality strategy, I’d like you to reconsider our impulse towards ever-increasing test automation. Are we falling into the trap of trying to eat faster to lose weight? By only accelerating our efforts, we fail to confront the real root causes of testing inefficiencies and bugs.

You can’t automate quality into software

Just as diet fads promise thinness through gimmicks, we’ve been sold a fantasy. It promises us that more test automation will solve all our quality problems. But, while judicious automation provides value, many teams over-invest in automation at the cost of broader quality blockers. 

When you have a hammer, everything looks like a nail, so teams hammer away endlessly to construct vast automated architectures. Meanwhile, quality lingers at the same mediocre levels.

10 Software Quality Issues to Address at EuroSTAR 2024

A common set of fundamental issues plague software projects. Teams often cite problems like:

  1. Confidence and Stability – Frequent defects erode trust in releases
  2. Defects into Production – Poor protection of live environments
  3. Insufficient Test Time – Perpetual last-minute “hardenings”
  4. Release Uncertainty – Go/no-go decisions go down to the wire
  5. Failing Requirements – Poorly defined scope leads to endless clarifications
  6. Developer Rework – High levels of unplanned work
  7. Team Misalignment – Lack of transparency across functional groups
  8. Knowledge Silos – Bottlenecks form around key people or tools
  9. Bloated Testing – Massive, unwieldy automation suites requiring heavy maintenance
  10. Technical debt – Volumes of (re)work build over time, with insufficient knowledge to tackle it

Rather than focus on accelerating test execution speed, we need to confront why these problems arise in the first place. Increasing execution automation acts as a bandage; quality gaps stem from deeper process and strategy issues.

From silver bullets to software quality

At EuroSTAR 2024, let’s resolve to understand these root causes and thoughtfully solve them. For example, what drives unstable requirements? Is our analysis happening too late? What drives last minute surprises? Are we integrating and testing incrementally? Do our teams have transparency to coordinate their efforts? Are our tools and environments configured efficiently?

Thoughtful process analysis and improvement is less flashy than automation. Yet, it is far more impactful. Techniques like value stream mapping can uncover waste and barriers. Then, we can apply lean principles like limiting work in progress, optimizing flow, and amplifying feedback loops.

Rather than mindlessly generate more test cases, we should carefully curate automated checks to maximise value. Shifting left helps prevent defects, while good pipelines and test data strategies better isolate changes to fail fast. Teams skilled in exploratory testing and bug advocacy can further spotlight weaknesses early.

A measured (and measurable) approach to software quality

Let’s ring in EuroSTAR 2024 with renewed discipline against reactive thinking. Measure first, understand next, then optimize sustainably. Partner with stakeholders to align priorities. Anchor automation in business needs, not false promises of all-encompassing test suites. Spend smart to conserve budget for high-impact interventions.

Test excellence comes not from hasty automation, but thoughtful rigor, transparency, and accountability. Progress may seem slower, but leads to stable, high-velocity teams. Development, testing, and operations must come together as one delivery team sharing data, tools, and practices.

By taking a measured, evidence-based approach, we can target the disease rather than just treat the symptoms. Just as sustainable diets come from lifestyle changes, let’s commit to curing our quality ills through systems thinking. 

This year, at EuroSTAR, let’s fix the fundamentals. Our automation will still be there to serve us, at sustainable velocities and capacities serving downstream needs. Set aside reactionary tactics, and instead bank quality through proactive strategies. Another EuroSTAR brings new perspectives, if we remain open to self-reflection and growth.

Restoring Confidence and Alignment with Curiosity Modeller

I speak to many organizations who experience the recurring quality issues and process misalignments discussed in this blog, each eroding their release confidence.

These challenges all have common roots:

  1. Lack of transparency
  2. Incomplete system comprehension
  3. Inadequate feedback loops
  4. Unconnected teams

Too often software gets built fast then tested slow. Teams lack shared artifacts to capture decisions and expected behaviours, undermining unified understanding.

Curiosity Modeller tackles these systemic issues by making system behaviour explicit early through collaborative models. These living models form the core artifact driving understanding, alignment and test generation.

Curiosity Modeller restores confidence and release quality by:

  • Visualizing expected functionality clearly across groups – no more hidden assumptions or differing interpretations of requirements.
  • Auto-generating optimal test cases to validate actual vs intended behaviour – preventing defects via early testing and signalling.
  • Producing regenerative tests tied directly to the models – no more realigning stale regression suites or maintaining copious test automation artifacts.
  • Enabling behaviour simulation for rapid prototyping – failing fast to prevent downstream rework.
  • Integrating with test execution and auto-generating Test Automation – overcoming misalignment, endless maintenance and skills silos.
  • Supporting API testing to safely exercise business logic – going beyond fragile end user flows.
  • Generating high-value test data to focus coverage on key scenarios – informed by risk models.

Shift left to deliver quality

Instead of intensifying downstream testing, Curiosity Modeller shines a light starting left in the lifecycle. Visual flows form the central artifact aligning groups on system behaviour, while preventing defects before code gets written. This proactive approach restores trust, accelerates releases, facilitates coordination and uplifts quality engineering. It delivers confidence through deep comprehension.

Find us at EuroSTAR 2024!

The Curiosity team will be in the EuroSTAR Expo hall in Stockholm – drop by to discuss how you can build software confidence early and throughout your delivery pipeline. Before then, why not head to our website to learn more about Curiosity Modeller, try it for yourself, and talk to us about your quality needs?

Author


Rich Jordan

Rich Jordan has spent the past 20 years leading change within the testing industry, primarily within Financial Services. He has led enterprise transformations and quality teams who have won awards in both Testing and DevOps categories. Rich has been an advocate of model-based test automation and test data innovation for over a decade, and joined Curiosity in November 2022.

Curiosity Software is an Exhibitor at EuroSTAR 2024. Join us in Stockholm.

Filed Under: EuroSTAR Conference Tagged With: EuroSTAR Conference, Expo

Power Up Your Test Automation Practices With AI: Unlock Key Use Cases

April 9, 2024 by Lauren Payne

With the rapid pace of development cycles and the complexity of modern software systems, manual testing alone often can’t meet the demands of quality assurance. This is where test automation comes into play, offering efficiency, accuracy, and scalability. 

However, even with automation, challenges can still arise, such as maintaining test scripts, handling dynamic user interfaces, and detecting subtle defects. Enter AI, a game-changer poised to revolutionize test automation.

By infusing AI and ML into test automation, testers can build better automations faster through supercharged productivity, as well as improve accuracy and time-to-value through combining Generative AI and Specialized AI. Plus, testers can unlock new use cases by building AI-powered automations. 

So, what are some of the top uses for AI and ML in testing that can supercharge your application testing practices?

Deploy an agent that performs testing fully autonomously

An AI-powered agent can seamlessly tackle the challenge of finding critical problems in your applications, as it can interact with an application constantly. Then, the agent can build a model of your application, discover relevant functionality, and find bugs related to performance, stability, and usability. An agent can also aid in creating a resilient object repository while navigating through a target application, gathering reusable controls for future test case development. The potential of AI doesn’t stop there—the agent can then continuously verify and refresh controls within an object repository, enabling self-healing and maintaining automated tests. 

Generate automated low-code and coded tests from step-by-step manual tests

Have manual tests that you want to convert to automated tests? With the power of AI, you can accelerate automation by generating automated low-code and coded tests from manual tests, as well as leverage a flexible automation framework to ensure the resilience of your automated tests. And remember the object repository that your AI-fuelled agent assisted with creating? Equipped with this object repository, you can use AI to consider and smartly reuse any kind of object, such as buttons, tables, and fields.

Create purposeful and complex test data

With AI-infused large language models, you can supercharge your data through enhanced synthetic test data generation for manual and automated test cases. Using AI also enables you to create meaningful test data faster, allowing you to handle intricate data dependencies across multiple test data dimensions.

Streamline automated localization testing by leveraging semantic translation

By integrating AI into your test automation practices, you can leverage semantic automation and translation to remove the need for creating separate test cases for each language. The result? Maximized efficiency through seamless automated localization testing. Plus, you can run your automated test cases in different languages, allowing you to expand and scale your testing capabilities globally.
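
In miniature, the pattern is one logical test executed per locale, with locale-specific strings supplied by a translation step. In the sketch below the table is hand-written where an AI semantic-translation step would populate it automatically; the strings, callbacks, and flow are all invented for illustration:

```python
# One logical checkout test, executed once per locale.
TRANSLATIONS = {
    "en": {"checkout": "Checkout", "confirmation": "Order placed"},
    "de": {"checkout": "Zur Kasse", "confirmation": "Bestellung aufgegeben"},
    "fr": {"checkout": "Commander", "confirmation": "Commande passée"},
}

def run_checkout_test(locale: str, click, read_banner) -> bool:
    strings = TRANSLATIONS[locale]
    click(strings["checkout"])        # identical steps in every language
    return read_banner() == strings["confirmation"]

# Demo with fake driver callbacks; a real run would pass UI-driver hooks.
for locale in TRANSLATIONS:
    passed = run_checkout_test(
        locale,
        click=lambda target: None,
        read_banner=lambda loc=locale: TRANSLATIONS[loc]["confirmation"],
    )
    print(locale, "passed" if passed else "failed")
```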

Overall, there’s unlimited potential for AI to supercharge continuous testing across the entire lifecycle—from defining stories, to designing tests, to automating and executing tests, to analyzing results.

UiPath Test Suite for AI-powered test automation

UiPath Test Suite, the resilient testing solution powered by the UiPath Business Automation Platform, offers production-grade, AI-fueled, low-code, no-code, and coding tools so you can automate testing for any technology while still managing testing your way. Later this year, you’ll be able to unlock AI-infused use cases for test automation, such as test generation, coded automations, and test insights, with Autopilot for Test Suite.

Author


Sophie Gustafson, Product Marketing Manager, UiPath Test Suite

Sophie Gustafson has worked at UiPath for two years and is currently a product marketing manager for Test Suite. Sophie has previous experience working in the consulting and tech industries, specializing in content strategy, writing, and marketing.

UiPath is an EXPO Platinum Partner at EuroSTAR 2024. Join us in Stockholm.

Filed Under: EuroSTAR Conference, EuroSTAR Expo, Platinum, Sponsor, Test Automation Tagged With: 2024, EuroSTAR Conference, Expo, Test Automation

Myth vs. Reality: 10 AI Use Cases in Test Automation Today

March 5, 2024 by Lauren Payne

For decades, the sci-fi dream of simply speaking to your device and having it perform tasks for you seemed far-fetched. In the realm of test automation and quality assurance, this dream is inching closer to reality. With the evolution of generative AI, we’re prompted to explore what’s truly feasible. Embedding AI into your quality engineering processes becomes imperative as IT infrastructures become increasingly complex and integrated, spanning multiple applications across business processes. AI can help alleviate the daunting tasks of knowing what to test, how to test it, creating relevant tests, and deciding what type of testing to conduct, boosting productivity and business efficiency.

But what’s fact and what’s fiction? The rapid evolution of AI makes it hard to predict its capabilities accurately. Nevertheless, we’ve investigated the top ten key AI use cases in test automation, distinguishing between today’s realities and tomorrow’s aspirations.

1. Automatic Test Case Generation

Reality: AI can generate test cases by analyzing user stories along with requirements, code, and design documents, including application data and user interactions. For instance, large language models (LLMs) can interpret and analyze textual requirements to extract key information and identify potential test scenarios. This can be used with static and dynamic code analysis to identify areas in the code that present potential vulnerabilities requiring thorough testing. Integrating both requirement and code analysis can help generate potential manual test cases that cover a broad set of functionalities in the application.

Myth: But here’s the caveat: many tools on the market that enable automated test case generation create manual tests. They are not automated. Creating fully automated, executable test cases remains a myth and still requires further proof. Additionally, incomplete, ambiguous, or inconsistent requirements may not always generate the right set of tests, and this requires further development. Test cases may not always cover edge cases or highly complex scenarios, nor are they able to cover completely new applications. Analyzing application and user interaction data may not always be possible. As a result, human testers will always be required to check the completeness and accuracy of the test suites to consider all possible scenarios.

2. Autonomous Testing

Reality: Autonomous testing automates the automation. Say what? Imagine inputting a prompt into an AI model like “test that a person below the age of 18 is not eligible for insurance.” The AI would then navigate the entire application, locate all relevant elements, enter the correct data, and test the scenario for you. This represents a completely hands-off approach, akin to Forrester’s level 5 autonomous state.

Myth: But are we there yet? Not quite, though remarkable technologies are bridging the gap. The limitation of Large Language Models (LLMs) is their focus on text comprehension, often struggling with application interaction. For those following the latest in AI, Rabbit has released a new AI mobile phone named r1 that uses Large Action Models (LAMs). LAMs are designed to close this interaction gap. In the realm of test automation, we’re not fully there. Is it all just hype? It’s hard to say definitively, but the potential of these hybrid LAM approaches, which execute actions more in tune with human intent, certainly hints at a promising future.

3. Automated Test Case Design

Reality: AI is revolutionizing test case design by introducing sophisticated methods to optimize testing processes. AI algorithms can identify and prioritize test cases that cover the most significant risks. By analyzing application data and user interactions, the AI can determine which areas are more prone to defects or have higher business impact. AI can also identify key business scenarios by analyzing usage patterns and business logic to auto-generate test cases that are more aligned with real-world user behaviors and cover critical business functionalities. Additionally, AI tools can assign weights to different test scenarios based on their frequency of use and importance. This helps in creating a balanced test suite that ensures the most crucial aspects of the application are thoroughly tested.
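
The weighting idea can be sketched in a few lines: score each test by usage frequency, business impact, and recent defect history, then run the highest-scoring tests first. The formula and numbers below are illustrative, not any vendor's actual model:

```python
def priority(test: dict) -> float:
    """Composite risk score: how often the flow is used, how costly a
    failure would be, and how defect-prone the area has been lately."""
    return test["usage_freq"] * test["business_impact"] * (1 + test["recent_defects"])

tests = [
    {"name": "checkout_happy_path", "usage_freq": 0.9, "business_impact": 10, "recent_defects": 2},
    {"name": "profile_avatar_upload", "usage_freq": 0.1, "business_impact": 2, "recent_defects": 0},
    {"name": "invoice_export", "usage_freq": 0.4, "business_impact": 8, "recent_defects": 1},
]

for t in sorted(tests, key=priority, reverse=True):
    print(f"{priority(t):6.2f}  {t['name']}")
```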

Myth: However, AI cannot yet fully automate the decision-making process in test suite optimisation without human oversight. The complexity of certain test scenarios still requires human judgment. Moreover, AI algorithms are unable to auto-generate test case designs for new applications, especially those with highly integrated end-to-end flows that span multiple applications. This capability remains underdeveloped and, for now, unrealised.

4. Testing AI Itself

Reality: As we increasingly embed AI capabilities into products, the question evolves from “how to test AI?” to “how to test AI, gen AI, and applications infused with both?” AI introduces a myriad of challenges, including trust issues stemming from potential problems like hallucinations, factuality errors, and explainability concerns. Gen AI, being non-deterministic, can produce different outputs for the same input, making its behaviour unpredictable. Untested AI capabilities and AI-infused applications can lead to multiple issues, such as biased systems with discriminatory outputs, failure to identify high-risk elements, erroneous test data and design, misguided analytics, and more.

The extent of these challenges is evident. In 2022, there were 110 AI-related legal cases in the US, according to the AI Index Report 2023. The number of AI incidents and controversies has increased 26-fold since 2021. Moreover, only 20% of companies have risk policies in place for Gen AI use, as per McKinsey research in 2023.

Myth: Testing scaled AI systems, particularly Gen AI systems, is unexplored territory. Are we there yet? While various approaches and methodologies exist for testing more traditional neural network systems, we still lack comprehensive tools for testing Gen AI systems effectively.
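In the meantime, one pragmatic pattern teams use is property-style testing: sample the non-deterministic system repeatedly and assert invariants that must hold for every output, rather than comparing against an exact string. A minimal sketch, where `generate` is a hypothetical wrapper around the model under test:

```python
# Property-style check for a non-deterministic system: sample the
# output distribution and assert invariants, not exact matches.
def generate(prompt: str) -> str:
    """Hypothetical wrapper around the gen AI system under test."""
    raise NotImplementedError("call the model under test here")

def test_refund_policy_answer_is_grounded():
    prompt = "What is our refund window?"
    for _ in range(10):                  # repeated sampling
        answer = generate(prompt)
        assert "30 days" in answer       # factual invariant that must always hold
        assert len(answer) < 500         # guard against rambling outputs
```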

AI Realities in Test Automation Today

The use cases that follow are already fully achievable with current test automation technologies.

5. Risk AI

It’s a significant challenge for testers today to manage hundreds or thousands of test cases without clear priorities in an Agile environment. When applications change, critical questions arise: Where does the risk lie? What should we test or prioritise based on these changes? Fortunately, risk AI, also known as smart impact analysis, offers a solution. It inspects changes in the application or its landscape, including custom code, integrations, and security, and identifies the most at-risk elements where testing should be focused. Employing risk AI leads to substantial efficiency gains in testing: it narrows the testing scope, saving considerable time and cost, while significantly reducing the risk associated with software releases.
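Stripped of the AI layer, the core selection idea fits in a few lines: given a map of which tests exercise which modules, run only the tests touched by a change set. This is a simplification; real smart impact analysis also weighs custom code, integrations, and security impact. All names below are invented for the example.

```python
# Simplified change impact analysis: select tests whose covered
# modules intersect the set of changed files.
coverage_map = {
    "tests/test_checkout.py": {"cart.py", "payment.py"},
    "tests/test_login.py":    {"auth.py"},
    "tests/test_search.py":   {"search.py", "index.py"},
}

changed_files = {"payment.py", "auth.py"}  # e.g. from `git diff --name-only`

impacted = [test for test, modules in coverage_map.items()
            if modules & changed_files]    # non-empty intersection = at risk
print(impacted)  # ['tests/test_checkout.py', 'tests/test_login.py']
```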

6. Self-Healing

By identifying changes in elements at both the code and UI layers, AI-powered tools can auto-heal broken tests after each execution. This allows teams to stabilise test automation while reducing maintenance time and costs. Want to learn more about how Tricentis Tosca supports self-healing for Oracle Fusion and Salesforce Lightning and Classic? Watch this webinar.
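The underlying mechanism can be illustrated with a bare-bones fallback strategy in Selenium: when the primary locator breaks, try alternates recorded from earlier passing runs. Commercial tools use much richer element fingerprints and learn from every execution; this sketch only shows the principle.

```python
# Bare-bones "self-healing" lookup: try locator candidates in order.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_healing(driver, locators):
    """locators: ordered list of (By, value) pairs, primary first,
    fallbacks recorded from earlier passing runs after it."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # element changed; try the next candidate
    raise NoSuchElementException(f"no locator matched: {locators}")

# Usage: primary ID first, then progressively looser fallbacks.
# submit = find_with_healing(driver, [
#     (By.ID, "submit-btn"),
#     (By.CSS_SELECTOR, "button[type=submit]"),
#     (By.XPATH, "//button[normalize-space()='Submit']"),
# ])
```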

7. Mobile AI

Through convolutional neural networks, mobile AI technology can help testers understand and analyse mobile interfaces to detect issues in audio, video, image quality, and object steering. This capability provides AI-powered analytics on performance and user experience, with trend analysis across devices and locations, helping to detect mobile errors in real time. Tricentis Device Cloud offers a mobile AI engine that can help you speed up mobile delivery. Learn more here.

8. Visual Testing

Visual testing helps to find cosmetic bugs in your applications that could negatively impact the user experience. The AI validates the size, position, and colour scheme of visual elements by comparing a baseline screenshot of the application against screenshots from later executions. If a visual error is detected, testers can accept or reject the change. This improves the user experience of an app by detecting visual bugs that cannot be discovered by functional testing tools that query the DOM.
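A minimal pixel-level version of this check can be written with Pillow’s `ImageChops`. The file names and the anti-aliasing tolerance are assumptions, and production visual-testing engines use perceptual models rather than raw pixel diffs, but the baseline-versus-current comparison is the same idea.

```python
# Naive visual regression check: diff a baseline screenshot against
# the latest run and report how much of the image changed.
from PIL import Image, ImageChops

baseline = Image.open("baseline.png").convert("RGB")
current = Image.open("current.png").convert("RGB")  # must be same size

diff = ImageChops.difference(baseline, current)
bbox = diff.getbbox()  # None if the images are pixel-identical

if bbox is not None:
    # Count pixels whose total channel difference exceeds a small
    # tolerance (assumed here to absorb anti-aliasing noise).
    changed = sum(1 for px in diff.getdata() if sum(px) > 30)
    ratio = changed / (diff.width * diff.height)
    print(f"Visual change in region {bbox}: {ratio:.2%} of pixels differ")
```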

9. Test Data Generation

Test data generation using AI involves creating synthetic data that can be used for software testing. By using machine learning and natural language processing, you can produce dynamic, secure, and adaptable data that closely mimics real-world scenarios. AI achieves this by learning patterns and characteristics from actual data and then generating new, non-sensitive data that maintains the statistical properties and structure of the original dataset, ensuring that it’s realistic and useful for testing purposes.
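For a simple non-AI baseline, the Faker library already produces structurally realistic, non-sensitive records; AI-based generators go further by learning the statistical distributions of real production data. The field names below are invented for the example.

```python
# Synthetic customer records with Faker: realistic in structure,
# safe to use in tests, reproducible via a fixed seed.
from faker import Faker

fake = Faker()
Faker.seed(42)  # reproducible test data

customers = [
    {
        "name": fake.name(),
        "email": fake.email(),
        "birth_date": fake.date_of_birth(minimum_age=18, maximum_age=90).isoformat(),
        "iban": fake.iban(),
    }
    for _ in range(100)
]
print(customers[0])
```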

10. Test Suite Optimisation

AI algorithms can analyse historical test data to identify flaky tests, unused tests, redundant or ineffective tests, tests not linked to requirements, and untested requirements. Based on this analysis, you can easily identify weak spots and areas for optimisation in your test case portfolio. This helps streamline your test suite for efficiency and coverage, ensuring that the most relevant and high-impact tests are executed while reducing testing time and resources.
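As a toy illustration of mining historical results, the following flags tests whose pass/fail record alternates on otherwise unchanged code, a classic flakiness signal. The history data is invented for the example; real tools correlate results with code revisions, requirements links, and execution cost.

```python
# Flag flakiness suspects from historical pass/fail records.
from collections import Counter

history = {
    "test_checkout": ["pass", "fail", "pass", "pass", "fail"],
    "test_login":    ["pass", "pass", "pass", "pass", "pass"],
    "test_legacy":   ["pass"] * 50,  # never fails; pruning candidate if unlinked
}

for name, runs in history.items():
    fail_rate = Counter(runs)["fail"] / len(runs)
    # Intermittent failures (neither always passing nor mostly failing)
    # suggest flakiness rather than a genuine regression.
    if 0 < fail_rate < 0.5:
        print(f"{name}: flaky suspect ({fail_rate:.0%} failures)")
```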

What about AI’s role in performance testing, accessibility testing, end-to-end testing, service virtualisation, API testing, unit testing, and compatibility testing, among others? We’ve only just scratched the surface of the extensive range of use cases and capabilities that AI potentially offers today. Looking ahead, AI’s role is set to expand even further, significantly boosting QA productivity.

As AI continues to evolve, offering tremendous benefits in efficiency, coverage, and accuracy, it’s important to stay cognizant of its current limitations. AI does not yet replace the need for skilled human testers, particularly in complex or nuanced scenarios, and it still lacks the human understanding needed to ensure full software quality. Developing true enterprise end-to-end testing that spans multiple applications across web, desktop, mobile, SAP, Salesforce, and more still requires a great deal of human ingenuity, including the judgment to detect subtle errors. The future of test automation lies in a balanced collaboration between AI-driven technologies and human expertise.

Want to discover more about Tricentis AI solutions and how they can cater to your unique use cases? Explore our innovative offerings.

Tricentis offers next-generation AI test automation tools to help accelerate your app modernisation, enhance productivity, and drive your business forward with greater efficiency and superior quality.

Author

Simona Domazetoska – Senior Product Marketing Manager, Tricentis

Tricentis is an EXPO Gold Sponsor at EuroSTAR 2024. Join us in Stockholm!

