EuroSTAR Conference

Europe's Largest Quality Engineering Conference


Share Your Story at EuroSTAR 2026

September 30, 2025 by Aishling Warde

The call for proposals for EuroSTAR 2026 in Oslo is currently open. It runs until October 3rd, so there is a little under a week left until it closes.

I am pleasantly surprised by how the submissions are coming in: extrapolating from my own submission behaviour, I expected to see hardly any submissions until very shortly before the deadline, especially since the call for proposals opened in August and thus sits firmly in summer vacation territory.

It is thus a happy surprise for me that there is a constant stream of talk proposals coming in!

Please take this as an encouragement. If you have not submitted already, we are still waiting for your proposal!

If you have an idea for a talk, a lesson you want to share, or a story you want to tell, please take the time to flesh it out and submit it to EuroSTAR.

We are looking forward to receiving it!

Don’t miss your chance — submissions close on October 3rd.

Elmar Juergens

EuroSTAR 2026 Programme Chair

Filed Under: Uncategorized

Sponsorship at EuroSTAR26: What’s New, What’s Working and Why It’s Worth It

August 28, 2025 by Aishling Warde

The EuroSTAR EXPO is where the European testing community comes alive. After an unforgettable week in Edinburgh, we’re thrilled to announce that EuroSTAR 2026 is heading to Oslo, Norway, and the excitement is already building.

Year after year, sponsors ask us, “When are we going to Norway?” Well, it’s finally happening! From 15–18 June 2026, we’ll gather in Oslo for four days of networking, learning, and fun.

Brand New for 2026: Diamond Partner Package

We’re excited to introduce the Diamond Partner Package for EuroSTAR 2026 – a new sponsorship option created for those looking to showcase their expertise and connect directly with attendees during the conference. This exclusive option gives you:

  • Thought leadership visibility – Share your expertise on the main stage with a 30-minute talk and live Q&A.
  • Premium presence – A prime 5x5m stand ensures high foot traffic and maximum engagement with attendees.
  • More meaningful connections – 3 full-conference passes allow your team to attend sessions, network, and build long-term relationships.
  • Event-wide brand impact – Enhanced branding keeps your company front-of-mind throughout the conference.

With only 3 spots available, the Diamond Partner Package is a chance to showcase solutions, share expertise, and be at the forefront of discussions within the testing community.

What’s Working: Why Sponsors Keep Coming Back

When you invest in the EuroSTAR EXPO, you’re aiming for more than just foot traffic – you want quality leads, stronger brand visibility, and real opportunities to grow your customer base. That’s exactly what sponsors are achieving:

  • Live Demos That Drive Interest: In 2025, demos attracted 700+ attendees, sparking meaningful product conversations and follow-ups.
  • Leads That Convert: Over 5,000 leads scanned last year, with 92% of delegates using the app to connect directly with sponsors.
  • Networking That Builds Partnerships: The EXPO floor, dedicated networking breaks, and evening socials help sponsors create lasting relationships with decision-makers.
  • Onsite Rebookings: A third of exhibitors were so satisfied they reserved their space for 2026 before the 2025 event even ended – a clear sign of strong ROI.

“It’s been really good in terms of lead generation, in terms of the numbers, but also the quality of conversations.” – Joshua England, Curiosity Software

Why It’s Worth It: Reach the Right People, Achieve Real Impact

Success at EuroSTAR isn’t just about visibility – it’s about being in the right room with the right people. Decision-makers, influencers, and future customers are all here:

  • Direct Access to Buyers: 26% of attendees are C-suite or senior management, with another 30% in manager or team lead roles – the people who make purchasing decisions.
  • Your Target Market in One Place: Connect with professionals from 40+ countries, 150+ companies, and key industries including finance, healthcare, telecoms, and technology.
  • European Reach with a Nordic Focus: With 50% of attendees from Nordic countries, EuroSTAR 2026 in Oslo offers unmatched access to this growing market.
  • Thought Leadership Opportunities: Main stage talks and session participation position your brand as a trusted leader in testing and quality engineering.
  • Sustainable Growth: Sponsors regularly return because the relationships built at EuroSTAR lead to partnerships, sales, and long-term brand recognition.

“The individuals here are of the highest calibre. The type of companies and people we connect with has grown our organization here in EMEA.” – Xander Lien, ACCELQ

🚀 Secure Your Place in Oslo

EuroSTAR 2026 is shaping up to be one of our most exciting conferences yet. With premium EXPO spaces filling fast and just 3 Diamond Partner packages available, now is the perfect time to position your brand as a leader in the European testing community.

Join us in Oslo for four unforgettable days of learning, networking, and showcasing the future of testing. Let’s make EuroSTAR 2026 your most impactful event yet.

👉 Book your stand today!

Author

Clare Burke

EXPO Team, EuroSTAR Conferences


With years of experience and a passion for all things EuroSTAR, Clare has been a driving force behind the success of our EXPO. She’s the wizard behind the EXPO scenes, connecting with exhibitors, soaking up the latest trends, and forging relationships that make the EuroSTAR EXPO a vibrant hub of knowledge and innovation.


t: +353 91 416 001
e: clare@eurostarconferences.com

Filed Under: EuroSTAR Expo Tagged With: EuroSTAR Conference

One Platform, Endless Possibilities: Introducing BrowserStack Test Platform 🚀

May 26, 2025 by Aishling Warde

Software testing has evolved. Engineering teams today are navigating an increasingly complex landscape—tight release cycles, growing test coverage demands, and the rapid adoption of AI in testing. But fragmented toolchains and inefficiencies slow teams down, making it harder to meet quality expectations at speed.

We believe there’s a better way.

Today, we’re thrilled to introduce the BrowserStack Test Platform—an open, integrated and flexible platform featuring AI-powered testing workflows that enable users to simplify their toolchain into a single platform, eliminating fragmentation, reducing costs, and improving productivity. Built to enhance efficiency, the Test Platform transforms how teams approach quality, delivering up to 50% productivity gains while expanding test coverage.

The Challenge: Fragmentation Meets AI

Traditionally, QA teams have had to juggle disconnected tools for test automation, device coverage, visual regression, performance analysis, accessibility compliance, and more. The result? Fractured workflows, hidden costs, and a lot of context switching.

We wanted to change that. Our goal was to bring every aspect of testing—across web, mobile, and beyond—under one roof, complete with AI-driven intelligence, detailed analytics, and robust security features. By unifying the testing process, teams can dramatically improve productivity, reduce costs, and focus on delivering what truly matters: stellar digital experiences.

Introducing BrowserStack Test Platform

1. Faster Test Cycles with Test Automation

  • Enterprise-grade infrastructure for browser and mobile app testing—run tests in the BrowserStack cloud or self-host on your preferred cloud provider. This helps improve automation scale, speed, reliability, and efficiency.
  • AI-driven test analysis, test orchestration, and self-healing to pinpoint and fix issues faster.
  • Designed to maximize the ROI of test automation, freeing you to focus on innovative work instead of manual maintenance.

2. BrowserStack AI Agents

  • The platform’s AI Agents transform every aspect of the testing lifecycle, from planning to validation.
  • With a unified data store, AI Agents gain rich context, helping teams achieve greater testing accuracy and efficiency.
  • Automate repetitive tasks, identify flaky tests, and optimize testing workflows seamlessly.

3. Comprehensive Test Coverage

  • 20,000+ real devices and 3,500+ browser-desktop combinations to replicate actual user conditions.
  • Advanced accessibility testing ensures compliance with ADA & WCAG standards.
  • Visual testing powered by the BrowserStack Visual AI Engine to spot even minor UI discrepancies.

4. Test & Quality Insights

  • A single-pane executive view for all your QA metrics, integrated into the Test Platform.
  • Test Observability and AI-powered Test Management streamline debugging and analytics.
  • Data-driven insights to help teams make informed decisions and continuously refine their testing strategies.

5. Open & Flexible Ecosystem

  • Uniform workflows and a consistent user experience reduce context switching.
  • 100+ integrations for CI/CD, project management, and popular automation frameworks, letting you plug and play with your existing toolchain.
  • Built for any tech stack, any team size, and any testing objective—no matter how unique.

Built for Developers, by Developers

Our team of 500+ developers has poured their expertise into building a platform that eliminates friction from the testing process. From zero-code integration via our SDK to enterprise-grade security, private network testing, and unified test monitoring—every feature has been designed with one goal in mind: making testing seamless.

The Future of Testing Starts Here

The BrowserStack Test Platform is more than just a product launch—it’s a paradigm shift in how engineering teams think about software quality. Whether you’re a developer, tester, or QA leader, this platform is designed to help you build the test stack your team wants.

Ready to transform your testing workflows? Explore the BrowserStack Test Platform.

Author

Kriti Jain – Product Growth Leader

Kriti is a product growth leader at BrowserStack and focuses on central strategic initiatives, particularly AI. She has over ten years of experience leading strategy and growth functions across diverse industries and products.

BrowserStack are Gold Sponsors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: Gold, Sponsor Tagged With: EuroSTAR Conference

Level Up Your Career: Why Professional Certifications in Software Quality Matter

May 21, 2025 by Aishling Warde

🎮 Imagine you’ve put in the work—conducting rigorous testing, analyzing systems, and ensuring exceptional software quality. But in a competitive industry, expertise alone won’t always be enough to set you apart. Employers and clients seek tangible proof of your skills, and that’s where professional certifications make the difference. They validate your expertise, strengthen your credibility, and unlock new career opportunities. It’s not just about what you know—it’s about demonstrating your mastery to the world.

Why Certifications? Because Expertise Deserves Recognition!

You might be a fantastic tester or an aspiring software quality pro, but how do you prove it? Certifications act like power-ups for your career. They:

✅ Validate Your Skills – Employers trust certifications as proof that you’ve mastered industry-recognized standards.
✅ Enhance Your Resume – Stand out in a competitive job market with credentials that highlight your expertise.
✅ Boost Your Confidence – Knowing you have proven, industry-backed skills strengthens your professional credibility.

ISTQB® – The Global Standard in Software Testing Certifications

When it comes to software testing certification, there’s one name that rules them all: ISTQB® (International Software Testing Qualifications Board). Whether you’re a newbie or a seasoned tester, there’s a certification for you:

🎯 ISTQB® Certified Tester Foundation Level (CTFL) – The perfect starting point for your journey in software testing. Learn the fundamentals and build a strong foundation!
🚀 ISTQB® Agile Tester – Agile is everywhere, and ISTQB® certifications in Agile prove you can test like a pro in fast-paced development teams.

  • Agile Technical Tester (CT-ATT) – Perfect for Agile teams, covering technical skills like TDD and CI/CD.
  • Agile Test Leadership at Scale (CT-ATLaS) – Focuses on scaling Agile testing leadership across teams.

🏆 ISTQB® Advanced & Expert Levels – Take your career to the next level with specializations in test automation, test management, security testing, and more!

🔍 ISTQB® Specialist Certifications – Broaden your expertise with targeted certifications in areas like Mobile Application Testing, Usability Testing, Performance Testing, and AI Testing!

The best part? These certifications are recognized worldwide, giving you an edge no matter where you work. With over 1 million exams taken across 130+ countries, ISTQB® certifications have become the industry benchmark for software testing excellence.

How Does ISTQB® Certification Benefit You?

Certifications are more than just a title. They increase your earning potential and improve your job security. In fact, studies show that certified professionals earn higher salaries compared to their non-certified counterparts. Plus, in a competitive job market, having a certification could be the deciding factor between you and another candidate.

Here’s how ISTQB® certification can supercharge your career:

  • Career Growth: Certifications open doors to promotions, leadership roles, and exciting job opportunities.
  • Industry Recognition: Demonstrate to hiring managers and peers that you are committed to continuous learning and excellence in your craft.
  • Networking Opportunities: Become part of an elite group of certified professionals and connect with industry experts.
  • Competitive Edge: Differentiate yourself from other testers who rely solely on experience.

Looking to add even more weight to your testing expertise? The A4Q Practical Tester certification, now officially endorsed by ISTQB®, is your go-to choice! Unlike traditional theory-based exams, this certification is all about hands-on experience—because real-world problems demand real-world solutions.

🔹 Learn by Doing – Dive into practical scenarios, case studies, and hands-on exercises designed to hone your critical thinking skills.
🔹 Bridge the Gap – Take your theoretical knowledge and turn it into effective, real-world testing strategies.
🔹 Boost Your Employability – Employers value testers who don’t just know their craft but can also apply it under real-life conditions.

Pairing your ISTQB® CTFL certification with an ISTQB® Add-On Practical Tester certification makes you a well-rounded professional, proving you’ve got both the knowledge and the skills to back it up.

How to Get Certified? Meet iSQI – Your Global Exam Provider!

So, where do you go when you’re ready to get certified? That’s where iSQI comes in! As a global authorized exam provider, iSQI makes it easy for you to:

🖥 Take your exam online from the comfort of your home.
🌎 Access exams in multiple languages across different regions—because software quality is a global language.

Your Next Move? Get Certified & Stand Out!

If you want to level up in your career, professional certifications aren’t just an option—they’re a game-changer. So, whether you’re just starting out or aiming for that next promotion, getting certified is one of the smartest moves you can make.

Here’s what you can do next:
✅ Research the best ISTQB® certification for your career goals.
✅ Visit iSQI’s website to find out how to register for an exam.
✅ Start studying and preparing for your certification—many online resources, practice exams and our special ISTQB exam preparation platform are available to help you succeed.

✅ Take the exam and showcase your new achievement!
💡 Ready to take the leap? Check out iSQI’s certification options and start your journey today!

Your skills deserve recognition. Your career deserves growth. Your future deserves the best. Get certified, and unlock new opportunities today! Success is yours.

Author

iSQI Group

iSQI are exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: Quality Assurance Tagged With: software testing tools

How accessibility testing tools use AI to ship quality products faster

May 19, 2025 by Aishling Warde

Accessibility testing is essential for compliance with regulations such as the European Accessibility Act (EAA). The EAA becomes a national law in all 27 EU Member States on June 28, 2025, and businesses need to be prepared. While failure to meet this deadline can result in severe penalties, achieving compliance is ultimately about much more than just avoiding fines. It’s about expanding your market share, enhancing your brand reputation, and building high quality products for everyone, including people with disabilities.

This is why testing is so essential. By putting an effective and efficient testing approach in place, you can quickly identify and fix accessibility issues early and ensure you’re building the highest quality products for all people. The question is, how do you integrate comprehensive accessibility testing while maintaining velocity and keeping costs down?

It’s a challenging question. Fortunately, there’s a clear answer.

In this post, we’ll explore an approach called “shift left”, which refers to addressing accessibility issues earlier in the software development lifecycle—during development and QA—rather than later, in production or after a product has been released, when the work becomes slower and costlier and the risk of customers having a poor experience rises sharply. We’ll also examine how AI and automation can accelerate velocity while elevating quality.

The benefits of automated and AI-guided testing

Getting and staying compliant in a strategic and cost-effective way means prioritizing efficiency. It’s about doing the work early and accurately, avoiding re-work, and getting high-quality products out the door faster.

This is where advanced automation can have an outsize impact. By using automated and AI-guided testing, dev and QA teams can find and fix over 80% of conformance issues—without needing special accessibility knowledge!

The efficiency gains are immediate. Your teams can find more issues more quickly and address them earlier, saving both time and money, freeing them up to focus on more complex concerns, and consistently delivering the highest quality products.

Human-centric AI and automation in digital accessibility

As valuable and effective as AI and automated testing can be, human insight and expertise are still required. Automation doesn’t remove humans from the work; it enables humans to do their best work. And rather than replacing accessibility expertise, AI amplifies and scales it.

By leveraging what AI makes possible, we can empower dev and QA teams to accelerate velocity while maintaining quality. Recent updates from Deque, for example, introduce AI-driven capabilities that address the toughest accessibility challenges—increasing test coverage, reducing manual work, and making accessibility testing faster and easier than ever.

Saving time with tools for every part of the software development lifecycle

A comprehensive suite of accessibility testing tools that brings together automated testing and AI-guided testing can help your development and QA teams shift left and identify and fix accessibility issues early, with the highest levels of efficiency, and without the high false positive rates that hamper other solutions.

False positives—test results that flag issues that aren’t actually issues—waste your team’s time, which is why Deque is committed to zero false positives: efficiency and accuracy matter.

It’s why our customers choose Deque and why developers and QA professionals prefer our tools: we help businesses become and stay accessible in the fastest and most cost-effective ways possible while delivering high-quality products and services for everyone. When it comes to digital accessibility, the proactive approach is the right approach.

Want to learn more? If you’re at EuroSTAR 2025, come see us for a free demo at Stand 34! You can also visit our website to request a free trial.

Author

Derrin Evers

Derrin Evers is a Senior Solution Consultant at Deque Europe. Derrin’s background and experience span design and development, small agencies and large enterprises, and public sector and private business, from North America to Europe. With the professional goal of promoting positive change within software development through digital accessibility, Derrin helps Deque customers discover, plan, and realize their potential through strategic and technical support across the software development lifecycle.

Deque are exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: Application Testing Tagged With: Test Automation

Principles Drive Trust in AI

May 14, 2025 by Aishling Warde

The pace at which “artificial intelligence” (AI) is being incorporated into software testing products and services creates immense ethical and technological challenges for an IT industry that is so far out in front of regulation that the two don’t even seem to be playing the same sport.

It’s difficult to keep up with the shifting sands of AI in testing right now as vendors search for a viable product to sell. Most testing clients I speak to these days haven’t begun incorporating an AI element into their test approach, and frankly, the distorted signal coming from the testing business hasn’t helped. What I’m hearing from clients are big concerns around data privacy and security, transparency on models and good evidence, and the ethical issues of using AI in testing.

I’ve spent a good part of my public career in testing talking about risk, how to communicate it to leadership, and what good testing contributes to that process in helping identify threats to your business. So, I’m not here to tell you “No” to AI in testing, but to talk about how KPMG is trying to manage through the current mania and what we think are the big rocks we need to move to get there with care and at pace.

KPMG AI Trusted Framework

As AI continues to transform the world in which we live – impacting many aspects of everyday life, business and society – KPMG has taken the position of helping organizations utilise the transformative power of AI, including its ethical and responsible use.

We’ve recognized that adopting AI can introduce complexity and risks that should be addressed clearly and responsibly. We are also committed to upholding ethical standards for AI solutions that align with our values and professional standards, and that foster the trust of people, communities, and regulators.

In order to achieve this, we’ve developed the KPMG Trusted AI model as our strategic approach and framework to designing, building, deploying and using AI strategies and solutions in a responsible and ethical manner so we can accelerate value with confidence.

As well, our approach to Trusted AI includes foundational principles that guide our aspirations in this space, demonstrating our commitment to using it responsibly and ethically:

Values-driven

We implement AI as guided by our Values. They are our differentiator and shape a culture that is open, inclusive and operates to the highest ethical standards. Our Values inform our day-to-day behaviours and help us navigate emerging opportunities and challenges.

Human-centric

We prioritize human impact as we deploy AI and recognize the needs of our clients and our people. We are embracing this technology to empower and augment human capabilities — to unleash creativity and improve productivity in a way that allows people to reimagine how they spend their days.

Trustworthy

We will adhere to our principles and the ethical pillars that guide how and why we use AI across its lifecycle. We will strive to ensure our data acquisition, governance and usage practices uphold ethical standards and comply with applicable privacy and data protection regulations, as well as any confidentiality requirements.

KPMG GenAI Testing Framework

The KPMG UK Quality Engineering and Testing practice has adopted the Trusted AI principles as an underpinning model for our work in AI and testing. We are focusing our initial GenAI Testing Framework on specific activities that extend the reach of testers while allowing risk management to be insight-led and governance to be human-centric. This is accomplished by incorporating our principles into the architecture, including:

Tester Centric Design

The web-hosted front-end is where testers can securely upload documents, manage prompts, and access AI-generated test assets to use or modify. Testers can create and modify rules, allowing consistent application and increased control of models and responses.

Transparent Orchestration

The orchestration layer sits at the heart of the system and manages the flow of data between different components to ensure seamless execution while providing transparency on the models being deployed.

Secure Services

The Knowledgebase contains the fundamental services powering the AI solution, storing input documents, test assets, and reporting data, as well as the domain- and context-specific information you design.

Software testing is essentially a function of risk management, and integrating AI into your test approach presents multiple challenges for your test team as well as implications for programme governance. Model accuracy, intellectual property rights and IP leaks, and data quality issues such as inaccuracy, drift, or loss are all real internal risks to ensuring you are testing the right things at the right time. Externally, your governance can run into copyright infringements or privacy violations, both of which have implications for your brand, to say nothing of the potential harm done to vulnerable communities through model bias. All of this makes using an ethical framework for designing and implementing AI in testing even more important.

There remains a great deal to be worked out regarding AI in software testing, and we are just at the discovery phase of what it can – and should – do for system quality. Whatever the future holds, your strategy has to be grounded in principles and values that reflect an ethical approach: putting the tester at the centre of the process, being transparent about models and data, and making safety and security your primary objectives.

Keith Klain

Keith Klain is a Director of Quality Engineering and Testing at KPMG UK and a frequent writer and speaker on the software testing industry.

He leads software quality, automation, process improvement, risk management, and digital transformation initiatives for retail banking and capital markets clients. With extensive international experience in software quality management and testing, he has built and managed teams for global financial services and consulting firms in the US, UK, and Asia Pacific.

He is passionate about increasing the value of technology by aligning test strategies to business objectives through process efficiency, effective reporting, and better software testing. He is also an advocate for workforce development and social impact, having designed and delivered technology training curriculum for non-profits to create technology delivery centres in disadvantaged communities. He has served as the Executive Vice President of the Association for Software Testing and has received multiple awards and recognition for his contributions to the software testing profession and diversity in tech.

KPMG are exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: Quality Assurance Tagged With: EuroSTAR Conference

How to run any number of UI tests on each PR

May 12, 2025 by Aishling Warde

If you are reading this article, you have likely already recognized the value of running UI tests on every pull request in your development process. In short, it’s the only way to be confident that your main branch is ready for release at any time. Releasing excellent, stable versions is crucial in the mobile world, where the user controls the app update process, unlike in the backend world. If you are a newcomer to the topic of UI tests for Android, please explore my previous article or Alex Bykov’s talk, where you will find all the details and explanations.

While beneficial, running UI tests on each PR introduces significant challenges for the underlying infrastructure. As a result, almost every team that attempts to implement UI tests for each PR encounters difficulties, often making the same mistakes and investing considerable time and resources in the process. In this article, I will delve into the specific requirements of the infrastructure needed for running UI tests on PRs and discuss the solutions available on the market. By comprehensively understanding the infrastructure challenges and the available solutions, development teams can better navigate the complexities of implementing UI tests for each PR, ultimately saving time and resources.

UI Test Infrastructure

First, let’s define the term “UI Test Infrastructure”: it is whatever allows you to run the UI tests. On each PR. Any number of tests. From a user’s perspective (the software engineering and QA teams), it looks like this: I send a command to execute a bunch of tests and receive a report. That’s all. It must be simple for those who use it, but it is complex for those who build and support the solution. So, our final goal is to build this infrastructure somehow, using internal or external solutions and resources.


Okay, now let’s refresh our memory about the entire picture of the UI test process, with some updates that have appeared since last time.

You see a lot of details in the puzzle. Now, have a look at where “UI Test Infrastructure” is presented.

“UI Test Infrastructure” covers an extensive set of various things. The writing and backend parts stay on the user side, because only the user (a developer or SDET) can create tests for now. Therefore, writing is out of scope for this article. An excellent comparison of all writing tools is described in the articles Where to write Android UI tests (Part 1) and Where to write Android UI tests (Part 2) (except Maestro, which appeared after they were published). The backend side, testing against real or mock networks, will be partly touched on later. But I need to mention that a mock network at scale becomes the responsibility of the UI Test Infrastructure too.

Requirements

Before delving into the complexities of building this infrastructure, I recommend beginning with a clear set of requirements and expectations. These will serve as a roadmap to guide you through the intricate process of constructing the infrastructure. It is also important to remember that the main users of UI Test Infrastructure are developers, SDETs (Software Development Engineers in Test), and QA. As such, our focus should be on optimizing their working experience and ensuring a comfortable and efficient environment for these professionals.

Have a look at the image below.

Now, let’s consider point by point.

Supported platforms

In Android development, there are two primary platforms for creating UI tests:

  1. The native platform, where developers utilize tools provided exclusively by Google, such as Espresso and UI Automator. Solutions like Kaspresso, Barista, etc. are built on top of Espresso and UI Automator (a minimal example follows this list).
  2. Appium, an open-source, cross-platform testing framework.
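
For readers newer to the native stack, here is a minimal sketch of what such a native UI test looks like with Espresso and AndroidX Test. It is illustrative only: LoginActivity and the R.id.* references are hypothetical placeholders for your own screens.

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// Instrumented test (androidTest source set); runs on an emulator or a real device.
@RunWith(AndroidJUnit4::class)
class LoginScreenTest {

    // LoginActivity and the R.id.* view ids below are hypothetical placeholders.
    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun successfulLoginShowsHomeScreen() {
        onView(withId(R.id.username)).perform(typeText("demo_user"), closeSoftKeyboard())
        onView(withId(R.id.password)).perform(typeText("secret"), closeSoftKeyboard())
        onView(withId(R.id.login_button)).perform(click())

        onView(withId(R.id.home_container)).check(matches(isDisplayed()))
    }
}
```

Kaspresso and Barista build on the same underlying APIs, so the infrastructure requirements discussed below apply to them equally.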

Interface

Next, a fundamental expectation is the ease of integration with existing CI/CD systems through plugins and the ability to utilize the infrastructure from the command line interface (CLI). The plugin and CLI should offer at least the flexibility to filter tests for execution and select the desired devices.

On top of this basic functionality, different verification modes should be supported, e.g. fast runs vs. verifying a fix for flakiness. More details about these terms will be provided later.

Reports

At the end of a run, a user expects to see reports that contain at least the following information (a minimal data-model sketch follows the list):

  • the final result: passed or failed
  • number of executed, successful, failed and ignored tests
  • information about failed tests like stack trace, device logs, and video
  • some analytics, like the percentage of failed tests, including retries
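
To make the expectations concrete, the report contents above could be modelled roughly as follows. This is a hypothetical data model for illustration, not any particular vendor’s report format.

```kotlin
// Hypothetical data model of a UI test run report; names are illustrative only.
data class TestRunReport(
    val passed: Boolean,                  // the final result of the whole run
    val executed: Int,
    val successful: Int,
    val failed: Int,
    val ignored: Int,
    val failures: List<FailedTest>,       // details for every failed test
    val failedWithRetriesPercent: Double  // analytics, e.g. share of tests that needed retries
)

data class FailedTest(
    val name: String,
    val stackTrace: String,
    val deviceLogUrl: String,
    val videoUrl: String
)
```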

Stability

Stability is a comprehensive term encompassing various aspects of UI testing. A test may be unstable (or flaky) for numerous reasons, such as:

  • poorly written tests
  • an unstable backend or an unreliable network, if the test depends on a real backend
  • feature flags
  • an improperly set up or unstable device (e.g., a Google service updating, or animations that have not been turned off)
  • framework instability (Espresso, UI Automator, and Appium are known to have their quirks)
  • internal issues with the test infrastructure, caused by factors such as high load or crashes in one of its internal services

Have a glance at the image below to summarize the possible reasons for failures:

When selecting an infrastructure, we expect full stability or, at the very least, quick recovery that does not impact the overall results and time. The UI Test Infrastructure should cover areas “Where to run”, “Running”, and “Hardware infrastructure”.

Over time, the number of UI tests tends to grow, and with it, the number of flaky tests. A flaky test is a test that works correctly most of the time but occasionally fails for peculiar reasons. Unfortunately, flaky tests are an inescapable reality in UI testing. While it is crucial to investigate the causes, it is not great to block a pull request because of a single occasionally failing UI test. Therefore, as a user of UI test infrastructure, I expect a straightforward and integrated retry mechanism to be available.
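
A retry mechanism can be surprisingly simple at its core. Here is a minimal sketch of a per-test retry policy in which a test passes if any attempt passes; the runTest parameter is a hypothetical stand-in for whatever actually executes one test on a device, and real infrastructures additionally cap the total retry budget per run.

```kotlin
enum class Verdict { PASSED, FAILED }

// Minimal sketch: retry a single test up to maxAttempts times and
// treat it as passed if any attempt passes. `runTest` is hypothetical.
fun runWithRetries(
    testName: String,
    maxAttempts: Int = 3,
    runTest: (String) -> Verdict
): Verdict {
    repeat(maxAttempts) { attempt ->
        if (runTest(testName) == Verdict.PASSED) {
            // A pass on a retry is a flakiness signal worth recording, not just ignoring.
            if (attempt > 0) println("$testName passed after ${attempt + 1} attempts (flaky)")
            return Verdict.PASSED
        }
    }
    return Verdict.FAILED
}
```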

Time and Scalability

Test suite execution time is a critical factor for all UI test infrastructures. To better understand this, let’s first examine the elements that influence execution time:

These factors can be divided into two groups: those that depend on the user’s tests and those related to the infrastructure.

Various strategies can be employed to reduce test execution time. One widely-used approach is to focus on specific functionality within a single test by mocking the backend and avoiding repetitive actions such as logging in.

Regarding infrastructure, the test execution algorithm plays a vital role in determining the time required. For instance, consider a test run with a suboptimal batching strategy:
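
For contrast, here is a minimal sketch of a duration-aware alternative: tests are sorted by their historical duration and greedily assigned to the currently least-loaded device batch. The function and its inputs are hypothetical and only illustrate the idea, not any vendor’s actual algorithm.

```kotlin
import java.util.PriorityQueue

// Greedy duration-balanced batching: always give the longest remaining test
// to the batch with the smallest total estimated duration.
fun batchByDuration(
    estimatedDurationsSec: Map<String, Int>, // test name -> historical duration (hypothetical data)
    deviceCount: Int
): List<List<String>> {
    val batches = List(deviceCount) { mutableListOf<String>() }
    val load = PriorityQueue<Pair<Int, Int>>(compareBy<Pair<Int, Int>> { it.second }) // (batch index, total seconds)
    (0 until deviceCount).forEach { load.add(it to 0) }

    estimatedDurationsSec.entries
        .sortedByDescending { it.value }
        .forEach { (test, duration) ->
            val (index, total) = load.poll()
            batches[index].add(test)
            load.add(index to total + duration)
        }
    return batches
}
```

Compared with naive fixed-size batches, this keeps the slowest batch, and therefore the wall-clock time of the run, much closer to the theoretical minimum.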

Alternatively, examine a test run with a non-optimal retry policy:

In my recent study, which included over 30 interviews with various development teams, I found that most teams are willing to wait 15 minutes for a UI test run on a pull request. This suggests that even with numerous developers and UI tests, not optimally written tests, concurrent PR runs, or flaky tests, all PRs should be completed within this 15-minute window.

However, it’s a common scenario that PR waiting times can extend from 15 minutes to several hours when the infrastructure is under heavy load, often resulting in rejections due to timeouts. This highlights the importance of optimizing the system to handle such situations efficiently. That’s why I’ve included “Scalability” in the title.

Security

Security protects the test environment, data, and application from unauthorized access, tampering, or malicious activities.

Some critical aspects of security in an Android UI testing infrastructure include:

  • Authentication and authorization: Ensure only authorized users can access the testing environment, data, and resources.
  • Data protection: Safeguard sensitive information, including test data and user credentials, using encryption in transit and at rest. Implement proper data storage and disposal practices to prevent data leaks.
  • Network security: Secure the communication between testing devices, servers, and other infrastructure components.

Cost

I would emphasize the two main aspects:

  • The price model and utilization of paid resources. Generally, there are two common price models: pay per parallel session (a dedicated concurrent device slot) and pay per minute. Choosing and using the appropriate price model wisely is a separate big topic (a rough comparison sketch follows this list).
  • The second aspect is the internal UI Test Infrastructure algorithms and solutions that allow spending less money running the same test suite. One of the possible optimizations is described above (better batching and handling of flaky tests).
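
To make the difference concrete, here is a tiny sketch comparing the two models. All prices and workload numbers are made up purely for illustration; real vendor pricing varies widely.

```kotlin
// Hypothetical comparison of the two common price models; every number is invented.
fun main() {
    val deviceMinutesPerMonth = 60_000.0 // e.g. 2,000 device-minutes per day over a 30-day month
    val parallelSlots = 20               // concurrency you want to guarantee at peak

    val pricePerDeviceMinute = 0.05      // hypothetical: 0.05 USD per device-minute
    val pricePerSlotPerMonth = 150.0     // hypothetical: 150 USD per parallel slot per month

    val payPerMinute = deviceMinutesPerMonth * pricePerDeviceMinute // 3,000 USD
    val payPerParallel = parallelSlots * pricePerSlotPerMonth       // 3,000 USD

    println("Pay per minute:   $payPerMinute USD/month")
    println("Pay per parallel: $payPerParallel USD/month")
    // The break-even point depends on utilization: idle slots make pay-per-parallel
    // more expensive, while spiky load makes pay-per-minute the cheaper option.
}
```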

You can find more details about cost price models and specific examples in the article titled “I Want to Run Any Number of Android UI Tests on Each PR: Cost, Part II.”

Support

Last on this list, but certainly not least in terms of importance, is Support. Many teams prioritize not just the service itself but also the quality of support provided. Factors such as prompt responses, a willingness to help, and the ability to save time for the team are highly valued. Additionally, prioritizing features based on client needs and preferences further enhances the overall support experience. Open-sourcing portions of Infrastructure is often highly appreciated by clients, as it fosters trust in the solutions provided and enables them to better understand the underlying mechanics.

Available Cloud Solutions on the market

As an engineer, you have the option to create your own infrastructure that meets all of the above requirements. However, it’s clear that this task can be challenging and complex. Therefore, let’s explore cloud solutions that offer ready-made options.

The following cloud solutions are available:

  • Marathon Cloud
  • Firebase Test Lab
  • BrowserStack
  • emulator.wtf
  • SauceLabs
  • AWS Device Farm
  • Perfecto Mobile
  • LambdaTest

Please review the articles below, where I have thoroughly examined each solution based on the aforementioned requirements.

  • I want to run any number of Android UI tests on each PR. Existing solutions (BrowserStack, Firebase Test Lab). Part III
  • I want to run any number of Android UI tests on each PR. Existing solutions (SauceLabs, AWS Device Farm, LambdaTest, Perfecto Mobile). Part IV
  • I want to run any number of Android UI tests on each PR. Existing solutions. Part V

Conclusion

In this article, I examined the concept of UI Test Infrastructure and described the essential criteria for selecting or constructing it. As you might have observed, developing an Infrastructure that fulfills all of the aforementioned requirements and addresses potential issues can be quite complex and challenging. Therefore, I emphasized the existing Cloud Solutions that provide ready-made alternatives.

Author

Evgenii Matsiuk

Evgenii Matsiuk is a co-founder at MarathonLabs, a co-author of Kaspresso, and an Android Google Developer Expert.

Testwise are exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: Test Automation Tagged With: software testing conference

How Your Team Can Achieve Sustainable Test Growth: Balancing Speed, Cost, and Quality in the AI Era

May 7, 2025 by Aishling Warde

The promise of AI-driven development is undeniable – faster code, quicker releases, and unprecedented innovation. But here’s the catch: AI isn’t perfect, and the code it generates could be riddled with hidden flaws. In fact, within three years, over a third of all code will be AI-generated, and much of it may introduce more bugs into production than ever before.

Digital transformation isn’t just a buzzword anymore – it’s a $3.9 trillion race to stay competitive, with 85% of organizations adopting cloud-first strategies. As release cycles accelerate and budgets tighten, how do you ensure quality doesn’t fall by the wayside?

For years, the rule of thumb has been “pick two – speed, cost, or quality.” Now, that luxury is gone. In this blog, we’ll dive into the growing pressure to balance all three, and why outdated testing processes could make or break your transformation efforts.

Testing Bottlenecks in the Era of Digital Transformation

Despite advancements in test automation, testing remains one of the biggest bottlenecks to digital transformation. Surprisingly, 80% of tests are still conducted manually across the industry. While automation promises greater efficiency, many test automation projects are started but never completed, and their ROI often falls short.

We recently surveyed SmartBear customers who do not use automation tools. The three most common barriers to automation adoption were:

  1. Lack of Time – Teams prioritize releasing the next version, leaving little time to develop automated tests. Automation efforts consistently lag, typically falling two sprints behind development.
  2. Lack of Expertise – Automation tools often require technical skills that teams may not possess. Record-and-playback solutions have failed to meet expectations, leading many teams to abandon automation altogether.
  3. Tool Overload – With hundreds of automation tools available, selecting the right one is overwhelming. Many teams revert to manual testing simply because it’s easier than navigating the complex tool landscape.

These challenges create friction and prevent teams from scaling their testing processes, slowing down release cycles and increasing the risk of bugs in production.

The High Cost of Delayed Bug Detection

The cost of bugs discovered in production far exceeds those caught earlier in development. A striking example is the recent CrowdStrike issue, which resulted in $5.4 billion in losses due to widespread system failures. The actual fix took only an hour and a half, but the repercussions were far-reaching.

On a broader scale, the numbers are staggering. Each year, 100 billion lines of code are added to software systems, with an estimated 25 bugs per thousand lines. This results in roughly 2.5 billion bugs leaking into production annually. The cost to fix these issues post-release is exponentially higher than addressing them during development.

Strategies for Sustainable Test Growth

To address these challenges, organizations must adopt a sustainable approach to testing – one that pushes defect detection earlier in the process (shift left) while improving monitoring and feedback in production environments (shift right).

Shift Left – Catching Bugs Early

The earlier a bug is found, the cheaper it is to fix. Shift left practices encourage testing earlier in the development lifecycle, reducing the risk of costly production issues. However, developers cannot be expected to take on all testing responsibilities. While developers are doing more testing than ever, end-to-end and UI testing require specialized skills. Overburdening developers with testing tasks detracts from their primary focus – writing application code.

Shift Right – Monitoring Production for Faster Feedback

By extending testing into production, teams can monitor for errors, track performance, and gather valuable insights to refine pre-production testing. Effective shift-right strategies rely on robust production monitoring systems that capture issues in real time and relay information back to development teams. This feedback loop ensures continuous improvement, reducing the cost and complexity of addressing bugs discovered in the field.

Tying It All Together

Combining these strategies creates a continuous quality loop that not only reduces the number of bugs slipping into production but also significantly lowers the cost of fixing them. By catching defects earlier and refining tests through production insights, businesses can avoid the ballooning costs associated with late-stage bug fixes. This holistic approach improves release velocity, enhances software reliability, and ultimately delivers a higher return on investment (ROI) by preventing revenue loss caused by critical failures.

Sustainable test growth isn’t just about preventing issues – it’s about driving long-term savings and maximizing the value of every development hour spent.

The SmartBear Approach to Testing

At SmartBear, we understand the delicate balance between speed, cost, and quality. Our holistic testing strategy focuses on continuous quality at every stage of development. By leveraging SmartBear API Hub, Test Hub, and Insight Hub, teams gain end-to-end visibility across the software development lifecycle, ensuring they can build, test, and release with confidence.

The Test Hub allows teams to manage, automate, and execute a variety of tests – from functional and UI tests to API and load tests – all within a single platform. This centralized approach streamlines workflows and reduces the overhead associated with managing multiple testing tools.

AI-Powered Enhancements for Modern Testing

SmartBear’s roadmap is filled with AI-driven features designed to accelerate test growth and simplify automation. Some of the latest innovations include:

  • Natural Language-Based UI Test Automation – Convert manual tests into automated scripts for web and mobile apps using simple natural language prompts, reducing the need for technical expertise.
  • Test Case Generation from Requirements – Instantly generate manual test cases directly from user stories and requirements, speeding up test creation and ensuring coverage aligns with business needs.
  • Test Data Generation – Create synthetic test data on demand through contextual prompts, eliminating the delays associated with test environment setup.
  • Visual Testing – Detect visual defects across web applications at scale, ensuring consistent performance across browsers and devices.
  • Contract Test Generation – Produce contract tests directly from OpenAPI specs, client code, or HTTP request/response pairs, ensuring robust API coverage.

By embedding AI throughout the testing process, SmartBear empowers teams to automate faster, identify defects earlier, and minimize production risks without overburdening development teams. These AI-driven capabilities are already delivering tangible results for organizations:

  • “Previously, locator-based plug-ins required painful updates as programs evolved. Zephyr Scale’s AI automation eliminates that issue, interpreting commands like ‘click on magnifying glass,’ cutting regression time from 90 to 20 minutes, improving consistency, increasing coverage, and saving time and money.” — Test Analyst at a Leading Automotive Services Provider
  • “Adopting no-code automation cut our manual regression time by about 60%, allowing QA to focus on complex scenarios. Non-technical team members now create tests aligned with business goals, increasing coverage, enhancing collaboration, reducing post-release defects, and fostering greater ownership.” — Quality Assurance Analyst at a Global Software Company

Future-Proofing Software Quality in the AI Era

As AI continues to reshape the software development landscape, organizations stand at a critical crossroads. The potential for faster development is undeniable, but without the right testing strategies in place, the influx of AI-generated code could unravel hard-won gains. Sustainable test growth isn’t just a technical goal – it’s a business necessity for navigating the complexities of digital transformation.

Shifting left to catch bugs early, embedding robust production monitoring, and integrating AI-driven automation can help businesses break free from the outdated “pick two” mentality. The organizations that succeed in balancing speed, cost, and quality will lead the next wave of innovation. Those that don’t risk falling behind, grappling with costly production bugs, delayed releases, and customer dissatisfaction.

SmartBear Hubs provide the framework to streamline testing across the entire development lifecycle, enabling teams to release with confidence, minimize risk, and scale at the pace digital transformation demands. But the time to act is now.

If you’re ready to stop firefighting production issues and start building a proactive, AI-empowered testing strategy, SmartBear can help. Get in touch today and discover how our end-to-end solutions can future-proof your development pipeline and deliver sustainable test growth.

Author

Prashant Mohan

Prashant Mohan is a VP of Product Management at SmartBear. He is responsible for driving the vision and strategy of products that help developers and testers deliver quality applications at scale. Prashant is an engineer with a business degree, and has worked across several industries including B2B tech, Fintech and HealthIT.

SmartBear are Gold Sponsors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh 3-6 June 2025.

Filed Under: Quality Assurance Tagged With: 2025, EuroSTAR Conference
