

Level Up Your Career: Why Professional Certifications in Software Quality Matter

May 21, 2025 by Aishling Warde

šŸŽ® Imagine you've put in the work: conducting rigorous testing, analyzing systems, and ensuring exceptional software quality. But in a competitive industry, expertise alone won't always be enough to set you apart. Employers and clients seek tangible proof of your skills, and that's where professional certifications make the difference. They validate your expertise, strengthen your credibility, and unlock new career opportunities. It's not just about what you know; it's about demonstrating your mastery to the world.

Why Certifications? Because Expertise Deserves Recognition!

You might be a fantastic tester or an aspiring software quality pro, but how do you prove it? Certifications act like power-ups for your career. They:

āœ… Validate Your Skills – Employers trust certifications as proof that you've mastered industry-recognized standards.
āœ… Enhance Your Resume – Stand out in a competitive job market with credentials that highlight your expertise.
āœ… Boost Your Confidence – Knowing you have proven, industry-backed skills strengthens your professional credibility.

ISTQBĀ® – The Global Standard in Software Testing Certifications

When it comes to software testing certification, there's one name that rules them all: ISTQBĀ® (International Software Testing Qualifications Board). Whether you're a newbie or a seasoned tester, there's a certification for you:

šŸŽÆ ISTQBĀ® Certified Tester Foundation Level (CTFL) – The perfect starting point for your journey in software testing. Learn the fundamentals and build a strong foundation!
šŸš€ ISTQBĀ® Agile Tester – Agile is everywhere, and ISTQBĀ® certifications in Agile prove you can test like a pro in fast-paced development teams:

  • Agile Technical Tester (CT-ATT) – Perfect for Agile teams, covering technical skills like TDD and CI/CD.
  • Agile Test Leadership at Scale (CT-ATLaS) – Focuses on scaling Agile testing leadership across teams.

šŸ† ISTQBĀ® Advanced & Expert Levels – Take your career to the next level with specializations in test automation, test management, security testing, and more!

šŸ” ISTQBĀ® Specialist Certifications – Broaden your expertise with targeted certifications in areas like Mobile Application Testing, Usability Testing, Performance Testing, and AI Testing!

The best part? These certifications are recognized worldwide, giving you an edge no matter where you work. With over 1 million exams taken across 130+ countries, ISTQBĀ® certifications have become the industry benchmark for software testing excellence.

How Does ISTQBĀ® Certification Benefit You?

Certifications are more than just a title. They increase your earning potential and improve your job security. In fact, studies show that certified professionals earn higher salaries compared to their non-certified counterparts. Plus, in a competitive job market, having a certification could be the deciding factor between you and another candidate.

Here's how ISTQBĀ® certification can supercharge your career:

  • Career Growth: Certifications open doors to promotions, leadership roles, and exciting job opportunities.
  • Industry Recognition: Demonstrate to hiring managers and peers that you are committed to continuous learning and excellence in your craft.
  • Networking Opportunities: Become part of an elite group of certified professionals and connect with industry experts.
  • Competitive Edge: Differentiate yourself from other testers who rely solely on experience.

Looking to add even more weight to your testing expertise? The A4Q Practical Tester certification, now officially endorsed by ISTQBĀ®, is your go-to choice! Unlike traditional theory-based exams, this certification is all about hands-on experience, because real-world problems demand real-world solutions.

šŸ”¹ Learn by Doing – Dive into practical scenarios, case studies, and hands-on exercises designed to hone your critical thinking skills.
šŸ”¹ Bridge the Gap – Take your theoretical knowledge and turn it into effective, real-world testing strategies.
šŸ”¹ Boost Your Employability – Employers value testers who don't just know their craft but can also apply it under real-life conditions.

Pairing your ISTQBĀ® CTFL certification with an ISTQBĀ® Add-On Practical Tester certification makes you a well-rounded professional, proving you've got both the knowledge and the skills to back it up.

How to Get Certified? Meet iSQI – Your Global Exam Provider!

So, where do you go when you're ready to get certified? That's where iSQI comes in! As a global authorized exam provider, iSQI makes it easy for you to:

šŸ–„ļø Take your exam online from the comfort of your home.
šŸŒŽ Access exams in multiple languages across different regions, because software quality is a global language.

Your Next Move? Get Certified & Stand Out!

If you want to level up in your career, professional certifications aren't just an option; they're a game-changer. So, whether you're just starting out or aiming for that next promotion, getting certified is one of the smartest moves you can make.

Here's what you can do next:
āœ… Research the best ISTQBĀ® certification for your career goals.
āœ… Visit iSQI's website to find out how to register for an exam.
āœ… Start studying and preparing for your certification; many online resources, practice exams, and our special ISTQBĀ® exam preparation platform are available to help you succeed.
āœ… Take the exam and showcase your new achievement!

šŸ’” Ready to take the leap? Check out iSQI's certification options and start your journey today!

Your skills deserve recognition. Your career deserves growth. Your future deserves the best. Get certified, and unlock new opportunities today! Success is yours.

Author

iSQI Group

iSQI are exhibitors in this year's EuroSTAR Conference EXPO. Join us in Edinburgh, 3-6 June 2025.

Filed Under: Quality Assurance Tagged With: software testing tools

Principles Drive Trust in AI

May 14, 2025 by Aishling Warde

The pace at which artificial intelligence (AI) is being incorporated into software testing products and services creates immense ethical and technological challenges for an IT industry that's so far out in front of regulation, the two don't even seem to be playing the same sport.

It's difficult to keep up with the shifting sands of AI in testing right now, as vendors search for a viable product to sell. Most testing clients I speak to these days haven't begun incorporating an AI element into their test approach, and frankly, the distorted signal coming from the testing business hasn't helped. What I'm hearing from clients are big concerns around data privacy and security, transparency on models and good evidence, and the ethical issues of using AI in testing.

I've spent a good part of my public career in testing talking about risk, how to communicate it to leadership, and what good testing contributes to that process in helping identify threats to your business. So, I'm not here to tell you "No" to AI in testing, but to talk about how KPMG is trying to manage through the current mania and what we think are the big rocks we need to move to get there with care and at pace.

KPMG AI Trusted Framework

As AI continues to transform the world in which we live – impacting many aspects of everyday life, business, and society – KPMG has taken the position to help organizations utilise the transformative power of AI, including its ethical and responsible use.

We've recognized that adopting AI can introduce complexity and risks that should be addressed clearly and responsibly. We are also committed to upholding ethical standards for AI solutions that align with our values and professional standards, and that foster the trust of people, communities, and regulators.

In order to achieve this, we've developed the KPMG Trusted AI model as our strategic approach and framework for designing, building, deploying, and using AI strategies and solutions in a responsible and ethical manner, so we can accelerate value with confidence.

Our approach to Trusted AI also includes foundational principles that guide our aspirations in this space, demonstrating our commitment to using it responsibly and ethically:

Values-driven

We implement AI as guided by our Values. They are our differentiator and shape a culture that is open, inclusive and operates to the highest ethical standards. Our Values inform our day-to-day behaviours and help us navigate emerging opportunities and challenges.

Human-centric

We prioritize human impact as we deploy AI and recognize the needs of our clients and our people. We are embracing this technology to empower and augment human capabilities – to unleash creativity and improve productivity in a way that allows people to reimagine how they spend their days.

Trustworthy

We will adhere to our principles and the ethical pillars that guide how and why we use AI across its lifecycle. We will strive to ensure our data acquisition, governance, and usage practices uphold ethical standards and comply with applicable privacy and data protection regulations, as well as any confidentiality requirements.

KPMG GenAI Testing Framework

The KPMG UK Quality Engineering and Testing practice has adopted the Trusted AI principles as an underpinning model for our work in AI and testing. We are focusing our initial GenAI Testing Framework on specific activities that extend the reach of testers while allowing risk management to be insight-led and governance to be human-centric. This is accomplished by incorporating our principles into the architecture, including:

Tester Centric Design

The web-hosted front-end is where testers can securely upload documents, manage prompts, and access AI-generated test assets to use or modify. Testers can create and modify rules, allowing consistent application and increased control of models and responses.

Transparent Orchestration

The orchestration layer sits at the heart of the system and manages the flow of data between different components, ensuring seamless execution while providing transparency on the models being deployed.

Secure Services

The Knowledgebase contains the fundamental services powering the AI solution, storing input documents, test assets, and reporting data, as well as domain- and context-specific information you design.
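
To make the orchestration idea concrete, here is a minimal, hypothetical Python sketch of a transparent orchestration layer: it routes a tester's prompt to a named model and writes every call to an audit log, so reviewers can always see which model produced which asset. All class, function, and model names here are illustrative assumptions, not KPMG's actual implementation.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    """Route tester prompts to a model and record which model produced
    each asset, keeping the deployment transparent and auditable."""
    models: dict[str, Callable[[str], str]]      # model name -> generate function
    audit_log: list[dict] = field(default_factory=list)

    def generate_test_asset(self, prompt: str, model_name: str) -> str:
        response = self.models[model_name](prompt)
        # Every call is logged: which model saw which prompt, for later review.
        self.audit_log.append({"model": model_name, "prompt": prompt})
        return response

# Usage with a stub standing in for a real LLM endpoint.
orchestrator = Orchestrator(models={"stub-model": lambda p: f"Test case for: {p}"})
print(orchestrator.generate_test_asset("login rejects bad passwords", "stub-model"))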

Software testing is essentially a function of risk management, and integrating AI into your test approach presents multiple challenges for your test team as well as implications for programme governance. Model accuracy, intellectual property rights and IP leaks, and data quality issues with accuracy, drift, or loss are all real internal risks to your operations and to ensuring you are testing the right things at the right time. Externally, your governance can run into copyright infringements or privacy violations, both of which have implications for your brand, let alone the potential harm done to vulnerable communities through model bias. All of this makes using an ethical framework for designing and implementing AI in testing even more important.

There remains a great deal to be worked out regarding AI in software testing, and we are just at the discovery phase of what it can – and should – do for system quality. Whatever the future holds, your strategy has to be grounded in principles and values that reflect an ethical approach, including putting the tester at the centre of the process, ensuring transparency of models and data, and making safety and security your primary objective.

Keith Klain

Keith Klain is a Director of Quality Engineering and Testing at KPMG UK and a frequent writer and speaker about the software testing industry.

He leads software quality, automation, process improvement, risk management, and digital transformation initiatives for retail banking and capital markets clients. With extensive international experience in software quality management and testing, he has built and managed teams for global financial services and consulting firms in the US, UK, and Asia Pacific.

He is passionate about increasing the value of technology by aligning test strategies to business objectives through process efficiency, effective reporting, and better software testing. He is also an advocate for workforce development and social impact, having designed and delivered technology training curriculum for non-profits to create technology delivery centres in disadvantaged communities. He has served as the Executive Vice President of the Association for Software Testing and has received multiple awards and recognition for his contributions to the software testing profession and diversity in tech.

KPMG are exhibitors in this year's EuroSTAR Conference EXPO. Join us in Edinburgh, 3-6 June 2025.

Filed Under: Quality Assurance Tagged With: EuroSTAR Conference

How Your Team Can Achieve Sustainable Test Growth: Balancing Speed, Cost, and Quality in the AI Era

May 7, 2025 by Aishling Warde

The promise of AI-driven development is undeniable – faster code, quicker releases, and unprecedented innovation. But here's the catch: AI isn't perfect, and the code it generates could be riddled with hidden flaws. In fact, within three years, over a third of all code will be AI-generated, and much of it may introduce more bugs into production than ever before.

Digital transformation isn't just a buzzword anymore – it's a $3.9 trillion race to stay competitive, with 85% of organizations adopting cloud-first strategies. As release cycles accelerate and budgets tighten, how do you ensure quality doesn't fall by the wayside?

For years, the rule of thumb has been "pick two – speed, cost, or quality." Now, that luxury is gone. In this blog, we'll dive into the growing pressure to balance all three, and why outdated testing processes could make or break your transformation efforts.

Testing Bottlenecks in the Era of Digital Transformation

Despite advancements in test automation, testing remains one of the biggest bottlenecks to digital transformation. Surprisingly, 80% of tests are still conducted manually across the industry. While automation promises greater efficiency, many test automation projects are started but never completed, and their ROI often falls short.

We recently surveyed SmartBear customers who do not use automation tools. The three most common barriers to automation adoption were:

  1. Lack of Time – Teams prioritize releasing the next version, leaving little time to develop automated tests. Automation efforts consistently lag, typically falling two sprints behind development.
  2. Lack of Expertise – Automation tools often require technical skills that teams may not possess. Record-and-playback solutions have failed to meet expectations, leading many teams to abandon automation altogether.
  3. Tool Overload – With hundreds of automation tools available, selecting the right one is overwhelming. Many teams revert to manual testing simply because it's easier than navigating the complex tool landscape.

These challenges create friction and prevent teams from scaling their testing processes, slowing down release cycles and increasing the risk of bugs in production.

The High Cost of Delayed Bug Detection

The cost of bugs discovered in production far exceeds the cost of those caught earlier in development. A striking example is the recent CrowdStrike incident, which resulted in $5.4 billion in losses due to widespread system failures. The actual fix took only an hour and a half, but the repercussions were far-reaching.

On a broader scale, the numbers are staggering. Each year, 100 billion lines of code are added to software systems, with an estimated 25 bugs per thousand lines. This results in roughly 2.5 billion bugs leaking into production annually. The cost to fix these issues post-release is exponentially higher than addressing them during development.

Strategies for Sustainable Test Growth

To address these challenges, organizations must adopt a sustainable approach to testing – one that pushes defect detection earlier in the process (shift left) while improving monitoring and feedback in production environments (shift right).

Shift Left – Catching Bugs Early

The earlier a bug is found, the cheaper it is to fix. Shift-left practices encourage testing earlier in the development lifecycle, reducing the risk of costly production issues. However, developers cannot be expected to take on all testing responsibilities. While developers are doing more testing than ever, end-to-end and UI testing require specialized skills. Overburdening developers with testing tasks detracts from their primary focus – writing application code.

Shift Right – Monitoring Production for Faster Feedback

By extending testing into production, teams can monitor for errors, track performance, and gather valuable insights to refine pre-production testing. Effective shift-right strategies rely on robust production monitoring systems that capture issues in real time and relay information back to development teams. This feedback loop ensures continuous improvement, reducing the cost and complexity of addressing bugs discovered in the field.
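
As a rough illustration of that loop, the hypothetical Python sketch below polls a production health source and relays any captured errors back to the development team; the function names and the print-based relay are assumptions standing in for a real monitoring stack and its ticketing or chat integration.

import json
import time
from typing import Callable

def shift_right_monitor(
    get_health: Callable[[], dict],
    relay_to_devs: Callable[[str], None],
    interval_s: float = 60.0,
    checks: int = 3,
) -> None:
    """Poll production health and relay errors back to development,
    closing the shift-right feedback loop."""
    for _ in range(checks):
        status = get_health()
        if status.get("errors"):
            relay_to_devs(json.dumps(status["errors"]))  # e.g. open a ticket
        time.sleep(interval_s)

# Stubbed usage: a fake health check and a print-based relay.
shift_right_monitor(
    get_health=lambda: {"errors": [{"route": "/checkout", "count": 3}]},
    relay_to_devs=print,
    interval_s=0.0,
    checks=1,
)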

Tying It All Together

Combining these strategies creates a continuous quality loop that not only reduces the number of bugs slipping into production but also significantly lowers the cost of fixing them. By catching defects earlier and refining tests through production insights, businesses can avoid the ballooning costs associated with late-stage bug fixes. This holistic approach improves release velocity, enhances software reliability, and ultimately delivers a higher return on investment (ROI) by preventing revenue loss caused by critical failures.

Sustainable test growth isn't just about preventing issues – it's about driving long-term savings and maximizing the value of every development hour spent.

The SmartBear Approach to Testing

At SmartBear, we understand the delicate balance between speed, cost, and quality. Our holistic testing strategy focuses on continuous quality at every stage of development. By leveraging SmartBear API Hub, Test Hub, and Insight Hub, teams gain end-to-end visibility across the software development lifecycle, ensuring they can build, test, and release with confidence.

The Test Hub allows teams to manage, automate, and execute a variety of tests – from functional and UI tests to API and load tests – all within a single platform. This centralized approach streamlines workflows and reduces the overhead associated with managing multiple testing tools.

AI-Powered Enhancements for Modern Testing

SmartBear’s roadmap is filled with AI-driven features designed to accelerate test growth and simplify automation. Some of the latest innovations include:

  • Natural Language-Based UI Test Automation – Convert manual tests into automated scripts for web and mobile apps using simple natural language prompts, reducing the need for technical expertise.
  • Test Case Generation from Requirements – Instantly generate manual test cases directly from user stories and requirements, speeding up test creation and ensuring coverage aligns with business needs.
  • Test Data Generation – Create synthetic test data on demand through contextual prompts, eliminating the delays associated with test environment setup.
  • Visual Testing – Detect visual defects across web applications at scale, ensuring consistent performance across browsers and devices.
  • Contract Test Generation – Produce contract tests directly from OpenAPI specs, client code, or HTTP request/response pairs, ensuring robust API coverage (the sketch after this list illustrates the idea).
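
To show what a contract test checks, here is a small, hand-rolled Python sketch that validates a live response against the schema a spec declares, using the jsonschema library. It illustrates the idea only; it is not SmartBear's generator, and the endpoint schema is an invented example.

# pip install jsonschema
from jsonschema import ValidationError, validate

# Schema a spec might declare for the response of GET /users/{id}.
user_response_schema = {
    "type": "object",
    "required": ["id", "email"],
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
    },
}

def contract_test(actual_response: dict) -> bool:
    """Pass only if the real API response honours the declared contract."""
    try:
        validate(instance=actual_response, schema=user_response_schema)
        return True
    except ValidationError as err:
        print(f"Contract violation: {err.message}")
        return False

assert contract_test({"id": 42, "email": "a@example.com"})
assert not contract_test({"id": "42"})  # wrong type, missing field -> fails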

By embedding AI throughout the testing process, SmartBear empowers teams to automate faster, identify defects earlier, and minimize production risks without overburdening development teams. These AI-driven capabilities are already delivering tangible results for organizations:

  • "Previously, locator-based plug-ins required painful updates as programs evolved. Zephyr Scale's AI automation eliminates that issue, interpreting commands like 'click on magnifying glass,' cutting regression time from 90 to 20 minutes, improving consistency, increasing coverage, and saving time and money." – Test Analyst at a Leading Automotive Services Provider
  • "Adopting no-code automation cut our manual regression time by about 60%, allowing QA to focus on complex scenarios. Non-technical team members now create tests aligned with business goals, increasing coverage, enhancing collaboration, reducing post-release defects, and fostering greater ownership." – Quality Assurance Analyst at a Global Software Company

Future-Proofing Software Quality in the AI Era

As AI continues to reshape the software development landscape, organizations stand at a critical crossroads. The potential for faster development is undeniable, but without the right testing strategies in place, the influx of AI-generated code could unravel hard-won gains. Sustainable test growth isn't just a technical goal – it's a business necessity for navigating the complexities of digital transformation.

Shifting left to catch bugs early, embedding robust production monitoring, and integrating AI-driven automation can help businesses break free from the outdated "pick two" mentality. The organizations that succeed in balancing speed, cost, and quality will lead the next wave of innovation. Those that don't risk falling behind, grappling with costly production bugs, delayed releases, and customer dissatisfaction.

SmartBear Hubs provide the framework to streamline testing across the entire development lifecycle, enabling teams to release with confidence, minimize risk, and scale at the pace digital transformation demands. But the time to act is now.

If you're ready to stop firefighting production issues and start building a proactive, AI-empowered testing strategy, SmartBear can help. Get in touch today and discover how our end-to-end solutions can future-proof your development pipeline and deliver sustainable test growth.

Author

Prashant Mohan

Prashant Mohan is a VP of Product Management at SmartBear. He is responsible for driving the vision and strategy of products that help developers and testers deliver quality applications at scale. Prashant is an engineer with a business degree, and has worked across several industries including B2B tech, Fintech and HealthIT.

SmartBear are Gold Sponsors in this year's EuroSTAR Conference EXPO. Join us in Edinburgh, 3-6 June 2025.

Filed Under: Quality Assurance Tagged With: 2025, EuroSTAR Conference

The Hidden Crisis in Software Quality: Why Unit Tests Aren't Enough (And What We're Learning From 100+ Companies)

April 30, 2025 by Aishling Warde

Traditional quality assurance is failing. Despite companies investing millions in testing infrastructure and achieving impressive unit test coverage – often exceeding 90% – critical production issues persist. Why? We've been solving the wrong problem.

The Evolution of a Crisis

Ten years ago, unit testing seemed like the silver bullet. Companies built extensive test suites, hired specialized QA teams, and celebrated high coverage metrics. With tools like GitHub Copilot, achieving near-100% unit test coverage is easier than ever. Yet paradoxically, as test coverage increased, so did production incidents.

The Real-World Testing Gap

Here's what we discovered at SaolaAI after analyzing over 100 companies' testing practices:

  1. Unit tests create a false sense of security. Teams mock dependencies and test isolated functions, but real-world failures occur at system boundaries (see the sketch after this list).
  2. Microservice architectures exponentially increase complexity. A single user action might traverse 20+ services, creating millions of potential failure combinations.
  3. The "No QA" movement, while promoting developer ownership, has inadvertently reduced comprehensive testing.
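
A tiny Python sketch (illustrative, not taken from the SaolaAI analysis) shows how the first point plays out: the unit test passes against a tidy mock, while the same code breaks at the system boundary when the real dependency behaves differently.

from unittest import mock

def get_discount(price_service, sku: str) -> float:
    # Assumes the price comes back as a number; the live service may not agree.
    return price_service.get_price(sku) * 0.9

def test_discount_with_mock():
    service = mock.Mock()
    service.get_price.return_value = 20.0          # the mock's tidy assumption
    assert get_discount(service, "SKU-1") == 18.0  # green build

test_discount_with_mock()
# At the boundary, the same code raises TypeError if the real service
# returns the string "20.00" instead of a float - a failure no mock caught.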

The E2E Testing Paradox

End-to-end testing is essential for verifying that complex systems function seamlessly, yet companies struggle with major obstacles. Setting up E2E environments can take months, while maintaining test data often turns into a full-time job. Integrating these tests into CI/CD pipelines requires specialized expertise, adding another layer of complexity.

On the technical side, flakiness remains a persistent issue, with failure rates reaching 30-40%. Browser updates frequently break test suites, while asynchronous operations and timing inconsistencies introduce further instability. These challenges make E2E testing notoriously difficult to manage.
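
One common mitigation for timing-related flakiness is to replace fixed sleeps with condition polling. The sketch below is a generic, framework-agnostic Python helper written under that assumption; real E2E tools ship their own wait primitives.

import time
from typing import Callable

def wait_until(condition: Callable[[], bool],
               timeout_s: float = 10.0, poll_s: float = 0.25) -> None:
    """Poll until the condition holds rather than sleeping a fixed amount,
    tolerating timing variance without padding every test with delays."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(poll_s)
    raise TimeoutError("condition not met before timeout")

# Usage: wait for an async result instead of a blind time.sleep(5).
orders = ["order-1"]  # stands in for data written by a background job
wait_until(lambda: "order-1" in orders)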

Beyond the technical barriers, cultural resistance slows adoption. Developers often see E2E testing as solely QA's responsibility, while product teams prioritize feature development over test reliability. When test suites fail, they are frequently ignored or abandoned rather than fixed, leading to gaps in test coverage and overall software quality.

The AI-Driven Future

Fortunately, modern solutions are emerging that leverage AI to revolutionize testing: from automated test generation based on user behavior and self-healing tests that adapt to UI changes, to intelligent test selection that reduces runtime. The future looks promising.

The Way Forward

Quality isn't just about test coverage – it's about understanding how systems behave in production:

  1. Shift from code coverage to interaction coverage
  2. Integrate observability with testing
  3. Use ML to predict failure scenarios
  4. Automate maintenance of test suites

For too long, we've treated quality as a coding problem. It's time to recognize it as a data problem. By combining AI, machine learning, and traditional testing approaches, we can finally bridge the gap between unit test success and production reliability.

The next evolution in software quality isn't about writing more tests; it's about making testing intelligent enough to match the complexity of modern applications.

This is the challenge that inspired SaolaAI: making quality as sophisticated as the systems we're building. The question isn't whether AI will transform testing, but how quickly companies will adapt to this new paradigm.

Author

Arkady Fukzon

CEO and Co-Founder, SaolaAI

Saola are exhibitors in this year's EuroSTAR Conference EXPO. Join us in Edinburgh, 3-6 June 2025.

Filed Under: Quality Assurance Tagged With: EuroSTAR Conference

Improving Test Planning by Leveraging Quality Intelligence

June 3, 2024 by Lauren Payne


Test planning suffers due to poor requirements

Test planning is a critical part of an in-depth test strategy and includes the definition of the test objectives, scope, and the means and the schedule for achieving them, among other things.

Determining the extent of test coverage and prioritizing test cases are essential elements of a robust testing strategy, serving a crucial role in assuring the comprehensive validation of the software under test.

Unfortunately, testing teams face challenges due to ambiguous, changing, or incomplete requirements, making it difficult to establish a robust foundation for the test planning process.

This creates a cascading impact on the prioritization of testing efforts, resulting in resources being allocated to less critical test scenarios, potentially overlooking important issues, and ultimately leading to inadequate test coverage.

Approaches to address gaps in test coverage

As we discussed before, coverage gaps commonly occur when requirements are misunderstood, specifications are poorly defined or ambiguous, and changes in the software are not appropriately incorporated into the test planning.

To tackle these challenges, test teams should embrace formal test design methods to guarantee comprehensive coverage of all aspects outlined in the requirements. Implementing a traceability matrix, linking requirements to test cases, further ensures comprehensive coverage.
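
In its simplest form, a traceability matrix can be sketched as a mapping from requirements to the test cases that cover them; any requirement with an empty entry is a coverage gap. The requirement and test IDs in this Python sketch are invented for illustration.

# Requirement -> test cases covering it; an empty list is a coverage gap.
traceability = {
    "REQ-001 user can log in":         ["TC-101", "TC-102"],
    "REQ-002 password reset by email": ["TC-201"],
    "REQ-003 account lockout":         [],  # gap: nothing covers this yet
}

gaps = [req for req, tests in traceability.items() if not tests]
coverage = 1 - len(gaps) / len(traceability)
print(f"Requirement coverage: {coverage:.0%}; uncovered: {gaps}")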

Additionally, encouraging collaboration among development, testing, and business teams early in the process to clarify requirements helps mitigate the risks associated with poor requirements.

Yet, coverage gaps can arise when the requirements fail to adequately capture real-world user behaviors and preferences. Anticipating and comprehensively accounting for all user interactions and behaviors in written requirements proves to be a challenging task for product owners and business analysts.

Finding the needle in multiple disconnected tools

To boost test coverage and align the test prioritization with real-world usage, testing teams can analyze logs from both production and test environments, uncovering valuable insights and quality analytics.

Testing teams need to implement tools and processes to actively monitor, measure, and analyze user behavior when interacting with the live application. Additionally, it's essential to observe how tests interact with the application during test runs to reveal disparities between how the application is used by real-world users and how it is tested.

In the market, tools like Google Analytics, Amplitude, SmartLook, Datadog, and others assist in collecting and analyzing telemetry from any environment. Designed with different purposes in mind, these tools serve various teams, such as Product and Marketing Analytics, Observability, and Application Performance Management. Despite their versatility, they may not be the optimal fit for testing purposes.

Considering this, a major challenge is that these tools aren't designed to meet the specific needs of testing teams. This limits testing teams' ability to get the most out of them: they cannot easily see how the software is used both in the real world and during test runs, or extract meaning from that data.

Enhancing Test Planning with Quality Intelligence

Gravity is a unified platform designed to help testing teams monitor and leverage insights from both production and testing environments, enhancing the efficiency of the test strategy. It consolidates key data and insights into a single solution for easy access and analysis.

Its primary function is to produce "Quality Intelligence" by processing the ingested data through machine learning algorithms. This involves translating raw data into meaningful insights using techniques such as pattern recognition, trend and correlation analysis, anomaly and outlier detection, and more.

Gravity’s ability to monitor production and testing environments allows it to conduct a comprehensive test gap analysis. By comparing the paths taken by real user interactions in live production with the tests executed in testing environments, Gravity generates insights to enable testing teams to spot gaps in coverage, identify features that are either over-tested or under-tested, and recognize redundant testing efforts in less critical areas.
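
Conceptually, this comparison can be reduced to a set difference between observed user journeys and tested journeys, as in the hypothetical Python sketch below; the journey data is invented, and Gravity's actual analysis is more sophisticated than this.

# User paths observed in production vs paths exercised by tests.
production_paths = {
    ("home", "search", "product", "checkout"),
    ("home", "account", "orders"),
    ("home", "search", "product"),
}
tested_paths = {
    ("home", "search", "product", "checkout"),
    ("home", "login", "logout"),  # tested, but rarely seen in production
}

coverage_gaps = production_paths - tested_paths  # real journeys nobody tests
over_testing = tested_paths - production_paths   # tests with little user impact
print("Untested user journeys:", coverage_gaps)
print("Possibly redundant tests:", over_testing)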

Gravity utilizes pattern recognition and AI (Artificial Intelligence) to automatically generate test cases for areas lacking test coverage, whether they are manual tests or automated scripts for test automation tools like Cypress, Playwright, and others. This feature not only reduces the burden of test case creation but also leads to a decrease in maintenance overhead.

Since it relies on real usage data collected from production environments, this enables data-driven test case prioritization, focusing test coverage on high-impact areas that directly affect the end user experience. By bridging assumptions from requirements with real-world usage insights, Gravity helps in optimizing test planning for improved efficiency and agility.

Conclusion

Understanding user behaviors in production not only elevates test coverage and prioritization by focusing on genuine user experiences but also acts as a powerful antidote to the limitations of the traditional requirement-based testing approaches.

It ensures that testing efforts are not confined to the rigid boundaries of documented requirements but rather extend to the dynamic and evolving landscape of user interactions, contributing to a more comprehensive and user-centric testing paradigm.

Gravity represents a remarkable advancement in the field of Quality Engineering, empowered by cutting-edge AI (Artificial Intelligence), with the aim of enabling testing teams to deliver higher-quality software products.

Author

Cristiano Caetano, Head of Growth at Smartesting

A software testing authority with two decades of expertise in the field, Cristiano is a Brazilian native who has called London home for the past six years. He is the proud founder of Zephyr Scale, the leading test management application in the Atlassian ecosystem. Over the last ten years, his role has been pivotal in guiding testing companies to build and launch innovative testing tools. He is currently Head of Growth at Smartesting, a testing company committed to the development of AI-powered testing tools.

Smartesting is an exhibitor at EuroSTAR 2024, join us in Stockholm.

Filed Under: Quality Assurance Tagged With: EuroSTAR Conference, Expo

Metrics In Quality Assurance: A Practical Starting Point

May 6, 2024 by Lauren Payne

Have you heard any of the following statements from within your team or anywhere else in your organization?

  • "The feedback loop is too long."
  • "I'm not sure what tests we're running."
  • "I don't know where our test results are."
  • "I don't understand our test results."

These kinds of questions typically mean that you've successfully adopted CI/CD ways of working within development, and automation is freeing up your time for further improvements. But how do you answer these questions before they become real issues and people start to lose interest?

Luckily, the answer is within your reach! You need to define relevant metrics and make them visible to the whole organization, and especially to your team.

What metrics should I have?

We get this question a lot. Unfortunately, the answer is the infamous "it depends." It's better to show something than nothing, so simply start somewhere.

Once your organization is capable of collecting, storing, and presenting data, you typically begin to realize what metrics are needed. "Well, that's not really helpful," you might be thinking. That's why we want to present an interesting article we came across. In it, the authors present the following metrics:

  1. User sentiment
  2. Defects found in production
  3. Test case coverage
  4. Defects across sprints
  5. Committed vs. delivered stories

When looking at these, we noticed some overlap with DORA metrics.

Deployment frequency

This should correlate with high "(1) User sentiment." In fact, it's a precondition before you can even observe it.

Lead time for changes

This tells you how quickly you can go from an idea all the way to production, which is the same as "(5) Committed vs. delivered stories."

Change fail rate

This tells you how many defects you have found and how long it took you to fix them; in other words, "(3) Test case coverage" further enables you to analyze the root cause of your change fail rate.

"(4) Defects across sprints" is a more fine-grained example of the general fail rate.

Time to restore services

This tells you how quickly you can resolve production incidents, which is the next question after you've found out "(2) Defects found in production."

Given the overlap and the fact that DORA metrics have been proven to work, we consider these a good place to start.
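
To make the DORA metrics tangible, here is a small Python sketch that computes change fail rate and average lead time for changes from a handful of invented deployment records; the record format is an assumption, not any particular tool's export.

from datetime import datetime, timedelta

deployments = [  # illustrative records a CI/CD tool could export
    {"at": datetime(2024, 5, 1), "committed": datetime(2024, 4, 29), "failed": False},
    {"at": datetime(2024, 5, 3), "committed": datetime(2024, 5, 2), "failed": True},
    {"at": datetime(2024, 5, 6), "committed": datetime(2024, 5, 5), "failed": False},
]

change_fail_rate = sum(d["failed"] for d in deployments) / len(deployments)
lead_times = [d["at"] - d["committed"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

print(f"Change fail rate: {change_fail_rate:.0%}")        # 33%
print(f"Average lead time for changes: {avg_lead_time}")  # 1 day, 8:00:00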

Where to start?

Now that we've defined several reasonable metrics, how can we collect them?

At Eficode, we believe in automation and that the data in reports and dashboards should be as real-time as possible. So, a few years ago, we started a couple of open source projects to support these kinds of initiatives:

  • InfluxDB plugin for Jenkins
  • Oxygen

In our customer cases, Jenkins CI has been the most used CI/CD solution, and we've already had a successful proof of concept doing metrics with an open source time-series database called InfluxDB in combination with another open source tool, Grafana, which is used for building dashboards.

Using open source solutions might need a bit of elbow grease, but they are the cheapest option by virtue of being entirely free. This helps you get going faster – remember, you want to start seeing data so you can evolve your metrics further.

Example of setup: Jenkins CI pushes build and test metrics to InfluxDB, and Grafana visualizes them as dashboards.
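
In practice the Jenkins plugin pushes build and test data automatically, but the idea can be sketched by hand. The following Python snippet writes a test pass ratio to InfluxDB with the official influxdb-client package; the URL, token, org, and bucket values are placeholders you would replace with your own.

# pip install influxdb-client
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086",  # placeholder values
                        token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One data point per CI run: suite name as a tag, counts as fields.
point = (
    Point("test_results")
    .tag("suite", "smoke")
    .field("passed", 42)
    .field("failed", 3)
)
write_api.write(bucket="ci-metrics", record=point)
# Grafana can then chart passed / (passed + failed) over time per suite.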

How to proceed once we have data?

After we've set up the infrastructure to start gathering data and visualizing it, we typically create a few graphs to answer some of the most asked questions. For example: "What is the pass ratio for the tests running in continuous integration (i.e., change fail rate or defects across the sprint, as mentioned earlier)?"

The data comes directly from your CI/CD tool, so it's as up-to-date as it can get. And if your data is visible to everyone, your team will have a better chance of comprehending the current situation.

The next step is to start thinking with your stakeholders about the product that you and your team are building. Not all data is equally important to everyone. For example, managers want to see the overall pass ratio over a month, whereas developers want the latest results and to know whether the environment is passing smoke tests.

Luckily, Grafana and other solutions support multiple dashboards. This way, it's easy to visualize separate metrics for management, team leads, QA teams, etc.

We recommend the practice of providing essential data to each stakeholder while allowing the option to see all of the data when needed.

We've often seen that once you start showing current data, more ideas emerge about what should be tackled next. Most often, this leads teams to start making decisions based on facts rather than pulling reasons out of thin air.

Why not increase your knowledge further by learning about building quality in your software?

Author


Joonas Jauhiainen, DevOps Lead

Joonas is a DevOps lead with experience in telecom, banking, insurance, and manufacturing, among other industries. His hobbies include investigating IT devices and developing games and other software projects, not to mention underwater rugby!

Eficode is an Exhibitor at EuroSTAR 2024, join us in Stockholm.

Filed Under: Development, EuroSTAR Expo, Quality Assurance Tagged With: 2024, EuroSTAR Conference

Why Crowdtesting Should Be an Imperative Pillar of Quality Assurance

August 2, 2023 by Lauren Payne

Thanks to MSG for providing us with this blog post.

Users are looking for products that inspire – or at least don't bother them

Future generations – all of them digital natives – will no longer enter their business relationships as traditional customers. The changed demands and the constant transformation through digitalization are turning customers into users. But where no human interaction can create trust, dispel doubts, and answer questions, the product alone is in the spotlight and must have the ability to convince in a very short time and with a reduced attention span.

Attractive, easy to use and – best of all – with a greater range of functions.

Constantly available and nearly unlimited offerings are no longer disruptive but common standards. This applies equally to products, services, and public offerings. So, whatever your offer is, you must make sure users find it attractive, easy to use, and with a suitable range of functions.

The users – not a homogeneous mass

Another challenge is to meet the different target groups and to create a digital infrastructure that covers their different needs equally. Generations Y and Z, which hold the purchasing power and demands of the future, expect modern forms of interaction, purchasing products and services fully digitally. This is the "everything is now" generation, no longer tied to long-term contracts and used to getting whatever they are looking for on demand.

The competition among web offerings, which compete without ties and with the promise of a "change of supplier in minutes", meets this need. The time span in which to inspire or disturb new users is accordingly very short. Not least because the tolerance for errors also decreases with the rising use of digital products. By now, most users have gained so much experience with apps and online products that they have a clear expectation of functions and usability. If these expectations are disappointed, they simply download the next app. And even if this is sometimes tied to opening an account, today this can be done quickly enough and with reasonable effort.

The subjective experience counts

As good as product design and functionality may be, the product experience is and remains subjective. Every product will always create a subjective use case for the user, and this must work to store a positive experience.
A subjective use case could be that a user carries out his transactions exclusively while commuting on a mobile device and expects, for instance, a banking app to be compatible with that device. The app should be so intuitive to use that external distractions do not disrupt the user flow, and ideally the data flow should adequately handle the switch from 3G/4G mobile networks to WLAN networks. If all this fits, the experience is consistently positive.

This in turn not only satisfies the individual user; providers also benefit, because an experience is always communicated to others.

Position yourself on the market through assured quality

By assessing the product quality, you may influence your positioning on the market towards an outstanding product experience. This requires the following to be ensured:

  • The smooth functionality of the product on the most popular devices in the market.
  • The provision of the appropriate range of functions with the right characteristics for the target group.
  • Covering as many subjective use cases as possible to avoid negative surprises after go-live.

While the first point can still be tested internally and in the laboratory, for example with emulated devices, as part of a verification, the other two points can only be tested as part of a validation.

Crowdtesting offers solutions

Crowdtesting is the validation of digital products involving your target group – remotely via the internet. Leaving this rather rigid definition behind, this method offers good tools to meet the three challenges of digital assurance. It allows positioning towards the upper right quadrant of digital excellence and thus can serve to stand out from the masses with an outstanding product.

Figure 1: The quadrants of digital excellence

Crowdtesting helps you to cover subjective use cases and perceptions in any phase of the life cycle. You get a direct insight into whether your target group feels heard, and you can adapt at any time. In addition, with the variety of devices and mindsets added to your testing process, you will be able to find functional and technical issues that wouldn't be uncovered in the lab. And if there are no functional problems, that's worth a pat on the back for your development team and builds confidence in your product.

Feedback will always be a part of this testing process, and even if the insights and "bugs" gathered in this process may not be fixed, they can be incorporated into the further development of the product. In the meantime, the results help customer support to prepare for possible enquiries and to create meaningful FAQ lists.

Conclusion – Crowdtesting is useful in any phase of a product's lifecycle

It gives a good insight into the technical and functional stability of your product and provides the opportunity to understand the (future) users from the beginning and develop with a focus on their added value. You don't have to wait for feedback from customers who may be disappointed once, never return to your site, and never use your app a second time.

Author

Johannes Widmann

Johannes Widmann has been working in the field of software quality and digital assurance for over 22 years. He has been a dedicated disciple of crowdtesting since 2011 and built up passbrains, one of the leading service providers for crowd-sourced quality assurance. Since January 2021, passbrains has been part of the msg group.

MSG is an EXPO Exhibitor at EuroSTAR 2023, join us in Antwerp

Filed Under: Quality Assurance, Uncategorized Tagged With: 2023, EuroSTAR Conference

Testing and QA Key to Cloud Migration Success

July 27, 2023 by Lauren Payne

Thanks to iOCO for providing us with this blog post.

In the global rush to go serverless and move to the cloud, many organisations neglect quality assurance and testing – an oversight that can seriously impair performance and increase organisational risk.

There are numerous reasons for this, but a key one is that cloud migrations are complex projects usually managed by infrastructure teams. Those tasked with driving them aren't always quality focused, and their views of what QA is might differ significantly from what QA should be.

Should the organisation neglect thorough testing as part of its application cloud migration plan, the smallest mistake left undiscovered could cause major failures down the line.

Lift and shift migration, the most popular approach and the second-largest cloud services sector by revenue, should not be seen as a simple copy-and-paste operation. Without a concerted effort, accurate planning and coordinated migration testing, a copy-and-paste approach could have devastating consequences for scalability, databases, and application and website performance.

Cloud Migration Testing and QA Priorities and Pillars

Thorough cloud migration testing uses quantifiable metrics to pinpoint and address potential performance issues, as well as exposing opportunities to improve performance and user experience when applications are in the cloud. However, teams should be cautious of scope creep at this stage – adding new features during migration could have unforeseen impacts.

Proper testing and QA rests on four key pillars – security, performance, functional, and integration testing.

Security testing must ensure that only authorised users access the cloud network, and must establish who has access to the data, as well as where, when, and why they access it. It must address how data is stored when idle, what the compliance requirements are, and how sensitive data is used, stored, or transported. Suitable procedures must also be put in place against Distributed Denial of Service (DDoS) attacks.

To realise the performance and scalability benefits of the cloud, testing must validate how systems perform under increased load. Unlike stress testing, performance testing verifies the end-to-end performance of the migrated system and whether response times fulfil service level agreements under various load levels.
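
As a rough sketch of that validation, the Python snippet below fires concurrent requests at a migrated endpoint and checks the 95th-percentile response time against a service level target. The URL, load level, and SLA threshold are assumptions to adapt; a real migration would typically use a dedicated load testing tool.

# pip install requests
import concurrent.futures
import time
import requests

URL = "https://staging.example.com/health"  # hypothetical migrated endpoint
SLA_SECONDS = 0.5                           # assumed response time target
REQUESTS = 50

def timed_request(_: int) -> float:
    start = time.monotonic()
    requests.get(URL, timeout=5)
    return time.monotonic() - start

# Fire concurrent requests and compare the 95th percentile to the SLA.
with concurrent.futures.ThreadPoolExecutor(max_workers=REQUESTS) as pool:
    latencies = sorted(pool.map(timed_request, range(REQUESTS)))
p95 = latencies[int(len(latencies) * 0.95) - 1]
print(f"p95 latency: {p95:.3f}s -> {'OK' if p95 <= SLA_SECONDS else 'SLA breach'}")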

Functional testing validates whether the application is ready to be migrated to the cloud, and whether it will perform according to the service level agreement. In complex applications, it is necessary to validate the end-to-end function of the whole application and its external services.

Even in basic applications where microservices architecture is not required, we see some sort of integration with third-party tools and services, making integration testing important. Therefore, cloud migration testing should identify and verify all the dependencies to ensure end-to-end functionality, and should include tests to verify that the new environment works with third-party services, and that the application configuration performs in a new environment.

With well-architected testing carried out, the organisation can rest assured that cloud migration risks have been mitigated and opportunities harnessed across security, operational excellence, reliability, performance efficiency, cost optimisation and sustainability.

A Testing and QA Framework for AWS Cloud Migration

As an AWS certified partner provider, iOCO has tailored our Well Tested Cloud Framework (WTCF) for cloud migration to align with the AWS Well Architected Framework, to ensure customer migrations to the AWS cloud are not only successful, but actually exceed expectations. iOCO resources will lead and manage execution from initial assessment, risk identification and recommendations; through a comprehensive set of checklists and guidelines across each of the four QA pillars; to full migration testing.

In tandem with the AWS Well Architected Framework, iOCO's WTCF is designed to fast-track AWS migration testing using clear and structured guides and processes, and customised options to suit the organisation's budget and needs.

Author

Reinier Van Dommelen, Principal Technical Consultant – Software Applications and Systems at iOCO

As a seasoned Technical Consultant with a wealth of experience, Renier Schuld has a proven track record of delivering successful IT projects for a diverse range of clients. He excels at bridging the gap between business and technical requirements by identifying and implementing systems solutions, guiding cross-functional teams through the project life-cycle, and ensuring successful product launches.

Renier's expertise in Testing is extensive and includes developing functional specification documents, designing test strategies, creating and executing test scripts to ensure accuracy and quality, developing project and organizational software test plans, providing user support, and building automated test frameworks. He has a passion for continuously improving processes and ensuring that quality is always top of mind throughout the project life-cycle.

iOCO is an EXPO Exhibitor at EuroSTAR 2023, join us in Antwerp

Filed Under: Quality Assurance Tagged With: 2023, EuroSTAR Conference
