
EuroSTAR Conference

Europe's Best Software Testing Conference


Testing and QA Key to Cloud Migration Success

July 27, 2023 by Lauren Payne

Thanks to iOCO for providing us with this blog post.

In the global rush to go serverless and in the cloud, many organisations neglect quality assurance and testing – an oversight that can seriously impair performance and increase organisational risk.

There are numerous reasons for this, but a key one is that cloud migrations are complex projects usually managed by infrastructure teams. Those tasked with driving them aren’t always quality focused, and their views of what QA is might differ significantly from what QA should be.
Should the organisation neglect thorough testing as part of its application cloud migration plan, even the smallest undiscovered mistake could cause major failures down the line.

Lift and shift migration, the most popular approach and the second-largest cloud services sector by revenue, should not be seen as a simple copy-and-paste operation. Without a concerted effort, accurate planning and coordinated migration testing, a copy-and-paste approach could have devastating consequences for scalability, databases, and application and website performance.

Cloud Migration Testing and QA Priorities and Pillars

Thorough cloud migration testing uses quantifiable metrics to pinpoint and address potential performance issues, as well as exposing opportunities to improve performance and user experience when applications are in the cloud. However, teams should be cautious of scope creep at this stage – adding new features during migration could have unforeseen impacts.

Proper testing and QA rests on four key pillars – security, performance, functional and integration testing.

Security testing must verify that only authorised users can access the cloud network, and establish who has access to data, as well as where, when and why they access it. It must address how data is stored at rest, what the compliance requirements are, and how sensitive data is used, stored or transported. Suitable safeguards must also be put in place against Distributed Denial of Service (DDoS) attacks.

To realise the performance and scalability benefits of the cloud, testing must validate how systems perform under increased load. Unlike stress testing, performance testing verifies the end-to-end performance of the migrated system and whether response times fulfil service level agreements under various load levels.
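
To make this concrete, here is a minimal, hypothetical sketch (in Python) of such a check: it fires a batch of concurrent requests at a placeholder endpoint of the migrated application and asserts that the 95th-percentile response time stays within an assumed SLA threshold. The endpoint, user counts and threshold are illustrative only and not tied to any particular tool.

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests  # third-party: pip install requests

    # Hypothetical values for illustration only.
    ENDPOINT = "https://app.example.com/api/health"
    CONCURRENT_USERS = 25
    REQUESTS_PER_USER = 20
    SLA_P95_SECONDS = 0.8

    def timed_request(_: int) -> float:
        """Issue one GET request and return its response time in seconds."""
        start = time.perf_counter()
        requests.get(ENDPOINT, timeout=10)
        return time.perf_counter() - start

    def run_load_check() -> None:
        total = CONCURRENT_USERS * REQUESTS_PER_USER
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            latencies = sorted(pool.map(timed_request, range(total)))
        p95 = latencies[int(len(latencies) * 0.95) - 1]
        print(f"median={statistics.median(latencies):.3f}s  p95={p95:.3f}s")
        assert p95 <= SLA_P95_SECONDS, "p95 response time exceeds the agreed SLA"

    if __name__ == "__main__":
        run_load_check()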

Functional testing validates whether the application is ready to be migrated to the cloud, and whether it will perform according to the service level agreement. In complex applications, it is necessary to validate the end-to-end function of the whole application and its external services.

Even in basic applications where microservices architecture is not required, we see some sort of integration with third-party tools and services, making integration testing important. Therefore, cloud migration testing should identify and verify all the dependencies to ensure end-to-end functionality, and should include tests to verify that the new environment works with third-party services, and that the application configuration performs in a new environment.
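
As a small, hedged illustration, a post-migration smoke check might simply confirm that every third-party dependency is reachable from the new environment and reports a healthy status. The service names and URLs in the Python sketch below are placeholders, not real endpoints.

    import requests  # third-party: pip install requests

    # Hypothetical third-party dependencies of the migrated application.
    DEPENDENCIES = {
        "payment gateway": "https://api.payments.example.com/health",
        "email provider": "https://api.mail.example.com/status",
        "geocoding service": "https://geo.example.com/ping",
    }

    def check_dependencies() -> None:
        """Confirm each external service is reachable from the new cloud environment."""
        failures = []
        for name, url in DEPENDENCIES.items():
            try:
                response = requests.get(url, timeout=5)
                if response.status_code != 200:
                    failures.append(f"{name}: HTTP {response.status_code}")
            except requests.RequestException as exc:
                failures.append(f"{name}: {exc}")
        if failures:
            raise SystemExit("Dependency check failed:\n" + "\n".join(failures))
        print("All third-party dependencies are reachable from the new environment.")

    if __name__ == "__main__":
        check_dependencies()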

With well-architected testing carried out, the organisation can rest assured that cloud migration risks have been mitigated and opportunities harnessed across security, operational excellence, reliability, performance efficiency, cost optimisation and sustainability.

A Testing and QA Framework for AWS Cloud Migration

As an AWS certified partner provider, iOCO has tailored our Well Tested Cloud Framework (WTCF) for cloud migration to align with the AWS Well Architected Framework, to ensure customer migrations to the AWS cloud are not only successful, but actually exceed expectations. iOCO resources will lead and manage execution from initial assessment, risk identification and recommendations; through a comprehensive set of checklists and guidelines across each of the four QA pillars; to full migration testing.

In tandem with the AWS Well Architected Framework, iOCO’s WTCF is designed to fast-track AWS migration testing using clear and structured guides and processes and customised options to suit the organisation’s budget and needs.

Author

Reinier Van Dommelen, Principal Technical Consultant – Software Applications and Systems at iOCO

As a seasoned Technical Consultant with a wealth of experience, Renier Schuld has a proven track record of delivering successful IT projects for a diverse range of clients. He excels at bridging the gap between business and technical requirements by identifying and implementing systems solutions, guiding cross-functional teams through the project life-cycle, and ensuring successful product launches.

Renier’s expertise in Testing is extensive and includes developing functional specification documents, designing test strategies, creating and executing test scripts to ensure accuracy and quality, developing project and organizational software test plans, providing user support, and building automated test frameworks. He has a passion for continuously improving processes and ensuring that quality is always top of mind throughout the project life-cycle.

iOCO is an EXPO Exhibitor at EuroSTAR 2023, join us in Antwerp

Filed Under: Quality Assurance Tagged With: 2023, EuroSTAR Conference

Moving Beyond Traditional Testing: The Need for Autonomous Testing in Software Development.

July 24, 2023 by Lauren Payne

Thanks to Hexaware for providing us with this blog post.

Software testing is struggling to keep up with the fast-paced and constantly accelerating rate of releases. According to a survey by Gitlab in 2022, seven out of ten developers reported that their teams release code at least every few days, with many doing so on a daily basis. In today’s world, customers and end-users expect new features and functionality at an increasingly rapid pace. Companies that lag behind on new software releases risk being displaced by competitors who can keep up with the latest updates.

When testing fails to keep up with the release pace, organizations face well-known risks associated with releasing software that hasn’t been adequately tested and may contain bugs. For instance, in July 2022, former Volkswagen CEO Herbert Diess was forced out of the company because the automaker’s software unit was unable to produce software of sufficient quality, delaying the launch of its new Porsche, Audi, and Bentley models. Even more recently, in October 2022, Nintendo had to take Nintendo Switch Sports’ servers offline for nearly a week due to a bug that caused the game to crash.

Development teams have attempted to resolve this dilemma – ship potentially buggy software faster or slow down to test sufficiently – with test automation. However, there are significant challenges in how test automation is traditionally implemented, and automation still requires highly skilled testers, who are always in high demand and therefore difficult to hire and retain.
Testing organizations face challenges beyond just automating the creation of tests. Maintaining tests is equally challenging, as automation scripts can become outdated and fail to test the required functions in the desired ways. Even with enough testers available, analyzing the impact of changes and configuring the test suite is too complicated to be performed manually. And the problem extends beyond maintaining automated tests, as human analysis cannot identify all areas that require testing.

To overcome these challenges, organizations need to move beyond automation and embrace autonomous testing.

AI-Powered Autonomous Testing

Autonomous testing is the solution to the challenges faced by testing organizations as it enables faster decision-making about which scenarios to test based on the impact of a change without relying too much on human involvement. This dramatically increases testing depth and scope while simultaneously speeding up the process.

In contrast, traditional test automation only addresses one stage of the testing process, which is the automated script execution in the DevOps pipeline, as illustrated in Figure 1.

Figure 1: The traditional testing process.

Automation Beyond the DevOps Pipeline

Autonomous testing has the potential to significantly reduce the need for human involvement throughout the testing process (as shown in Figure 2), unlike traditional test automation, which only impacts script execution in the DevOps pipeline (as shown in Figure 1). By utilizing natural language processing (NLP) and machine learning (ML) technologies, organizations can automate the generation of feature files and autonomous scripts. With the addition of deep learning through a support vector machine (SVM), tests can be auto-configured, and cases can be identified for execution when there are changes to code or requirements. Autonomous testing can also perform failure analysis and take corrective action.
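
The post describes this pipeline only at a high level. As a loose illustration of the test-selection idea, the Python sketch below trains a classical SVM classifier (using scikit-learn) on invented historical change-and-test data, then uses it to decide which candidate tests to run for a new change. The features, data and test names are hypothetical; a production system would learn from far richer signals.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical history: one row per (code change, test case) pair.
    # Features: [changed files touched by the test, lines changed, past failures of the test]
    X_history = np.array([
        [3, 120, 2],
        [0,  10, 0],
        [5, 300, 4],
        [1,  15, 0],
        [4, 250, 1],
        [0,   5, 0],
    ])
    # Label: 1 if running the test was worthwhile for that change (it failed), else 0.
    y_history = np.array([1, 0, 1, 0, 1, 0])

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X_history, y_history)

    # Candidate tests for the current change, described with the same features.
    candidates = {
        "test_checkout_flow": [4, 180, 1],
        "test_profile_page": [0, 12, 0],
    }
    for name, features in candidates.items():
        should_run = model.predict([features])[0] == 1
        print(f"{name}: {'RUN' if should_run else 'skip'}")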

As the AI continues to learn from development behavior, test results, and other data, it becomes smarter and more accurate. For example, post-production logs are rarely used, but AI can analyze them to identify previously unidentified “white spaces” that are likely to contain bugs in the future and therefore require testing.

It is crucial to understand that autonomous testing is not a one-time fix, but a continual process of improvement, one case at a time. Organizations can start by identifying a specific bottleneck in the testing process that autonomous testing can address, such as the generation of UI/API scripts or identifying sensitive columns that require masking or synthetic data replacement. Ideally, the case should involve different functions for a particular phase to have a more significant impact. Once the solution has been successfully implemented and shown results, organizations can leverage that success to expand to a new case and continue to improve their testing process over time.

Think of it in terms of autonomous driving. Automakers first rolled out discrete capabilities such as automatic braking to avoid hitting a stationary object, lane assist, and adaptive cruise control. Implementing autonomous testing requires a similar approach.

Organizations are under pressure to conduct extensive testing within a shorter time frame and with fewer resources, all while delivering high-quality software on schedule. Autonomous testing, powered by AI and ML, can help organizations achieve this goal, but it requires a strategic, long-term approach to implementation. The ultimate outcome is that development teams can release new features more frequently, leading to a better customer experience and a stronger bottom line for the organization.

Learn More

Listen to a Thoughtcast that answers key questions about autonomous software testing and explains how to move seamlessly from automation to autonomous testing.

Reach us at [email protected] for more information.

Nagendra BS, Vice President – Digital Assurance, Practice & Solutions at Hexaware

Nagendra has around 21 years of experience in the software industry, is passionate about quality and testing, and has helped a number of customers in their testing transformation journeys. He is currently responsible for the go-to-market function of the Digital Assurance (Testing) business, which includes creation of all service offerings, global presales support, alliances, and analyst and marketing functions for Digital Assurance services.

Hexaware is an EXPO exhibitor at EuroSTAR 2023

Filed Under: Development, Test Automation Tagged With: EuroSTAR Conference

We’ve got the Stage – You’ve got the Story

July 17, 2023 by Lauren Payne

The 2024 EuroSTAR Software Testing Conference is going to Stockholm, Sweden.

If you’ve ever wanted to speak at EuroSTAR and share your story on Europe’s largest stage, the Call for Speakers is open until 17th September.

Now is the time to start thinking about what you’d like to share. What experiences will help others in the room? Perhaps it’s something that didn’t work at first, but then you found a solution. It might be technical, or it might be core skills.

EuroSTAR 2024 Programme Chair, Michael Bolton, is inviting you to explore the theme, ‘What Are We Doing Here?’ – it’s a wide-open question, with lots of possible interpretations and related questions.

Talk Type

We’ll share more on these later but for now, there will be three main types of talks:

  • Keynote – 60mins (45mins talk + 15mins Q&A)
  • Tutorials/Workshops – Full-day 7 hours OR Half-day 3.5 hours incl breaks
  • Track Talks – 60mins (40mins talk + 20mins valuable discussion)

Who?

Calling all testing enthusiasts and software quality advocates – whether you’re a veteran, or new to testing – to share your expertise, successes (and failures) with your peers; and spark new learnings, lively discussions, and lots of inspiration.

Think about what engages you in your work and engrosses you in testing, the challenges you’ve faced, or the new ideas you’ve sparked. Get in front of a global audience, raise your profile, and get involved with a friendly community of testers.

Here’s everything you need to know about taking the first step on to the EuroSTAR stage.

We invite speakers of all levels to submit their talk proposals and take the biggest stage in testing!

What Do I Need To Submit?

A clear title, a compelling abstract and 3 possible learnings that attendees will take from your talk – this is the main part of your submission. We’ll ask you to add in your contact details and tick some category boxes but your title, talk outline & key learnings are the key focus.

Topics for EuroSTAR 2024

Michael is calling for stories about testers’ experiences in testing work. At EuroSTAR 2024, we embrace diversity and value a wide range of perspectives. We’re most eager to hear stories about how you…

  • learned about products
  • recognised, investigated, and reported bugs
  • analysed and investigated risk
  • invented, developed, or applied tools
  • developed and applied a new useful skill
  • communicated with and reported to your clients
  • established, explained, defended, or elevated the testing role
  • created or fostered testing or dev groups
  • recruited and trained people
  • made crucial mistakes and learned from them
START Your Submission

Mark Your Calendar

Here are some essential dates to keep in mind:

  • Call for Speakers Deadline: 17 September 2023
  • Speaker Selection Notification: Late November 2023
  • EuroSTAR Conference: 11-14 June 2024 in Sweden

If you’re feeling inspired, check out the full Call for Speakers details. EuroSTAR attracts speakers from all over the world, and we can get over 450 submissions. Each year, members of the EuroSTAR community give their time to assess each submission, and their ratings help our Programme Committee select the most engaging and relevant talks. If you would like help writing a proposal, see this handy submissions guide – and you can reach out to us at any time.

EuroSTAR 2024 promises to be an extraordinary experience for both speakers and attendees. So, submit your talk proposal before 17 September 2023 and let’s come together in the beautiful city of Stockholm next June. Together we’ll make EuroSTAR 2024 an unforgettable celebration of software testing!

Filed Under: EuroSTAR Conference, Software Testing, Uncategorized Tagged With: EuroSTAR Conference

How to calculate whether QA tests should be Automated or Manual

July 13, 2023 by Lauren Payne

Thanks to Global App Testing for providing us with this blog post.

In a recent webinar with the easy CI/CD tool Buddy Works, we looked at how businesses can calculate the true cost of testing and use it to determine whether tests should be automated or manual. You can check out our thinking on the subject below. 👇

Why do businesses believe they will automate so many tests?

In TestRail’s first annual survey in 2018, businesses set out their plans for test automation. The 6,000 respondents automated 42% of their tests and planned to automate a further 21% the following year.

But they didn’t. In the 2019 survey, the same 42% of tests were automated, and this time, businesses said they would automate 61% in 2020. By the most recent survey in 2021, just 38% of tests were automated. By now, the pattern is consistent. Businesses systematically overestimate the amount they will automate. 

But why?

Why businesses like test automation

Teams tend to like the idea of automating tests. That’s because:

  • You can run automated tests whenever you like
  • Automated tests return results instantly
  • Automation is perceived as a one-time investment, which would make it cheaper to automate over the long term. (In our experience, this is only sometimes true.) 

Together, these factors lead to even better second-order effects:

  • You can remove a bottleneck slowing down your releases if your tests are instant
  • You can improve your DORA metrics as you measure your progress. 

But the reality of testing difficulty belies this. We ran a survey during a separate webinar about the top reasons businesses felt they couldn’t automate more tests. And here’s the TLDR: 

  • The top answer, cited by 28% of respondents, was flaky tests due to a changing product. The second (26%) was not enough time to automate.
  • Both answers come down to time. “Flaky tests due to a changing product” really refers to the time investment of maintaining your tests; “not enough time to automate” refers to the time investment of setting them up.
  • Businesses are under-equipped to calculate the time cost of building and maintaining tests, or the other time demands that will be made of them in the cut and thrust of product development.

What’s the equation to calculate whether a manual or automated test is better?


ST + (ET x N) = the true time cost of testing.

You can check this for automated and manual tests to identify whether it’s cheaper for your business to execute a test manually or to automate it. 

ET is the execution time. We know that automation is much faster here, and it’s the main metric businesses focus on when they want to automate all their tests. At Global App Testing, we offer a 2-6 hour test turnaround with real-time results. Tests land in tester inboxes straight away, so in many cases the first results come through much faster.

ST is the setup time, including any maintenance time investment. It takes more time to automate a test script than it does to quickly test something or to send it to a crowdtester like Global App Testing. Setup time is also the second barrier to setting up tests, so it’s worth running this calculation twice – once to work out which option is more expensive, and once with adapted algebra to calculate the maximum time your business can invest in one go.

N is the number of times a test will be used before it flakes. Rapid execution of an automated test is great, and the saving is immense on a test used thousands of times. If the test will only be used twice before it flakes, the return is less impressive.

A final note: make sure you know what you’re optimizing for. Is time or money more important? The labour cost of the individuals setting up automated tests (developers) and the labour cost of the individuals executing tests (global QA professionals) could be different, so try running the calculation with both units plugged in.
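
Putting the formula into code makes the comparison concrete. The short Python sketch below plugs invented numbers into ST + (ET x N) for a manual and an automated version of the same test, and also derives the break-even number of runs at which automating starts to pay off.

    def true_time_cost(setup_hours: float, execution_hours: float, runs: int) -> float:
        """ST + (ET x N): total time invested in a test over its useful life."""
        return setup_hours + execution_hours * runs

    # Hypothetical numbers for one regression scenario (N = 40 runs before the test flakes).
    manual = true_time_cost(setup_hours=0.5, execution_hours=1.0, runs=40)       # 40.5 hours
    automated = true_time_cost(setup_hours=12.0, execution_hours=0.05, runs=40)  # 14.0 hours
    print(f"manual: {manual:.1f} h, automated: {automated:.1f} h")

    # Break-even: the N at which ST_auto + ET_auto * N drops below ST_manual + ET_manual * N.
    break_even = (12.0 - 0.5) / (1.0 - 0.05)
    print(f"automation pays off after roughly {break_even:.0f} runs")  # about 12 runs

Swap in your own hours – or convert them to labour cost, as discussed above – to see where your break-even point sits.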

Author

Adam Stead

Adam is the editor-at-large at Global App Testing. He has written extensively about technology business and strategy for a variety of businesses since 2015.

Global App Testing is an EXPO exhibitor at EuroSTAR 2023, join us in Antwerp.

Filed Under: Test Automation Tagged With: 2023, EuroSTAR Conference

5 Steps to help build your load testing strategy

July 10, 2023 by Lauren Payne

Thanks to Gatling for providing us with this blog post.

You might have already started load testing, which is awesome! But if you haven’t, and you’re wondering when, where and how to start the answers are all here for you. To help you get set up we’re going to give you a few tips and tricks to build your load testing strategy and make sure that you’re set for success. Ready to dive in? Read on!

Know Your User

The most important part of load testing is knowing your user. More specifically, what you need to know are the answers to a few key questions.

How are your users using your site/application? 

Most enterprises have an idea of how they’d like their users to use their site or products, but for many, how users actually use them and the journeys they take are a bit of a mystery. By using tracking software such as Mixpanel or Amplitude, you can get a very detailed picture of the journeys your users take on your site and craft simulations that match and replicate them.

Understanding Your Traffic

Crafting great user journeys is the first step in building a scenario. Understanding your traffic will help you decide what kind of tests you need to create. By using tools like Google Analytics, Google Search Console, SEMrush, or simply monitoring your server usage, you should be able to get an idea of what kind of traffic you’re receiving and how you’re receiving it. Are you getting sudden surges of traffic? Run a stress test. Are you getting long durations of constant traffic? Run a soak test. For every traffic scenario you can run a battery of different tests to ensure that your website is resilient enough to withstand the traffic it’s receiving. To learn more about the different kinds of load tests you can run and get an idea of what might work best for you, check out our post here.
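
To make the distinction between these traffic shapes concrete, here is a deliberately tool-agnostic Python sketch that expresses a steady soak profile and a sudden spike as arrivals per second against a placeholder URL. In practice you would express these shapes in your load-testing tool’s own simulation DSL rather than hand-rolling threads like this.

    import threading
    import time

    import requests  # third-party: pip install requests

    TARGET_URL = "https://shop.example.com/"  # hypothetical target

    def one_request(url: str) -> None:
        """A single user arrival: one GET; failures are ignored here for brevity."""
        try:
            requests.get(url, timeout=10)
        except requests.RequestException:
            pass

    def run_profile(name: str, arrivals_per_second: list) -> None:
        """Start arrivals_per_second[i] independent requests during second i."""
        print(f"running {name} profile")
        threads = []
        for arrivals in arrivals_per_second:
            for _ in range(arrivals):
                thread = threading.Thread(target=one_request, args=(TARGET_URL,))
                thread.start()
                threads.append(thread)
            time.sleep(1)  # move on to the next second of the schedule
        for thread in threads:
            thread.join()

    # A soak profile: steady, moderate traffic held for a long period (shortened here).
    soak = [5] * 60
    # A stress/spike profile: a sudden surge well above the normal level.
    spike = [5] * 10 + [100] * 5 + [5] * 10

    if __name__ == "__main__":
        run_profile("soak", soak)
        run_profile("spike", spike)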

Continuous Integration

You’ve built your tests and run them. You’re doing great! However, most websites and applications are constantly changing and upgrading. How can you be sure that the changes you’re making aren’t going to degrade the performance of your project? By introducing load testing into your CI/CD pipeline. We wrote a detailed post on the benefits of using Gatling Enterprise Cloud to integrate load testing into your CI/CD process. Gatling’s Enterprise version allows you to integrate with almost any CI/CD software, whether you’re using one of our dedicated integrations or using our CI Script to create your own.

Plan For The Unexpected

One of the great things about load testing is its ability to prepare you for any eventuality. You might not have thousands of users hitting your application today, but by creating tests and running them you can be sure that if it does happen, you’re prepared. So when creating your testing strategy and examining your traffic, it’s important to consider not just what is happening right now but also what could happen. What’s the worst/best case scenario? Are you prepared? Make sure by testing, and you’ll know that whatever happens, you’ll be ready.

Following these tips will help ensure that your websites and applications can handle the traffic and workloads they will encounter in the real world, and will help prevent performance issues that could impact the user experience.

LINKS

  • “monitoring your server”: https://hubs.ly/Q01DYDSv0
  • “sudden surges of traffic”: https://hubs.ly/Q01DYH_L0
  • “check out our post here”: https://hubs.ly/Q01DYK9B0
  • “Gatling Enterprise Cloud to integrate load testing into your CI/CD process”: https://hubs.ly/Q01DYL720

Author

Pete Dutka, Customer Success Manager, Gatling.

Gatling Enterprise provides advanced features to help you get ahead of downtime and technical issues related to your website traffic. Our advanced reports allow you to dive into the details and discover your application’s limits and performance bottlenecks. We offer both on-premise and SaaS solutions to meet your business needs, whatever they may be.

Gatling is an EXPO exhibitor at EuroSTAR 2023, join us in Antwerp

Filed Under: Software Testing Tagged With: 2023, EuroSTAR Conference

Testing in Agile: A Few Key Points to Consider

June 28, 2023 by Lauren Payne

Thanks to CTG for providing us with this blog post.

What is Agile Testing?

This may come as a surprise, but there really is no such thing as “Agile Testing.” Let’s break this down into two terms: Agile and Testing.

Agile is a development approach that has been widely adopted since the early 2000s, whereas Testing is a process that determines the quality of a software product. The basic principle in software testing is that testing is always context dependent. In other words, you must adapt your process, activities, and objectives in order to align them with your business context.

How Does Testing in Agile Differ From Traditional Approaches?

The main difference between an Agile approach and a more traditional approach with respect to testing lies in the ever-changing, fast-paced, and continuous character of testing.

With Agile, the objective is to deliver value as fast as possible to the stakeholders. Since an Agile approach embraces change, the concept of value itself can change between sprints or iterations.

When traditional approaches are applied, there is always a period between the analysis and test execution phases where developers are performing their magic. During this period, testers review, evaluate and analyze the documentation at hand, trying to prevent any defects from entering the code as well as preparing their test case design.

In an iterative or incremental approach, such as Agile, such a period does not exist. Every member of the Agile team is considered multi-disciplinary and must therefore be able to perform any task within the team – whether it’s analysis, development, or testing. Given the lack of time to prepare test cases upfront, testing becomes less scripted and more exploratory.

Finally, due to this iterative cycle, a lot of the testing work is repeated. In a traditional approach, the code is stable and frozen when testing starts; as a result, a test that passed 4 weeks ago should still pass.

In an Agile approach, requirements, user stories, product backlog items (PBI), may undergo significant changes in between iterations, based on customer feedback. To ensure that new functionalities do not break the existing solution, rigorous regression testing is required within every iteration, lowering the bandwidth for testing new functionalities.

What Skills do Testers Need & What Roles do they Play in Agile Projects?

Whether it’s an Agile approach or a traditional approach, the skills that testers need are largely identical. We can organize these skills into four categories:

  • Business or domain knowledge: Understanding the context of the work or project
  • IT knowledge: General understanding of all the other roles and activities
  • Testing knowledge: How to derive test cases and how to execute them.
  • Soft skills: Analytical skills, communication skills, empathy, and a critical mindset.

In fact, testers should feel more at home in an Agile team, as they are more in control. Testers can pull work from the backlog when they are ready as opposed to a traditional approach, where work is pushed to them whether they are ready or not.

What is the Best Way To Assess Quality Risks?

When it comes to Agile, collaboration and communication are key. Every requirement contains a risk to the product. By writing and reviewing the requirements together (i.e. collaborative user story writing) with developers, analysts and testers, all stakeholders are made aware of possible risks.

It is important to note that not all risks carry the same weight and mitigating them can occur through different means. Lower-level risks associated with a specific product backlog item (PBI) can be addressed in its acceptance criteria. Product risks on a higher level than a single user story can be mitigated in quality gates such as the definition of ready (DOR) and the definition of done (DOD).

The same principle applies to the estimation of time. However, Agile team members do not estimate the time required to perform a certain task. Due to the risk of anchoring, it is better to assess tasks at the PBI level using story points. These fictive, relative values express the total effort required by the entire team to complete the task. It’s not the sum of analysis, development, and testing under the most favorable circumstances, but rather the team’s evaluation of how much effort the task would require for any given team member to complete.

3 Ways to Enhance your Understanding in Agile Projects

Like anything in life, improving your understanding in Agile projects requires deliberate actions. Here are 3 ways you can enhance your knowledge:

  • Join an Agile team

Practice makes perfect. Joining an Agile team is a great way to gain valuable exposure to and experience with Agile principles, and to improve one’s understanding and proficiency.

  • Follow Agile trainings

Regardless of your field or profession, learning should never stop. Participating in Agile trainings can allow you to learn more about Agile, which you can then apply in the real world.

  • Read great Agile resources

Finally, it is never a bad idea to pick up any of the great literature about Agile. Perhaps less interactive than the first two suggestions, reading about Agile makes it possible to learn from some of the leading Agile specialists.

Interested in expanding your agile skills, experience, or know-how? CTG Academy offers both in-person and online training dedicated to helping those working in agile. Discover our agile trainings and take your projects to the next level.

Want to know more on agile? Discover our agile service or contact us!

Author


Michaël Pilaeten, Learning and Development Manager

Breaking the system, helping to rebuild it, and providing advice and guidance on how to avoid problems. That’s me in a nutshell. With 17 years of experience in test consultancy in a variety of environments, I have seen the best (and worst) in software development. In my current role as Learning & Development Manager, I’m responsible for guiding our consultants, partners, and customers on their personal and professional path towards excellence. I’m chair of the ISTQB Agile workgroup and international keynote speaker (United Kingdom, France, Spain, Peru, Russia, Latvia, Denmark, Armenia, Romania, Belgium, Holland, Luxembourg).

CTG is an EXPO exhibitor at EuroSTAR 2023, join us in Antwerp

Filed Under: Agile Tagged With: 2023, EuroSTAR Conference

Did We Do the Right Tests?

June 26, 2023 by Lauren Payne

Experiences with Test Gap Analysis in Practice

Thanks to CQSE for providing us with this blog post.

Most errors occur in code that has been changed lately (e.g., since the last release of a software system) [1,2]. This is of little surprise to practitioners, but how do we ensure that our tests cover all such changes, in order to catch as many of these defects as possible?

Do Tests Cover Code Changes in Practice?

In order to better understand to what degree tests actually cover changes made to a software system, we tracked development and testing activity on an enterprise information system, comprising about 340k lines of C# code, over a period of 14 months, corresponding to two consecutive releases [1].

Through static code analysis, we determined that for each of these releases, about 15% of the source code was either newly developed or changed. Using a profiler, we recorded the code coverage of all testing activities, including both automated and manual tests. This data showed that approximately half of the changes went into production untested – despite a systematically planned and executed testing process.

To quantify the consequences of untested changes for users of the software, we then reviewed all errors reported in the months following the releases and traced them back to their root causes in the code. We found that changed, untested code contains five times more errors than unchanged code (and also more errors than changed and tested code).

This illustrates that, in practice, untested changes very frequently reach production and that they cause the majority of field errors. We may thus systematically improve test quality if we manage to test changes more reliably.

Why Do Changes Escape Testing?

The amount of untested code that reached production surprised us when we originally conducted this study. Therefore, we wanted to understand why so many changes escape testing.

We found that the cause of these untested changes is – contrary to what you may assume – not a lack of discipline or commitment on the testers’ part, but rather the fact that it is extremely hard to reliably identify changed code manually when testing large systems.

Testers often rely on the description of individual issues from their issue tracker (e.g., from Jira or Azure DevOps Boards), in order to decide whether some change has been sufficiently tested. This works well for changes made for functional reasons, because the issues describe how the functionality is supposed to change and it is relatively easy to see which functionality a test covers.

However, there are two reasons why issue trackers are not suitable sources of information for consistently finding changes:

  • First, many changes are technically motivated, for example, clean-up operations or adaptations to new versions of libraries or interfaces to external systems. Respective issue descriptions do not clarify which functional test cases make use of the resulting changes.
  • Second, and more importantly, the issue tracker often simply does not document all important changes, be it because someone forgot or did not find the time to update the issue description or because someone made changes they were not supposed to make, e.g., due to a policy that is currently in place.

Thus, we need a more reliable source to determine what has been changed. Only then can we reason about whether these changes have been sufficiently tested.

Test Gap Analysis to the Rescue!

Test Gap Analysis is an approach that combines static analysis and dynamic analysis to identify changed-but-untested code.

First, static code analysis compares the current state of the source code of the System under Test to that of the previous release in order to determine new and changed code areas. In doing so, the analysis filters out refactorings, which do not modify the behavior of the source code (e.g., changes to documentation, renaming of methods or moving of code) and, thus, cannot cause new errors. The remaining code changes lead to a change in the behavior of the system.

For the enterprise information system from before, all changes for one of the releases we analyzed are depicted on the following tree map. Each rectangle represents a method in the source code and the size of the rectangle corresponds to the method’s length in lines of source code. We distinguish unchanged methods (gray), from new methods (red) and modified methods (orange).

Figure 1: A treemap showing all changes made to an enterprise information system for a single release. About 15% of the code was added (red) or modified (orange), while the rest remained unchanged (gray).

Second, dynamic analysis captures code coverage (usually through a coverage profiler). The crucial factor here is that all tests are recorded, across all test stages and regardless of whether they are automated or manually executed.

We use the same tree map as above, to visualize the aggregated code coverage at the end of the test phase. This time, we distinguish between methods that were executed by at least one test (green) and methods that were not (gray).

Figure 2: A treemap showing the aggregated code coverage of all tests (manual and automated) performed on an enterprise information system. Methods that were executed by at least one test (green) and methods that were not (gray).

Third, Test Gap Analysis detects untested changes by combining the results of the static and dynamic analyses. Again, we use our tree map to visualize the results, distinguishing methods that remain unchanged (gray) from changed-and-tested methods (green), untested new methods (red) and untested changed methods (orange).

Figure 3: A treemap showing the results of a Test Gap Analysis on an enterprise information system. A large part of the code remained unchanged (gray), while about 15% changed (in color). Of the changes, about half was tested (green), while the other half remained untested (red and orange) – despite a systematically planned and executed testing process.

It is plain to see that whole components containing new or changed code were not executed by even a single test in the testing process. No errors contained in this area can have been found in the tests!
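
Conceptually, this combination step boils down to set arithmetic over methods: anything that changed but was never executed by a test is a Test Gap. The following Python fragment illustrates the idea with invented method names; a real analysis of course works on parsed source code and profiler output rather than hand-written sets.

    # Hypothetical analysis results, keyed by fully qualified method name.
    changed_methods = {
        "Billing.calculate_vat",     # modified since the last release
        "Billing.apply_discount",    # new in this release
        "Reporting.render_invoice",  # modified since the last release
    }
    covered_methods = {
        "Billing.calculate_vat",     # executed by at least one manual or automated test
        "Checkout.submit_order",
    }

    test_gaps = changed_methods - covered_methods       # changed but never executed
    tested_changes = changed_methods & covered_methods  # changed and executed

    print("tested changes:", sorted(tested_changes))
    print("test gaps     :", sorted(test_gaps))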

Using Test Gap Analysis

Test Gap Analysis is useful when executed regularly, for example, every night, to gain insights each morning into the executed tests and changes made up until the previous evening. Each day, an updated Test Gap treemap, e.g., on a dashboard, then helps test managers decide whether further test cases are necessary to run through the remaining untested changes. This creates an ongoing feedback loop to steer the testing efforts and make informed decisions.

Figure 4: Test Gap Analysis gives continuous feedback to help test managers steer the testing efforts.

Which Projects Benefit from Test Gap Analysis?

We have used Test Gap Analysis on a wide range of different projects: from enterprise information systems to embedded software, from C/C++ to Java, C#, Python and even SAP ABAP. Factors that affect the complexity of the introduction are, among others:

  • Execution environment. Virtual machines (e.g., Java, C#, ABAP) simplify the collection of test coverage data.
  • Architecture. The test-coverage data for a server-based application has to be collected from fewer machines than that for a fat-client application, for example.
  • Testing process. Clearly defined test phases and environments facilitate planning and monitoring.

Using Test Gap Analysis During Hotfix Testing

The objectives of hotfix tests are to ensure that the fixed error does not re-occur and that no new errors have been introduced. To achieve the latter, we should at least ensure we tested all changes made in the course of the hotfix. Usually, there is very little time to achieve this.

With Test Gap Analysis, we may define the release state (before the hotfix) as the reference version and detect all changes made due to the hotfix (for example, on a dedicated branch). We then determine whether all changes were actually tested during confirmation testing. A Test Gap tree map immediately shows whether there are any untested changes left.

Figure 5a: Changes made during a hotfix (in color).
Figure 5b: Remaining untested changes (orange & red) and tested changes (green).

In our experience, Test Gap Analysis specifically helps avoid new errors that are introduced through hotfix changes.

Using Test Gap Analysis During a Release Test

For this scenario, we define a release test as the test phase prior to a major release, which usually involves both testing newly implemented functionality and executing regression tests. Often, this involves different kinds of tests on multiple test stages.

Figure 6: Split of development and release-test phases.

In the introduction to Test Gap Analysis above, we’ve looked at the results of running Test Gap Analysis at the end of a release test of an enterprise information system. These results revealed glaring gaps in the coverage of changes, after a test phase without using Test Gap Analysis to guide the testing efforts.

From that point onwards, Test Gap Analysis became an integral part of the testing process and was executed regularly during subsequent release tests. The following is a snapshot of the Test Gap Analysis during a later release test. It is plain to see that it contains far fewer Test Gaps.

Figure 7a: Test Gaps after a release test without Test Gap Analysis.
Figure 7b: Test Gaps after a release test guided by Test Gap Analysis.

If testing happens in multiple environments simultaneously, we may run Test Gap Analysis for each individual environment separately. And at the same time, we may run Test Gap Analysis globally, to assess our combined testing efforts. The following example illustrates this for a scenario with three test environments:

  • Test is the environment in which testers carry out manual test cases.
  • Dev is the environment where automated test cases are executed.
  • UAT is the User Acceptance Test environment, where end users carry out exploratory tests.
  • All combines the data of all three test environments.
Figure 8: Results of Test Gap Analysis by test environment and aggregated over all environments.

We observed that, in many cases, some Test Gaps are accepted, for example, when the corresponding source code is not yet reachable via the user interface. The goal of using Test Gap Analysis is not to test every single change at all cost. The key is that we can make conscious and well-founded decisions with predictable consequences about what to test.

In our experience, Test Gap Analysis significantly reduces the amount of untested changes that reach production. In a study with one of our customers, we found that this reduces the number of errors in the field by as much as 50%.

Using Test Gap Analysis Alongside Iterative Development

Today, fewer and fewer teams work with dedicated release tests, like in the previous scenario. Instead, issues from their issue trackers move into the focus of test planning, to steer testing efforts alongside iterative development.

In this scenario, testers are responsible for testing individual issues in a timely manner after development finishes. As a result, development and testing interleave, and dedicated test phases become obsolete or much shorter.

Figure 9: Development and test phases in iterative development processes.

At the same time, it becomes even harder to keep an eye on all changes, because much of the work typically happens in isolation, e.g., on dedicated feature branches, and gets integrated into the release branch only on very short notice. All the more reason, then, to have a systematic approach for keeping track of which changes have been tested and in which test environments.

Fortunately, we may also run Test Gap Analysis on the changes made in the context of individual issues. All we need to do is single out the changes that happened in the context of any particular issue, which is straightforward, e.g., if all changes happen on a dedicated feature branch or if developers annotate changes with the corresponding issue numbers when committing them to the version control system. Once we have grouped the changes by issue, we simply run Test Gap Analysis for each of them.

Figure 10: Overview of Issue Test Gaps for the issues in the current development iteration.

Limitations of Test Gap Analysis

Like any analysis method, Test Gap Analysis has its limitations and your knowledge of them is crucial for making the best use of the analysis.

One limitation of Test Gap Analysis concerns changes that are made at the configuration level without changing the code itself. These changes remain hidden from the analysis.

Another limitation concerns how much test execution alone can tell us. Test Gap Analysis evaluates which code was executed during testing; it cannot determine how thoroughly that code was tested. This can lead to undetected errors even though the analysis depicts the executed code as “green”, and the effect increases with the coarseness of the code-coverage measurement. However, the reverse is as simple as it is true: red and orange code was not executed by any test, so no errors it contains can have been found.

Our experience in practice shows that the gaps brought to light when using Test Gap Analysis are usually so large that we gain substantial insights into weaknesses in the testing process. With respect to these large gaps, the limitations mentioned above are insignificant.

Further Information

Test Gap Analysis may greatly enhance the effectiveness of testing processes. If you would like to learn more about how Test Gap Analysis works in our analysis platform Teamscale – the first tool that offered Test Gap Analysis and, to date, the only tool providing Test Gap treemaps as you have seen them above – check out our website on Test Gap Analysis or join our next workshop on the topic (online & free)!

References

[1] Sebastian Eder, Benedikt Hauptmann, Maximilian Junker, Elmar Juergens, Rudolf Vaas, and Karl-Heinz Prommer. Did we test our changes? Assessing alignment between tests and development in practice. In Proceedings of the Eighth International Workshop on Automation of Software Test (AST’13), 2013.

https://www.cqse.eu/publications/2013-did-we-test-our-changes-assessing-alignment-between-tests-and-development-in-practice.pdf

[2] N. Nagappan and T. Ball. Use of relative code churn measures to predict system defect density. In Proceedings of the 27th International Conference on Software Engineering (ICSE), 2005.

Authors


Dr. Elmar Jürgens

([email protected]) is founder of CQSE GmbH and consultant for software quality. He studied computer science at the Technische Universität München and Universidad Carlos III de Madrid and received a PhD in software engineering.

Dr. Dennis Pagano

([email protected]) is consultant for software and systems engineering at CQSE. He studied computer science at Technische Universität München and received a PhD in software engineering from Technische Universität München. He holds two patents.

Dr. Sven Amann

([email protected]) is a consultant of CQSE GmbH for software quality. He studied computer science at the Technische Universität Darmstadt (Germany) and the Pontifícia Universidade Católica do Rio de Janeiro (Brazil). He received his PhD in software technology from Technische Universität Darmstadt.

CQSE is an EXPO exhibitor at EuroSTAR 2023, join us in Antwerp.

Filed Under: Software Testing Tagged With: 2023, EuroSTAR Conference

Why Test Reporting Should be a Top Priority in Your Software Development Process

June 21, 2023 by Lauren Payne

Thanks to b.ignited for providing us with this blog post.

In the world of software development, testing is an essential part of the process. It is through testing that we can ensure that the software being developed is fit for use, meets requirements, and is ready for release. However, there are some situations where test reporting does not reach management, which is a problem. Do you wonder why this might happen? What are the consequences? Because believe me, there are consequences! And most importantly, what can be done to avoid all of this?
To understand the importance of test reporting, it is important to understand what a good test report consists of. The report summarizes the results and findings of a testing process. It provides a comprehensive view of the testing activities, including the objectives, scope, and methodology of the testing, as well as the test cases, test scripts, and test data used. It serves as a formal record of the testing activities and provides stakeholders with a clear understanding of the quality of the product or system being tested. It is an important tool for decision-making, as it can help stakeholders determine whether the product or system is ready for release or further testing is required.

Why does test reporting not reach management?

There are a few reasons why test reporting might not be available to management.
One reason is that the testing team does not have the necessary resources to produce reports. This could be due to a lack of personnel, time or funding. Another reason could be that they do not recognize the value of producing reports. They may believe that their work speaks for itself, and that there is no need to provide additional documentation.


Another reason could be that the development team is focused on meeting deadlines and releasing software quickly. In this case, the testing team may not have enough time to produce reports and meet their other responsibilities.
Or it could be that the team is not aware of the importance of test reporting to management. They may not realize that management needs this information to make informed decisions about the software development process.

What are the consequences of not reporting?

There are several consequences of not reporting test results to management. One of the most significant is that management will not have a clear view of the quality of the software being developed. Without this information, they may make decisions that are not in the best interest of the company. For example, releasing software that has not been adequately tested can lead to customer complaints, negative reviews, and even legal action.


Another consequence of not reporting is that the testing team may not receive the recognition they deserve for their hard work. When management is not aware of the effort of the testers, they may not appreciate the value of their work, leading to lower morale and decreased job satisfaction.

Not reporting test results can lead to a breakdown in communication between the testers and the other members of the development team. This can make it more difficult to identify and fix bugs, leading to longer development times and higher costs.

How can you avoid not reporting test results?

There are 4 simple steps:

  • The first step is to ensure that the testing team has the necessary resources to produce reports. This might involve hiring additional personnel, providing more time for reporting, or increasing funding for testing activities.
  • The second step is to educate the testers about the importance of test reporting to management. By explaining how this information is used, the testing team will be more motivated to produce reports.
  • The third step is to make sure that reporting is integrated into the software development process. This might involve using automated tools to generate reports or creating templates that make it easy for the testing team to produce reports quickly – see the sketch after this list for one minimal example.
  • And the fourth and final step is to ensure that there is open communication between the testers and the other members of the development team. By sharing test results and collaborating on solutions, the development process can be more efficient and effective.
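
To illustrate the third step, here is a minimal, hypothetical Python sketch that condenses a JUnit-style XML results file into a one-line summary that management can read at a glance. The file name is a placeholder, and a real reporting tool will of course go much further than pass/fail counts.

    import xml.etree.ElementTree as ET

    # Hypothetical path to a JUnit-style results file produced by your test runner.
    RESULTS_FILE = "test-results.xml"

    def summarise(path: str) -> str:
        """Condense a JUnit XML results file into a short, management-friendly summary."""
        root = ET.parse(path).getroot()
        suites = root.iter("testsuite") if root.tag == "testsuites" else [root]
        total = failures = errors = skipped = 0
        for suite in suites:
            total += int(suite.get("tests", 0))
            failures += int(suite.get("failures", 0))
            errors += int(suite.get("errors", 0))
            skipped += int(suite.get("skipped", 0))
        passed = total - failures - errors - skipped
        rate = (passed / total * 100) if total else 0.0
        return (f"Test run summary: {total} tests, {passed} passed ({rate:.1f}%), "
                f"{failures} failed, {errors} errors, {skipped} skipped.")

    if __name__ == "__main__":
        print(summarise(RESULTS_FILE))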

Test-Automation-as-a-Service: your test reporting solution

At b.ignited, we are convinced that there is yet another solution to ensure that test reporting is always up to date, namely using ‘b.ignition’. b.ignition is an in-house developed tool with an underlying cloud architecture that provides test reporting. Users can log in via a portal to view and compare all information on current and historical test results. An overview of test result status across projects is always available. If necessary, a new test run can be started from the same portal, and the results are immediately included in the overview. b.ignition is set up in such a way that the customer can choose between a private or a public cloud, depending on the desired data security.

Understanding the value of test reporting

In conclusion, not reporting test results to management can have significant consequences for the software development process. By understanding why this might happen and taking steps to avoid it, you can ensure that the software being developed is of the highest quality and meets the needs of the customer. It is essential to recognize the value of test reporting to management and to make it a priority in the software development process.

Author


Patrick Van Ingelgem, Managing Partner at b.ignited

Patrick founded the company in 2018 after several years of experience in test automation, coordination and management. He motivates his colleagues at b.ignited to always be on top of technology, and strongly believes in the power of knowledge and information. That’s why the topic of test reporting is so important to him.

 b.ignited is an EXPO Exhibitor at EuroSTAR 2023, join us in Antwerp.

Filed Under: Software Testing Tagged With: 2023, EuroSTAR Conference
