

The Silver Bullet for Testing at Scale

August 21, 2023 by Lauren Payne

Thanks to Testory for providing us with this blog post.

Testing has always been a bottleneck in the development process. Since product teams often sacrifice time spent testing, the workload testers face ebbs and flows.

Your company’s testers most likely know what it’s like to work weekends and evenings when there’s a release coming up. At points like those, they generally have to take on low-level work to make sure they check everything and deliver a high-quality product. But that overworks them and leads to burnout.

Product teams often think about the silver bullet: how do you scale testing (increase capacity) instantly without just throwing money at the problem?

Before we answer that question, however, we should take a step back and look at the big picture. What challenges are inherent to testing?

Testing requirements by role

Requirement                     | CTO | Product manager | Head of testing
Faster time to market           | Yes | Yes             | –
Budget optimization             | Yes | Yes             | –
Product quality for customers   | Yes | Yes             | Yes
Peak loads                      | –   | –               | Yes
Routine tasks                   | –   | –               | Yes
Variety of testing environments | –   | –               | Yes

Every role has its own problems. How do you solve them all at the same time?

A few years ago, we took a systematic approach to testing challenges, eventually coming up with a product for the largest IT company in our region. The solution married a variety of ML and other algorithms with traditional IT tools (Tracker, Wiki, TMS) and thousands of performers scattered across different time zones. That eliminated the bottleneck. With a dozen product teams online, they could scale testing or remove it altogether based on need.

On the one hand, we’re constantly improving our algorithms to give better feedback faster. On the other, our automated system selects professional testers who guarantee that same great result.

Another advantage our system offers is that it stands up well to load spikes around the clock rather than just during regular working hours.

Let's look at an example. In February 2023, a large customer handed Testory a process that included 2,240 hours of work, 1,321 of which were outside business hours.

As you can see on the graph, the load placed on testers was anything but even. There are a thousand reasons why that could be. Some peaks outpaced the capacity of a full-time team working regular hours, though expanding the team would have resulted in team members sitting around the rest of the time.

All that makes sense on the graph. The red line represents hours, with eight full-time employees sufficient to cover a total of 65. As you can see, the load was frequently heavier than that, meaning a team of eight wouldn't have been up to the task, though there were also times when they wouldn't have had enough work.

How does it work?

The customer embeds crowd testing in their development pipeline, calling the process from their TMS as needed and running regression testing in our product with external testers.

When they submit work for crowd testing, our algorithms scour our pool to select the best performers in terms of knowledge, speed, and availability, then distribute tasks so we can complete a thorough product test in the shortest possible time. We then double-check the result, compile a report, and send it to the customer. That's how we fit N hours of work into N/X hours.
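To illustrate the general idea, here is a minimal, hypothetical sketch in Java of scoring a performer pool and dealing tasks out round-robin. The names, scoring weights, and selection logic are illustrative assumptions, not Testory's actual algorithm:

```java
import java.util.*;

// Hypothetical sketch: score performers, pick the best X, and spread tasks across them.
record Performer(String name, double skillMatch, double speed, boolean available) {
    double score() {
        // Illustrative weighting; the real selection criteria are not public.
        return available ? 0.6 * skillMatch + 0.4 * speed : 0.0;
    }
}

public class CrowdDispatch {
    public static void main(String[] args) {
        List<Performer> pool = List.of(
            new Performer("A", 0.9, 0.8, true),
            new Performer("B", 0.7, 0.9, true),
            new Performer("C", 0.95, 0.6, false));
        List<String> tasks = List.of("checkout flow", "login", "search", "profile");

        // Pick the top X available performers by score.
        List<Performer> selected = pool.stream()
            .filter(Performer::available)
            .sorted(Comparator.comparingDouble(Performer::score).reversed())
            .limit(2)
            .toList();

        // Deal tasks out round-robin, so N hours of work run in roughly N/X hours.
        Map<String, List<String>> assignment = new LinkedHashMap<>();
        for (int i = 0; i < tasks.size(); i++) {
            Performer p = selected.get(i % selected.size());
            assignment.computeIfAbsent(p.name(), k -> new ArrayList<>()).add(tasks.get(i));
        }
        assignment.forEach((who, what) -> System.out.println(who + " -> " + what));
    }
}
```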

The customer can scale up testing whenever they want, then scale back and pay nothing when they don't have work to do. It's an on-demand service.

Performers enjoy an endless stream of work that’s perfect for their skill set in addition to some that pushes them to learn and grow. For our part, we offer testers special skill- and knowledge-based courses, stable payment that depends on how many tasks they complete, and the opportunity to work from anywhere in the world.

What’s the bottom line?

We free up resources our clients can rededicate toward interesting and higher-risk work, help out with peak loads, and streamline costs.

How can you get that for yourself?

Testory is a separate process and product born to help large companies. It’s for anyone trying to quickly deliver IT products that solve user problems. If you’re interested in leveraging our experience, get in touch, and we’ll build a roadmap for you.

Author

Mary Zakharova

Mary has been working with crowdtesting products for 6 years. She started her career as a community manager in a testers’ network.

In recent years, Mary has been in charge of the Testory product.

Testory is an EXPO Exhibitor partner at EuroSTAR 2023


How to Solve Your Recruitment Needs for Software Testers?

August 16, 2023 by Lauren Payne

Thanks to Talent2Test for providing us with this blog post.

The testing market is currently facing the same challenges as almost the entire IT industry. A lot of new projects are starting up, and the challenge lies in finding the right profiles, with the right education, for the right roles. Double match, win-win, call it what you want. Often supplier and customer fight to get the upper hand in "winning the deal": the best price, the best consultant, and so on. These competing agendas prevent a smooth collaboration, because each party wants to win the deal.

From our years of experience guiding and helping companies on their digital journeys, we developed a way of thinking that leads to a more profound collaboration. Noticing that a random match between request and offer is like flipping a coin, we decided to take matters into our own hands and create a platform where both parties can meet each other and create a real win-win.

Our Answer to This Scarcity?

15 years ago, we started organizing classes for our customers. Over the years we fine-tuned this setup and arrived at a format in which the customer, the supplier and the consultant find the right way forward together. Before the start, we define the needs of the customer: what kind of profile do they need? Which technologies do they prefer? What is the location of their offices? Based on this information, we start looking for possible candidates in collaboration with our brand "Cookie Crunchers". The Cookies talk to the junior consultants and make a careful selection. The consultant interviews with us as the supplier and with the customer to determine whether there is a match. We then start the training, which is also based on the needs of the customer, e.g. an emphasis on manual testing or test automation.

Meet the friendly Carole, 23 years old, who decided this year that she wanted to delve into software testing. Today she is making waves as a software tester, and she is happy to tell how she achieved it!

A Leap of Faith

"During my Media & Entertainment studies I came into contact with a website for programming and front-end development, and this is where my love for IT arose. But after three years of studying, I just wanted to start working and gain experience on the job. On the advice of my aunt, I took the plunge and started applying for jobs in IT without any degree or experience."

Mission Accomplished

“Within two months I was allowed to start at a company as a functional software tester. I didn’t know anything about it, but I learned a lot in the process. Soon I felt like I could handle even more of a challenge. Quite coincidentally, I received a LinkedIn message from Merijn from Talent2Test: was I interested in a Software Testing Class? That’s how the ball started rolling.”

Software Tester of the Day

"After the class, I was able to start very quickly with a Talent2Test customer. I'm the only tester on the team, so that brings a lot of responsibility with it. It is a large, international company and there are many career opportunities. There is a lot of variation within IT and I speak English every day. I also have a very nice team that I can always turn to. When I started the Testing Class, I really wanted to move into automation, because I thought I had already seen the functional side, but with my employer I've really noticed that there are a lot of new functionalities and more involved."

Career Boost

“I experienced the process at Cookie Crunchers very positively. The regular contact, follow-up and support; I had a really good feeling about it. There are also plenty of opportunities to follow training courses and take your knowledge to the next level. By starting at Cookie Crunchers, I now have so many more options and I’m glad I committed. The world of testing has opened up completely.”

Talent2Test – When Quality Matters

Want to know more about our junior classes? Are you looking for a Software Tester? Or maybe you are a Software Tester looking for a new challenge or ways to improve your knowledge? Get in touch with Stijn, our account manager.

a: Houtdok-Noordkaai 12, 2030 Antwerpen
m: +32 0497 64 10 25
e: [email protected]
w: www.talent2test.be

Author

Talent2Test

When quality matters, Talent2Test is your partner for software testing.

Talent2Test trains, supports and offers driven test engineers to help companies achieve the quality they need.

We have the flexibility of a local player, based in Antwerp, but also the ability to execute in an international environment, as we are part of the Nash Squared group.

Talent2Test is an EXPO Exhibitor at EuroSTAR 2023


We’ve got the Stage – You’ve got the Story

July 17, 2023 by Lauren Payne

The 2024 EuroSTAR Software Testing Conference is going to Stockholm, Sweden.

If you’ve ever wanted to speak at EuroSTAR and share your story on Europe’s largest stage, the Call for Speakers is open until 17th September.

Now is the time to start thinking about what you'd like to share. What experiences will help others in the room? Perhaps it's something that didn't work at first but then you found a solution. It might be technical, or it might be core skills.

EuroSTAR 2024 Programme Chair, Michael Bolton, is inviting you to explore the theme, ‘What Are We Doing Here?’ – it’s a wide-open question, with lots of possible interpretations and related questions.

Talk Type

We'll share more on these later, but for now, there will be three main types of talks:

  • Keynote – 60mins (45mins talk + 15mins Q&A)
  • Tutorials/Workshops – Full-day (7 hours) or half-day (3.5 hours), including breaks
  • Track Talks – 60mins (40mins talk + 20mins valuable discussion)

Who?

Calling all testing enthusiasts and software quality advocates – whether you’re a veteran, or new to testing – to share your expertise, successes (and failures) with your peers; and spark new learnings, lively discussions, and lots of inspiration.

Think about what engages you in your work and engrosses you in testing, the challenges you've faced, or the new ideas you've sparked. Get in front of a global audience, raise your profile, and get involved with a friendly community of testers.

Here’s everything you need to know about taking the first step on to the EuroSTAR stage.

We invite speakers of all levels to submit their talk proposals and take the biggest stage in testing!

What Do I Need To Submit?

A clear title, a compelling abstract and 3 possible learnings that attendees will take from your talk – this is the main part of your submission. We’ll ask you to add in your contact details and tick some category boxes but your title, talk outline & key learnings are the key focus.

Topics for EuroSTAR 2024

Michael is calling for stories about testers’ experiences in testing work. At EuroSTAR 2024, we embrace diversity and value a wide range of perspectives. We’re most eager to hear stories about how you…

  • learned about products
  • recognised, investigated, and reported bugs
  • analysed and investigated risk
  • invented, developed, or applied tools
  • developed and applied a new useful skill
  • communicated with and reported to your clients
  • established, explained, defended, or elevated the testing role
  • created or fostered testing or dev groups
  • recruited and trained people
  • made crucial mistakes and learned from them

Mark Your Calendar

Here are some essential dates to keep in mind:

  • Call for Speakers Deadline: 17 September 2023
  • Speaker Selection Notification: Late November 2023
  • EuroSTAR Conference: 11-14 June 2024 in Sweden

If you're feeling inspired, check out the full Call for Speakers details. EuroSTAR attracts speakers from all over the world, and we can get over 450 submissions. Each year, members of the EuroSTAR community give their time to assess each submission, and their ratings help our Programme Committee select the most engaging and relevant talks. If you would like help writing a proposal, see this handy submissions guide, and you can reach out to us at any time.

EuroSTAR 2024 promises to be an extraordinary experience for both speakers and attendees. So, submit your talk proposal before 17 September 2023 and let’s come together in the beautiful city of Stockholm next June. Together we’ll make EuroSTAR 2024 an unforgettable celebration of software testing!

Filed Under: EuroSTAR Conference, Software Testing, Uncategorized Tagged With: EuroSTAR Conference

5 Steps to help build your load testing strategy

July 10, 2023 by Lauren Payne

Thanks to Gatling for providing us with this blog post.

You might have already started load testing, which is awesome! But if you haven't, and you're wondering when, where and how to start, the answers are all here for you. To help you get set up, we're going to give you a few tips and tricks to build your load testing strategy and make sure that you're set for success. Ready to dive in? Read on!

Know Your User

The most important part of load testing is knowing your user; more specifically, you need the answers to a few key questions.

How are your users using your site/application? 

Most enterprises have an idea of how they'd like their users to use their site or products, but for many, how users actually use it and the journeys they take remain a bit of a mystery. By using tracking software such as Mixpanel or Amplitude, you can get a very detailed idea of the journeys your users take on your site and craft simulations to match and replicate them.

Understanding Your Traffic

Crafting great user journeys is the first step in building a scenario. Understanding your traffic, though, will help you decide what kind of tests you need to create. By using tools like Google Analytics, Google Search Console, SEMrush, or just monitoring your server usage, you should be able to get an idea of what kind of traffic you're receiving and how you're receiving it. Are you getting sudden surges of traffic? Run a stress test! Are you getting long durations of constant traffic? Run a soak test. For every traffic scenario you can run a battery of different tests to ensure that your website is resilient enough to withstand the traffic it's receiving. To learn more about the different kinds of load tests you can run and get an idea about what might work best for you, check out our post here.
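To make this concrete, here is a minimal sketch of a Gatling simulation using the Java DSL. The base URL, journey, and injection numbers are placeholder assumptions; a real test would model your own user journeys and traffic shape:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class StorefrontSimulation extends Simulation {

    // Placeholder target; point this at your own test environment.
    HttpProtocolBuilder httpProtocol = http.baseUrl("https://test.example.com");

    // A simple user journey of the kind you might reconstruct from analytics data.
    ScenarioBuilder browse = scenario("Browse and search")
        .exec(http("Home").get("/"))
        .pause(1)
        .exec(http("Search").get("/search?q=shoes"));

    {
        setUp(
            browse.injectOpen(
                rampUsers(100).during(60),          // sudden surge: stress-style profile
                constantUsersPerSec(5).during(600)  // steady background load: soak-style profile
            )
        ).protocols(httpProtocol);
    }
}
```

The two injection steps mirror the two traffic patterns above: a short ramp approximates a surge, while a long constant rate approximates sustained load.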

Continuous Integration

You’ve built your tests and run them, you’re doing great! However, most websites and applications are constantly changing and upgrading. How can you be sure that the changes you’re making aren’t going to change the performance of your project? By introducing load testing into your CI/CD project. We wrote a detailed post on the benefit of using Gatling Enterprise Cloud to integrate load testing into your CI/CD process. Gatling’s Enterprise version allows you to integrate with almost any CI/CD software, whether you’re using one of our dedicated integrations or using our CI Script to create your own.

Plan For The Unexpected

One of the great things about load testing is its ability to prepare you for any eventuality. You might not have thousands of users hitting your application today, but by creating tests and running them, you can be sure that if it does happen, you're prepared. So when creating your testing strategy and examining your traffic, it's important to consider not just what is happening right now but also what could happen. What's the worst/best case scenario? Are you prepared? Make sure by testing, and you'll know that whatever happens you'll be ready.

Following these tips will help ensure that your websites and applications are able to handle the traffic and workloads they will encounter in the real world, and it will help prevent performance issues that could impact the user experience.

Links referenced above:

  • Monitoring your server: https://hubs.ly/Q01DYDSv0
  • Sudden surges of traffic: https://hubs.ly/Q01DYH_L0
  • Types of load tests: https://hubs.ly/Q01DYK9B0
  • Integrating load testing into your CI/CD process with Gatling Enterprise Cloud: https://hubs.ly/Q01DYL720

Author

Pete Dutka, Customer Success Manager, Gatling.

Gatling Enterprise provides advanced features to help you get ahead of downtime and technical issues related to your website traffic. Our advanced reports allow you to dive into the details and discover your application’s limits and performance bottlenecks. We offer both on-premise and SaaS solutions to meet your business needs, whatever they may be.

Gatling is an EXPO exhibitor at EuroSTAR 2023, join us in Antwerp


Did We Do the Right Tests?

June 26, 2023 by Lauren Payne

Experiences with Test Gap Analysis in Practice

Thanks to CQSE for providing us with this blog post.

Most errors occur in code that has been changed recently (e.g., since the last release of a software system) [1,2]. This is of little surprise to practitioners, but how do we ensure that our tests cover all such changes, in order to catch as many of these defects as possible?

Do Tests Cover Code Changes in Practice?

In order to better understand to which degree tests actually cover changes made to a software system, we tracked development and testing activity on an enterprise information system, comprising about 340k lines of C# code, over a period of 14 months, corresponding to two consecutive releases [1].

Through static code analysis, we determined that for each of these releases, about 15% of the source code was either newly developed or changed. Using a profiler, we recorded the code coverage of all testing activities, including both automated and manual tests. This data showed that approximately half of the changes went into production untested – despite a systematically planned and executed testing process.

To quantify the consequences of untested changes for users of the software, we then reviewed all errors reported in the months following the releases and traced them back to their root causes in the code. We found that changed, untested code contains five times more errors than unchanged code (and also more errors than changed and tested code).

This illustrates that, in practice, untested changes very frequently reach production and that they cause the majority of field errors. We can thus systematically improve test quality if we manage to test changes more reliably.

Why Do Changes Escape Testing?

The amount of untested production code actually surprised us when we originally conducted this study. Therefore, we wanted to understand why so many changes escape testing.

We found that the cause of these untested changes is – contrary to what you might assume – not a lack of discipline or commitment on the testers' part, but rather the fact that it is extremely hard to reliably identify changed code manually when testing large systems.

Testers often rely on the description of individual issues from their issue tracker (e.g., from Jira or Azure DevOps Boards), in order to decide whether some change has been sufficiently tested. This works well for changes made for functional reasons, because the issues describe how the functionality is supposed to change and it is relatively easy to see which functionality a test covers.

However, there are two reasons why issue trackers are not suitable sources of information for consistently finding changes:

  • First, many changes are technically motivated, for example, clean-up operations or adaptations to new versions of libraries or interfaces to external systems. Respective issue descriptions do not clarify which functional test cases make use of the resulting changes.
  • Second, and more importantly, the issue tracker often simply does not document all important changes, be it because someone forgot or did not find the time to update the issue description, or because someone made changes they were not supposed to make, e.g., due to a policy currently in place.

Thus, we need a more reliable source to determine what has been changed. Only then can we reason about whether these changes have been sufficiently tested.

Test Gap Analysis to the Rescue!

Test Gap Analysis is an approach that combines static analysis and dynamic analysis to identify changed-but-untested code.

First, static code analysis compares the current state of the source code of the System under Test to that of the previous release in order to determine new and changed code areas. In doing so, the analysis filters out refactorings, which do not modify the behavior of the source code (e.g., changes to documentation, renaming of methods or moving of code) and, thus, cannot cause new errors. The remaining code changes lead to a change in the behavior of the system.

For the enterprise information system from before, all changes for one of the releases we analyzed are depicted on the following tree map. Each rectangle represents a method in the source code and the size of the rectangle corresponds to the method’s length in lines of source code. We distinguish unchanged methods (gray), from new methods (red) and modified methods (orange).

Figure 1: A treemap showing all changes made to an enterprise information system for a single release. About 15% of the code was added (red) or modified (orange), while the rest remained unchanged (gray).

Second, dynamic analysis captures code coverage (usually through a coverage profiler). The crucial factor here is that all tests are recorded, across all test stages and regardless of whether they are automated or manually executed.

We use the same tree map as above, to visualize the aggregated code coverage at the end of the test phase. This time, we distinguish between methods that were executed by at least one test (green) and methods that were not (gray).

Figure 2: A treemap showing the aggregated code coverage of all tests (manual and automated) performed on an enterprise information system. Methods that were executed by at least one test (green) and methods that were not (gray).

Third, Test Gap Analysis detects untested changes by combining the results of the static and dynamic analyses. Again, we use our tree map to visualize the results, distinguishing methods that remain unchanged (gray) from changed-and-tested methods (green), untested new methods (red) and untested changed methods (orange).

Figure 3: A treemap showing the results of a Test Gap Analysis on an enterprise information system. A large part of the code remained unchanged (gray), while about 15% changed (in color). Of the changes, about half was tested (green), while the other half remained untested (red and orange) – despite a systematically planned and executed testing process.

It is plain to see that whole components containing new or changed code were not executed by even a single test in the testing process. No errors contained in this area can have been found in the tests!
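Conceptually, the combination step is a set difference: changed methods minus executed methods. Here is a minimal sketch in Java, with hypothetical method names standing in for the output of the static diff and the coverage profiler:

```java
import java.util.*;

public class TestGapSketch {
    public static void main(String[] args) {
        // Output of the static analysis: methods added or modified since the reference release.
        Set<String> changed = Set.of(
            "Billing.computeInvoice", "Billing.applyDiscount", "Export.writeCsv");

        // Output of the dynamic analysis: methods executed by at least one test (any stage).
        Set<String> covered = Set.of("Billing.computeInvoice", "Report.render");

        // Test gaps: changed but never executed by a test.
        Set<String> gaps = new TreeSet<>(changed);
        gaps.removeAll(covered);

        System.out.println("Test gaps: " + gaps);
        // -> Test gaps: [Billing.applyDiscount, Export.writeCsv]
    }
}
```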

Using Test Gap Analysis

Test Gap Analysis is useful when executed regularly, for example, every night, to gain insights each morning into the executed tests and changes made up until the previous evening. Each day, an updated Test Gap treemap, e.g., on a dashboard, then helps test managers decide whether further test cases are necessary to run through the remaining untested changes. This creates an ongoing feedback loop to steer the testing efforts and make informed decisions.

Figure 4: Test Gap Analysis gives continuous feedback to help test managers steer the testing efforts.

Which Projects Benefit from Test Gap Analysis?

We have used Test Gap Analysis on a wide range of different projects: from enterprise information systems to embedded software, from C/C++ to Java, C#, Python and even SAP ABAP. Factors that affect the complexity of the introduction are, among others:

  • Execution environment. Virtual machines (e.g., Java, C#, ABAP) simplify the collection of test coverage data.
  • Architecture. The test-coverage data for a server-based application has to be collected from fewer machines than that for a fat-client application, for example.
  • Testing process. Clearly defined test phases and environments facilitate planning and monitoring.

Using Test Gap Analysis During Hotfix Testing

The objectives of hotfix tests are to ensure that the fixed error does not re-occur and that no new errors have been introduced. To achieve the latter, we should at least ensure we tested all changes made in the course of the hotfix. Usually, there is very little time to achieve this.

With Test Gap Analysis, we may define the release state (before the hotfix) as the reference version and detect all changes made due to the hotfix (for example, on a dedicated branch). We then determine whether all changes were actually tested during confirmation testing. A Test Gap tree map immediately shows whether there are any untested changes left.

Figure 5a: Changes made during a hotfix (in color).
Figure 5b: Remaining untested changes (orange & red) and tested changes (green).

In our experience, Test Gap Analysis specifically helps avoid new errors that are introduced through hotfix changes.

Using Test Gap Analysis During a Release Test

For this scenario, we define a release test as the test phase prior to a major release, which usually involves both testing newly implemented functionality and executing regression tests. Often, this involves different kinds of tests on multiple test stages.

Figure 6: Split of development and release-test phases.

In the introduction to Test Gap Analysis above, we looked at the results of running Test Gap Analysis at the end of a release test of an enterprise information system. These results revealed glaring gaps in the coverage of changes, after a test phase without using Test Gap Analysis to guide the testing efforts.

From that point onwards, Test Gap Analysis became an integral part of the testing process and was executed regularly during subsequent release tests. The following is a snapshot of the Test Gap Analysis during a later release test. It is plain to see that it contains far fewer Test Gaps.

Figure 7a: Test Gaps after a release test without Test Gap Analysis.
Figure 7b: Test Gaps after a release test guided by Test Gap Analysis.

If testing happens in multiple environments simultaneously, we may run Test Gap Analysis for each individual environment separately. And at the same time, we may run Test Gap Analysis globally, to assess our combined testing efforts. The following example illustrates this for a scenario with three test environments:

  • Test is the environment in which testers carry out manual test cases.
  • Dev is the environment where automated test cases are executed.
  • UAT is the User Acceptance Test environment, where end users carry out exploratory tests.
  • All combines the data of all three test environments.
Figure 8: Results of Test Gap Analysis by test environment and aggregated over all environments.

We observed that, in many cases, some Test Gaps are accepted, for example, when the corresponding source code is not yet reachable via the user interface. The goal of using Test Gap Analysis is not to test every single change at all cost. The key is that we can make conscious and well-founded decisions with predictable consequences about what to test.

In our experience, Test Gap Analysis significantly reduces the number of untested changes that reach production. In a study with one of our customers, we found that this reduces the number of errors in the field by as much as 50%.

Using Test Gap Analysis Alongside Iterative Development

Today, fewer and fewer teams work with dedicated release tests, like in the previous scenario. Instead, issues from their issue trackers move into the focus of test planning, to steer testing efforts alongside iterative development.

In this scenario, testers are responsible for testing individual issues in a timely manner after development finishes. As a result, development and testing interleave, and dedicated test phases become obsolete or much shorter.

Figure 9: Development and test phases in iterative development processes.

At the same time, it becomes even harder to keep an eye on all changes, because much of the work typically happens in isolation, e.g., on dedicated feature branches, and gets integrated into the release branch only on very short notice. This makes a systematic approach to tracking which changes have been tested, and in which test environments, all the more necessary.

Fortunately, we can also run Test Gap Analysis on the changes made in the context of individual issues. All we need to do is single out the changes that happened in the context of a particular issue, which is straightforward if, for example, all changes happen on a dedicated feature branch or developers annotate changes with the corresponding issue numbers when committing them to the version control system. Once we have grouped the changes by issue, we simply run Test Gap Analysis for each of them; a small sketch of the grouping step follows below.

Figure 10: Overview of Issue Test Gaps for the issues in the current development iteration.
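Grouping commits by issue can be little more than scanning commit messages for an issue key. A minimal sketch, assuming hypothetical Jira-style keys such as PROJ-123 appear in the messages:

```java
import java.util.*;
import java.util.regex.*;

public class GroupByIssue {
    private static final Pattern ISSUE_KEY = Pattern.compile("\\b[A-Z][A-Z0-9]+-\\d+\\b");

    public static void main(String[] args) {
        // Hypothetical commit messages, as one might read them from the version control log.
        List<String> commits = List.of(
            "PROJ-123 add discount calculation",
            "PROJ-124 fix csv export encoding",
            "PROJ-123 handle negative totals");

        // Bucket each commit under the first issue key found in its message.
        Map<String, List<String>> byIssue = new LinkedHashMap<>();
        for (String msg : commits) {
            Matcher m = ISSUE_KEY.matcher(msg);
            String key = m.find() ? m.group() : "UNASSIGNED";
            byIssue.computeIfAbsent(key, k -> new ArrayList<>()).add(msg);
        }

        // One Test Gap Analysis run per issue's group of changes.
        byIssue.forEach((issue, msgs) -> System.out.println(issue + ": " + msgs));
    }
}
```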

Limitations of Test Gap Analysis

Like any analysis method, Test Gap Analysis has its limitations, and knowing them is crucial for making the best use of the analysis.

One limitation of Test Gap Analysis is changes that are made on the configuration level without changing the code itself. These changes remain hidden from the analysis.

Another limitation concerns the significance of executed code. Test Gap Analysis evaluates which code was executed during the tests; it cannot determine how thoroughly that code was tested. This can lead to undetected errors even though the analysis depicts the executed code as "green", and the effect increases with the coarseness of the code-coverage measurement. However, the reverse is as simple as it is true: red and orange code was not executed in any test, so no errors it contains can have been found.

Our experience in practice shows that the gaps brought to light when using Test Gap Analysis are usually so large that we gain substantial insights into weaknesses in the testing process. With respect to these large gaps, the limitations mentioned above are insignificant.

Further Information

Test Gap Analysis may greatly enhance the effectiveness of testing processes. If you would like to learn more about how Test Gap Analysis works in our analysis platform Teamscale, the first tool that offered Test Gap Analysis and, to date, the only tool providing Test Gap tree maps as you have seen them above, check out our website on Test Gap Analysis or join our next workshop on the topic (online & free)!

References

[1] Sebastian Eder, Benedikt Hauptmann, Maximilian Junker, Elmar Juergens, Rudolf Vaas, and Karl-Heinz Prommer. "Did We Test Our Changes? Assessing Alignment Between Tests and Development in Practice." In Proceedings of the Eighth International Workshop on Automation of Software Test (AST '13), 2013. https://www.cqse.eu/publications/2013-did-we-test-our-changes-assessing-alignment-between-tests-and-development-in-practice.pdf

[2] Nachiappan Nagappan and Thomas Ball. "Use of Relative Code Churn Measures to Predict System Defect Density." In Proceedings of the 27th International Conference on Software Engineering (ICSE), 2005.

Authors


Dr. Elmar Jürgens

([email protected]) is a founder of CQSE GmbH and a consultant for software quality. He studied computer science at the Technische Universität München and the Universidad Carlos III de Madrid and received a PhD in software engineering.

Dr. Dennis Pagano

([email protected]) is a consultant for software and systems engineering at CQSE. He studied computer science at the Technische Universität München, where he also received a PhD in software engineering. He holds two patents.

Dr. Sven Amann

([email protected]) is a consultant for software quality at CQSE GmbH. He studied computer science at the Technische Universität Darmstadt (Germany) and the Pontifícia Universidade Católica do Rio de Janeiro (Brazil). He received his PhD in software technology from the Technische Universität Darmstadt.

CQSE is an EXPO exhibitor at EuroSTAR 2023, join us in Antwerp.


Why Test Reporting Should be a Top Priority in Your Software Development Process

June 21, 2023 by Lauren Payne

Thanks to b.ignited for providing us with this blog post.

In the world of software development, testing is an essential part of the process. It is through testing that we can ensure that the software being developed is fit for use, meets requirements, and is ready for release. However, there are situations where test reporting does not reach management, and that is a problem. Why might this happen? What are the consequences? Because believe me, there are consequences! And most importantly, what can be done to avoid all of this?

To understand the importance of test reporting, it helps to know what a good test report consists of. The report summarizes the results and findings of a testing process. It provides a comprehensive view of the testing activities, including the objectives, scope, and methodology of the testing, as well as the test cases, test scripts, and test data used. It serves as a formal record of the testing activities and provides stakeholders with a clear understanding of the quality of the product or system being tested. It is an important tool for decision-making, as it can help stakeholders determine whether the product or system is ready for release or whether further testing is required.

Why does test reporting not reach management?

There are a few reasons why test reporting might not be available to management.
One reason is that the testing team does not have the necessary resources to produce reports; this could be due to a lack of personnel, time, or funding. Another reason could be that they do not recognize the value of producing reports. They may believe that their work speaks for itself and that there is no need to provide additional documentation.


Another reason could be that the development team is focused on meeting deadlines and releasing software quickly. In this case, the testing team may not have enough time to produce reports and meet their other responsibilities.
Or it could be that the team is not aware of how important test reporting is to management; they may not realize that management needs this information to make informed decisions about the software development process.

What are the consequences of not reporting?

There are several consequences of not reporting test results to management. One of the most significant is that management will not have a clear view of the quality of the software being developed. Without this information, they may make decisions that are not in the best interest of the company. For example, releasing software that has not been adequately tested can lead to customer complaints, negative reviews, and even legal action.


Another consequence of not reporting is that the testing team may not receive the recognition they deserve for their hard work. When management is not aware of the testers' effort, they may not appreciate the value of their work, leading to lower morale and decreased job satisfaction.

Not reporting test results can lead to a breakdown in communication between the testers and the other members of the development team. This can make it more difficult to identify and fix bugs, leading to longer development times and higher costs.

How can you avoid not reporting test results?

There are 4 simple steps:

  • The first step is to ensure that the testing team has the necessary resources to produce reports. This might involve hiring additional personnel, providing more time for reporting, or increasing funding for testing activities.
  • The second step is to educate the testers about the importance of test reporting to management. By explaining how this information is used, the testing team will be more motivated to produce reports.
  • The third step is to make sure that reporting is integrated into the software development process. This might involve using automated tools to generate reports or creating templates that make it easy for the testing team to produce reports quickly.
  • And the fourth and final step is to ensure that there is open communication between the testers and the other members of the development team. By sharing test results and collaborating on solutions, the development process can be more efficient and effective.

Test-Automation-as-a-Service: your test reporting solution

At b.ignited, we are convinced that there is yet another solution to ensure that test reporting is always up to date, namely using 'b.ignition'. b.ignition is an in-house developed tool with an underlying cloud architecture that provides test reporting. Users can log in via a portal to view and compare all information on current and historical test results. An overview of the test results status across projects is always available. If necessary, a new test run can be started from the same portal, and the results are immediately included in the overview. b.ignition is set up in such a way that the customer can choose between a private or a public cloud, depending on the desired data security.

Understanding the value of test reporting

In conclusion, not reporting test results to management can have significant consequences for the software development process. By understanding why this might happen and taking steps to avoid it, you can ensure that the software being developed is of the highest quality and meets the needs of the customer. It is essential to recognize the value of test reporting to management and to make it a priority in the software development process.

Author


Patrick Van Ingelgem, Managing Partner at b.ignited

He founded the company in 2018 after several years of experience in test automation, coordination, and management. He motivates his colleagues at b.ignited to always be on top of technology, and strongly believes in the power of knowledge and information. That's why the topic of test reporting is so important to him.

b.ignited is an EXPO Exhibitor at EuroSTAR 2023, join us in Antwerp.


Efficient Software Testing in 2023: Trends, AI Collaboration and Tools

May 31, 2023 by Lauren Payne

Thanks to JetBrains Aqua for providing us with this blog post.

In the rapidly evolving field of software development, efficient software testing has emerged as a critical component in the quality assurance process. As we navigate through 2023, several prominent trends are shaping the landscape of software testing, with artificial intelligence (AI) taking center stage. We’ll delve into the current state of software testing, focusing on the latest trends, the increasing collaboration with AI, and the most innovative tools.

Test Automation Trends

Being aware of QA trends is critical. By staying up to date on the latest developments and practices in quality assurance, professionals can adapt their approaches to meet evolving industry standards. Based on the World Quality Report by Capgemini & Sogeti, and The State of Testing by PractiTest, popular QA trends currently include:

  • Test Automation: Increasing adoption for efficient and comprehensive testing.
  • Shift-Left and Shift-Right Testing: Early testing and testing in production environments for improved quality.
  • Agile and DevOps Practices: Integrating testing in Agile workflows and embracing DevOps principles.
  • AI and Machine Learning: Utilizing AI/ML for intelligent test automation and predictive analytics.
  • Continuous Testing: Seamless and comprehensive testing throughout the software delivery process.
  • Cloud-Based Testing: Leveraging cloud computing for scalable and cost-effective testing environments.
  • Robotic Process Automation (RPA): Automating repetitive testing tasks and processes to enhance efficiency and accuracy.

QA and AI Collaboration

It's no secret that AI is transforming our lives, and collaborating with ChatGPT can automate a substantial portion of QA routines. We've compiled a list of helpful prompts to streamline your testing process and save time.

Test Case Generation

Here are some prompts to assist in generating test cases using AI:

  • “Generate test cases for {function_name} considering all possible input scenarios.”
  • “Create a set of boundary test cases for {module_name} to validate edge cases.”
  • “Design test cases to verify the integration of {component_A} and {component_B}.”
  • “Construct test cases for {feature_name} to validate its response under different conditions.”
  • “Produce test cases to assess the performance of {API_name} with varying loads.”
  • “Develop test cases to check the error handling and exceptions in {class_name}.”

Feel free to modify these prompts to better suit your specific testing requirements.

Example:

We asked for a test case to be generated for a registration process with specific fields: First Name, Last Name, Address, and City.

AI provided a test case named “User Registration” for the scenario where a user attempts to register with valid inputs for the required fields. The test case includes preconditions, test steps, test data, and the expected result.

Test Code Generation

In the same way, you can create automated tests for web pages and their test scenarios.

To enhance the relevance of the generated code, it is important to leverage your expertise in test automation. We recommend studying the tutorial and using appropriate tools, such as JetBrains Aqua, to write your tests; it provides tangible examples of automatically generating UI tests for web pages.

Progressive Tools

Using advanced tools for test automation is essential because they enhance efficiency by streamlining the testing process and providing features like test code generation and code insights. These tools also promote scalability, allowing for the management and execution of many tests as complex software systems grow.

UI Test Automation

To efficiently explore a web page and identify available locators:

  • Open the desired page.
  • Interact with the web elements by clicking on them.
  • Add the generated code to your Page Object.

This approach allows for a systematic and effective way of discovering and incorporating locators into your test automation framework.
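For context, here is a minimal, hypothetical Page Object in Java, assuming Selenium WebDriver; the page URL and locators are placeholders of the kind such a tool might generate for you:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Minimal Page Object sketch; locators like these are what you would
// collect while clicking through the page and then paste into the class.
public class LoginPage {
    private final WebDriver driver;

    // Placeholder locators; a real page's ids and CSS selectors would differ.
    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By submit   = By.cssSelector("button[type='submit']");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void open(String baseUrl) {
        driver.get(baseUrl + "/login");
    }

    public void logIn(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
    }

    public WebElement errorBanner() {
        return driver.findElement(By.cssSelector(".error"));
    }
}
```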

Code Insights

To efficiently search for available locators based on substrings or attributes, you can leverage autocompletion functionality provided by the JetBrains Aqua IDE or plugin.

In cases where you don’t remember the location to which a locator leads, you can navigate seamlessly between the web element and the corresponding source code. This allows you to quickly locate and understand the context of the locator, making it easier to maintain and modify your test automation scripts. This flexibility facilitates efficient troubleshooting and enhances the overall development experience.

Test Case As A Code

The Test Case As A Code approach is valuable for integrating manual testing and test automation. Creating test cases alongside the code enables close collaboration between manual testers and automation engineers. New test cases can be easily attached to their corresponding automated tests and removed once automated, and keeping manual and automated tests synchronized for consistency and accuracy stops being a challenge that must be addressed separately. Additionally, leveraging version control systems (VCS) offers further benefits such as versioning, collaboration, and traceability, enhancing the overall test development process.
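As a generic illustration of the idea (not Aqua's specific format), a test case's manual steps can live right next to the automated test that covers them, for example as a structured comment in a JUnit 5 test; the registration scenario and the stand-in service are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

public class RegistrationTest {

    /*
     * Test case: User Registration (manual steps kept next to the code, versioned in the VCS)
     * 1. Open the registration form.
     * 2. Fill in First Name, Last Name, Address, and City with valid values.
     * 3. Submit the form.
     * Expected: the user account is created.
     */
    @Test
    @DisplayName("User Registration with valid inputs")
    void registersUserWithValidInputs() {
        RegistrationService service = new RegistrationService();
        String status = service.register("Ada", "Lovelace", "12 Main St", "Antwerp");
        assertEquals("CREATED", status);
    }

    // Stand-in for the real system under test, so the sketch is self-contained.
    static class RegistrationService {
        String register(String first, String last, String address, String city) {
            return (first.isBlank() || last.isBlank()) ? "REJECTED" : "CREATED";
        }
    }
}
```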

Stay Tuned

The industry’s rapid development is exciting, and we are proud to be a part of this growth. We have created JetBrains Aqua, an IDE specifically designed for test automation. With Aqua, we aim to provide a cutting-edge solution that empowers testers and QA professionals. Stay tuned for more updates as we continue to innovate and contribute to the dynamic test automation field!

Author

Alexandra Psheborovskaya, QA Lead and Product Manager at JetBrains

Alexandra works as an SDET and a Product Manager on the Aqua team at JetBrains. She shares her knowledge with others by mentoring QA colleagues, for example in Women In Tech programs, supporting women in testing as a Women Techmakers Ambassador, hosting a podcast on quality, and speaking at professional conferences.

JetBrains is an EXPO Platinum partner at EuroSTAR 2023, join us in Antwerp


Reduce e-waste by using device farms

May 24, 2023 by Lauren Payne

Thanks to 42Gears for providing us with this blog post.

Did you know that around 50 million tonnes of e-waste is generated every year across the globe, and only 20% is recycled through organised and regulated channels? “If e-waste continues to accumulate in the future, it may pose a serious threat to the environment, society, and economy.”

What is e-waste?

E-waste refers to any electrical or electronic equipment that is unwanted, not working, and has been discarded by its owner as waste without the intent of re-use. E-waste includes a wide array of products. As might be expected, screens, monitors, laptops, tablets, smartphones, computers, printers, telephones, and mobile phones can all be e-waste. However, e-waste can also include household or business items that have electrical components with power or battery supply. For example, temperature exchange equipment such as refrigerators, freezers, and air conditioners can become e-waste.

E-waste contains toxic components such as mercury, cadmium, lead, polybrominated flame retardants, lithium, and barium. Unfortunately, these components can be very dangerous to human health. They can adversely affect the respiratory system, lungs, kidneys, and brain.

Trends driving e-waste creation

Perhaps the biggest reason for increased e-waste is new users joining the Internet for the first time. Internet access is expanding worldwide to more people than ever before, driving demand for devices that will eventually become e-waste. According to Internet Growth Statistics, 69% of the world's population used the internet in 2022, and going by the trend, two-thirds of the world's population will be online by the end of 2023.

Another major culprit is planned obsolescence – the practice of intentionally having devices become outdated so users will need to purchase new ones. Global increases in disposable income mean that many consumers are eager to replace older devices with new ones. Plus, many business apps and services are designed to work best on powerful, recently-made devices. In order to stay up-to-date, companies and workers must purchase new devices and discard old ones. Because of these trends, consumers and businesses are constantly getting rid of older, slower devices, creating e-waste.

What companies are doing to reduce e-waste

It is important that businesses recognize the role they play in generating e-waste. Several major companies have begun taking steps to reduce e-waste, and many others will likely follow suit.

For example, beginning with the 2020 iPhone 12 line, Apple chose not to include headphones and chargers with its phones. This can help to reduce unnecessary EEE (electrical and electronic equipment), according to Teresa Domenech of University College London's Institute for Sustainable Resources. Domenech also notes that this initiative will reduce environmental damage because Apple will need to extract fewer primary raw materials, perform less manufacturing, and ship fewer products overall.

Owing to the fact that discarded chargers contribute 11,000 metric tons of e-waste in Europe annually, European Union lawmakers have made it mandatory for mobile manufacturers to provide universal chargers.

Another way of reducing e-waste is recycling old electronics. LG’s India subdivision operates a network of 40 recyclers in India. In order to maximize the number of people participating in the program, LG picks up e-waste directly from the user’s home. Between 2017 and 2020, LG collected and recycled almost 100,000 metric tons of e-waste in India.

Of course, many technology enterprises do not manufacture their own devices, so they cannot use these techniques to reduce e-waste. Still, there are other ways to reduce e-waste; for example, investing in device farms. Let’s explore how this works.

How device farms help reduce e-waste

While the impact of e-waste is alarming across the globe, companies are researching new ways to reduce e-waste and prevent health and safety hazards. As a result, many companies have begun implementing new methods and processes that combat the rise of e-waste.

Using a device farm is one of the most innovative ways through which companies can reduce e-waste. Often, companies require large device inventories consisting of multiple devices and multiple versions of the same device for their DevOps and QA teams. Unfortunately, these devices may not be accessible to all the team members; if someone needs to test an app on a device in the office, but that person works remotely, connecting to the device will be a challenge.

However, if these devices are enrolled into a device farm, anyone on DevOps and QA teams can easily access them remotely, no matter where they are located.

This is the key to reducing e-waste with a device farm. By purchasing a single device and connecting it to a device farm, a company can make that device accessible to anyone in the company worldwide. This substantially improves ROI on a given device and removes the need to purchase multiple devices for multiple offices.

Device farms can be of two types – public device farms, or private device farms. Public device farms are third-party platforms that allow businesses to access devices owned by a third party. Companies rent these devices for a particular time slot and pay accordingly.

On the other hand, a private device farm is owned and managed by the company itself. This setup empowers all approved company employees to access enrolled devices at any time from anywhere.

There are a few companies that help organizations set up a private device farm, and 42Gears is one of them. AstroFarm by 42Gears is a great tool that helps organizations set up their own private device farms. AstroFarm offers many benefits, allowing companies to get more value from the devices they already own, making devices available worldwide in real time, and providing global teams with an easier way to coordinate app development.

Summary

E-waste has become a global challenge and needs to be addressed as soon as possible. While the abstract statistics are alarming, the real concerns are the growing environmental and health hazards associated with e-waste.

While consumers should work to reduce the e-waste they generate, enterprises are the biggest contributors to the problem of e-waste. As such, companies need to do their part to reduce e-waste by using better technology, processes, and products. Implementing AstroFarm by 42Gears can help you reduce e-waste by setting up your own device farm and getting the maximum ROI out of each device you purchase.

Author

42Gears

42Gears is a leader in enterprise mobility management, offering cutting-edge solutions designed to transform the digital workplace. Delivered from the cloud and on-premise, 42Gears products support all major mobile and desktop operating systems, enabling IT and DevOps teams to improve frontline workforce productivity as well as the efficiency of software development teams.

42Gears products are used by over 18,000 customers across various industries in more than 115 countries and are available for purchase through a global partner network. For more information, please visit https://www.42gears.com

42Gears is an EXPO Gold partner at EuroSTAR 2023, join us in Antwerp

