Business Process Automation for Software Customers – Needs and Challenges

May 29, 2024 by Lauren Payne

Automated testing is often seen as an activity performed by the team that creates the software, a task to ensure the quality of the software before it is shipped. However, there is an important use of test automation that can take place AFTER the software is shipped, and it is done BY the CUSTOMER!

The Need:

Customer organizations that buy software don't just use one particular application. For different aspects of their business, they use different software applications. These applications are integrated into various workflows so that they work in tandem to improve employee productivity. Teams in such organizations use these applications on a daily basis, and their productivity would falter if any of the applications failed to function as expected. It is a reality of the current times that each of these applications may be updated two or three times a year, if not more, and each application may have its own release cycle and cadence. Any software update may disrupt the employee workflow and cause significant damage in terms of time and money for the company. Hence, organizations need to ensure that their workflows continue to work after any update to these software applications. This task may be carried out by a business team or a QA team at the customer organization.

Can this verification be automated? What are the challenges?

We regularly work with customers in the automotive industry. These are the challenges they face.

Challenges Faced:

1) Software applications in business workflows may be built on different technology stacks. A desktop application may be used for designing automotive parts, while a web-based PLM application may be used for managing those designs. The interplay between these two applications is important for the customer teams, yet most automation tools do not support automation across multiple technologies.

2) Once shipped, applications behave mostly as a black box. They need to be exercised mostly via the graphical user interface (GUI). Importantly, we need to mimic the end user's usage pattern, so the GUI becomes significant.

3) Teams that verify such functional continuity will be small and more focused on the business aspects than the automation aspects. They may not have technical know-how (or time) to automate such third party applications using traditional programmatic tools.

So teams look for an automation tool that:

1) Can work across technologies like desktop, web, Java, SAP, etc.

2) Is easy to use – preferably low or no code

3) Has good support. Since the team may not be very technical, a good support team ensures any edge cases can be handled correctly and quickly

Over the past few years, Sahi Pro has helped many customers achieve such business process automation, especially in the automotive industry. With the upcoming Sahi Pro v11, automation becomes even easier thanks to the no-code Flowcharts interface, which makes visualizing and managing automation very accessible to non-technical testers and business users. Sahi Pro 11 Beta is currently available. Reach out to us to try Sahi Pro 11 Beta and get a POC done for your automation needs.

Author

Narayan Raman, CEO

Narayan is the CEO and founder of Tyto Software. He is the author of open source Sahi and the architect of Sahi Pro.

Sahi Pro is an exhibitor at EuroSTAR 2024, join us in Stockholm.

Filed Under: Test Automation Tagged With: 2024, EuroSTAR Conference, Expo

Autoethnographic research made me look at testing from a new angle

May 27, 2024 by Lauren Payne

Explore how autoethnographic research techniques overlap with testing behaviors and download a free template to try it yourself.

One of my favorite things about being a tester is the way testers find inspiration and techniques from outside of the testing world and integrate them into their work. In my graduate work I’ve been discovering the many ways that traditional research techniques overlap with testing behaviors, my favorite of which is autoethnographic research. 

What in the world is autoethnographic research, you ask? Let's start by looking at ethnography, the research methodology this technique grew out of. Ethnography, or ethnographic research, is a research method typically used in the social sciences in which the researcher embeds and immerses themselves in a group, community, or culture. In ethnography we acknowledge that researchers will always have a personal slant or bias; rather than attempting to remove that bias, the researcher seeks to learn from and empathize with the subjects of the study. Some of the ways we document our findings using this method include journaling, field notes, and narratives of our experiences and observations.

Autoethnography takes ethnographic research one step further and uses the researcher themselves as a subject in the study. Often used in the social sciences, the researcher interrogates their own feelings and experiences in a group, community, or culture. This technique is often used when a researcher is studying their own culture or community, or when they are conducting research on a study of only themselves. Autoethnographic research also uses journaling, field notes, and narratives to document observations during the study.

A couple of real life examples of this technique used in action are:

  • In James Osben’s A day in the life of a NHS nurse in 21st Century Britain: An auto-ethnography: James wrote a narrative of his day as it happened. He then used critical analysis to find the overarching themes of feelings, tasks, and behaviors in the narrative of his shift. Osben’s aim in his research was to use “reflective practice.” Reflective practice is one of the ways a researcher reflects on, contextualizes, and reframes their experiences for their research.
  • Jay Johnson Theil explored her place in academia using ethnographic methods in her research Working-class women in academic spaces: finding our muchness: Theil used photography, reflective journaling, movie quotes, and storytelling as research methods, then applied critical analysis to better understand how she really feels about her role in academia. 

Autoethnographic research works a lot like exploratory testing

If I’ve explained this well enough you may already see what I saw when first learning about this technique — autoethnographic research sounds like exploratory testing! I’ve long advocated for paying close attention to your thoughts and feelings as you work your way through an exploratory testing session. With this research method, you can take your exploratory testing even further. Here are a few ways to apply this method to your testing. 

Journal your feelings while testing. As you work your way through whatever you're testing, focus on how you feel and what you think. Do you feel frustrated, delighted, or confused? Whatever feelings the application elicits from you, write them down as you feel them. You may also consider other types of journaling using methods like color-coded or face sentiment trackers.

Use voice narration. Narrate your thoughts and feelings to a voice app or Zoom session. To make sharing the narrated session easier for your team, you can use a transcription app or speech to text.

Reflect at the end of testing sessions. If you’re more interested in overall sentiment as opposed to thoughts and feelings in the moment, consider writing or narrating a reflection of thoughts and feelings from the testing session. 

Debrief with your peers. Many teams hold debriefing sessions on a regular basis after exploratory testing. These sessions are the perfect time to share your thoughts and feelings documented in the session.

I will be the first to admit that I can't remember what I did yesterday. Over time, I tend to forget my feelings towards the application I'm testing, which can hinder quality because my brain isn't attuned to looking for trends. Starting to use these autoethnographic techniques on an individual level will help you uncover trends in your reactions, thoughts, and feelings related to the application while you test.

You’ll have an even more impactful and beneficial experience if you use these techniques with your team and track over time. Some ways to do so are:

  • Bug summary journal: Keeping a log of bugs you’ve found and experienced in your application along with sentiment tracking towards each bug helps demonstrate the impact on quality that you feel each bug has. This will also help you see if there are periods when you feel bugs are related to sloppy code or a lack of understanding of the problem the team is trying to solve. 
  • Daily journaling: Tracking your thoughts and feelings about the application on a daily basis will give you a timeline of your sentiment toward the application. This ongoing log gives you an opportunity to identify trends and track confidence in quality in your application over time.
  • Daily reflections: Similar to reflections at the end of a testing session, a daily reflection gives you a summary of the day. Over time, these reflections will expose trends and when compared with peers, may expose shared thoughts and feelings about the application that haven’t been identified previously. 
  • Daily sentiment tracking: A lighter lift than some of the other options that can still expose trends. Each team member tracks their sentiment towards the application each day using either faces, color codes, or other tracking methods. This simplified tracking can help demonstrate changes in confidence in quality over time.

Integrating these techniques is a great way to help your team shift from testing to quality advocacy and demonstrate your ability to identify the quality indicators that have the biggest impact on the overall health and usability of your application.

Give these techniques a try with a free downloadable template from our GitHub resources.

Author

Jenna Charlton, QA and Developer Advocate

Jenna is a QA and Developer Advocate. They've spoken at a number of dev and test conferences and are passionate about testing, agile, teams, and developing the next generation of testers.

Qase is an exhibitor at EuroSTAR 2024, join us in Stockholm.

Filed Under: Software Testing Tagged With: EuroSTAR Conference, Expo

Leveraging Effective Test Management to Mitigate Economic Uncertainty

May 24, 2024 by Lauren Payne

Economic uncertainty looms like a dark cloud over businesses, casting a shadow of unpredictability and challenges. From sudden market fluctuations to geopolitical events and policy changes, the business landscape is filled with obstacles that can cause budgets to shrink, timelines to shorten, and resources to become scarce. And it's a global thing: the economic uncertainty that accompanied 2023 affects organizations all over the world and across different industries.

Effective test management can be a key strategy here, providing a solid foundation to reduce economic uncertainty and enable rapid adaptation to market changes. In this blog, we delve into the realm of test management and its remarkable potential to counter the adverse effects of economic uncertainty.

Understanding Economic Uncertainty

Economic uncertainty refers to a condition in which the future state of the economy, including factors such as growth, inflation, employment, and overall financial stability, becomes uncertain or unpredictable. Let’s break down the sources of uncertainty and the potential consequences following it.

Sources of Economic Uncertainty

  • Market fluctuations: rapid shifts in supply and demand, changes in consumer behavior, or economic downturns can create a volatile and uncertain market environment.
  • Geopolitical factors: political instability, trade conflicts, or regulatory changes can interrupt economic cycles and introduce uncertainty.
  • Policy changes: changes in fiscal policies, tax regulations, or government interventions can impact business operations and investment decisions, leading to increased uncertainty.
  • Global events: natural disasters and global economic or health crises (such as the COVID-19 pandemic) can significantly affect businesses worldwide.

Consequences of Economic Uncertainty on Businesses

  • Lower consumer confidence: economic uncertainty can harm consumer confidence, leading to cautious spending patterns and a decline in demand for products and services. Companies might see lower revenue as customers are more likely to cut expenses.
  • Financial instability: fluctuating market conditions and uncertain economic outlooks can pose financial challenges, including cash flow constraints, difficulty securing financing, or increased borrowing costs.
  • Investment hesitation: economic uncertainty often makes businesses more risk-averse, causing delays in capital investments, expansion plans, or research and development initiatives.
  • Supply chain disruptions: uncertainty can impact supply chains, causing disruptions in sourcing materials, increased costs, or delays in production and delivery.

The Value of Effective Test Management

Efficient software testing management can play a vital role in mitigating economic uncertainty by providing businesses with structured approaches to quality assurance. A comprehensive testing process includes test planning, creation, execution, and defect management, all of which are crucial to delivering high-quality software to end users.

Risk Management & Early Bug Detection

When implemented effectively, test management plays a pivotal role in risk management and the early detection of bugs, benefiting companies in numerous ways.

By conducting thorough software testing, organizations can manage product-related risks by identifying and addressing them in the early stages of development. This proactive approach prevents these defects from escaping into production – when they are more costly to fix – and impacting the end-user’s experience. The end result is a reliable software product that meets business requirements and customer expectations.

High Flexibility & Adaptability

During uncertain times, project requirements may frequently change due to evolving market conditions or business priorities. Combining Agile practices in your software testing management enhances the organization’s ability to quickly respond to evolving requirements or changes in customer demands. Test managers collaborate with other stakeholders to understand the updated requirements, adjust test plans and strategies accordingly, and communicate any necessary changes to the testing team.

This way, companies can optimize software functionality and align it with shifting economic landscapes.

Combining Automation Testing

Automation plays a significant role in reducing costs and improving efficiency in software testing. Test managers leverage automation tools to perform tests that are prone to human error or extremely time-consuming. Businesses can significantly increase productivity and complete complex tests in a shorter time frame with high confidence, knowing the results are reliable. As automation eliminates the need for manual intervention, it minimizes the risk of human error and enables testers to focus on other critical aspects of the testing process.

Enhance Efficiency with a Test Management Platform

A great way to further improve software testing management is to use a dedicated test management tool. These comprehensive platforms offer a centralized solution for managing all types of testing activities, such as planning, executing, tracking, and reporting. This makes it easier to manage test cases and defects, categorize them by status, prioritize them, and assign them effectively so that everyone stays on the same page.

One of the main benefits of these platforms is the reusability of tests. Rather than reworking and creating tests from scratch, QA testers can save precious time by reusing existing tests in other relevant projects or sprints. Tests designed for automated testing can also be managed through a test management platform. With powerful integrations with automation frameworks and tools, QA managers can manage all types of tests within one platform and gain full transparency over the testing process.

Test management platforms provide comprehensive reporting capabilities, enabling test managers to generate meaningful reports of different testing artifacts. These reports help identify bottlenecks, track important QA metrics, and enable data-driven decision-making for process improvement.

With a test management platform, test managers and teams can streamline and optimize their testing efforts, resulting in improved efficiency, enhanced collaboration, and higher-quality software.

3 Tips for Effective Test Management

Here are three tips to help you navigate through these challenges and ensure effective test management:

Understanding & Adjusting Objectives

As customer and business needs rapidly change during economic uncertainty, it is essential for QA managers to closely collaborate with stakeholders. By working together, they can gain a deep understanding of the evolving needs and align internal QA objectives accordingly.

Transparent communication and increased collaboration are key elements of aligning testing assignments with the dynamic requirements. Prioritizing testing tasks according to these needs ensures that limited resources are utilized effectively, optimizing efficiency and customer satisfaction.

Embracing Modern Agile Practices

Agile methodologies offer numerous benefits in uncertain times. With Agile principles, such as flexibility, collaboration, and shifting left, organizations can respond quickly to changing needs and adapt their testing processes accordingly.

Incorporating concepts like Continuous Integration and Continuous Delivery (CI/CD) enables automated and frequent software releases, allowing for quick feedback and efficient bug fixes. Agile testing techniques such as exploratory testing, BDD, and automation further enhance adaptability and speed in a rapidly changing environment.

Leveraging a Variety of Testing Tools

The final tip for effective test management is to use a variety of testing tools. Utilizing multiple test automation and CI/CD tools covers diverse testing tasks, allowing comprehensive, automated testing processes to be completed faster than ever. In addition, implementing a robust test management platform centralizes testing activities, streamlines collaboration, and provides clear, end-to-end visibility into testing progress. Combining these tools optimizes testing efforts and yields higher-quality deliverables.

Summary

In the face of economic uncertainty, effective test management becomes essential for businesses to navigate challenges, mitigate risks, and deliver high-quality software products. In uncertain times, understanding dynamic customer needs, embracing modern Agile practices, and leveraging testing tools can help test managers better align with evolving customer requirements and enhance testing efficiency.

Additionally, leveraging testing automation tools and a robust test management platform such as PractiTest can increase productivity and ensure effective team collaboration. By implementing these strategies and adopting a proactive approach, organizations can navigate economic uncertainty with confidence, delivering reliable software that meets customer expectations.

Author


PractiTest

PractiTest is an end-to-end SaaS test management platform that centralizes all your QA work, processes, teams, and tools into one platform to bridge silos, unify communication, and enable one source of truth across your organization.

With PractiTest you can make informed, data-driven decisions based on end-to-end visibility provided by customizable reports, real-time dashboards, and dynamic filter views. Improve team productivity: reuse testing elements to eliminate repetitive tasks, plan work based on AI-generated insights, and enable your team to focus on what really matters.

PractiTest helps you align your testing operation with business goals and deliver better products faster.

PractiTest is an exhibitor at EuroSTAR 2024, join us in Stockholm.

Filed Under: Development Tagged With: 2024, EuroSTAR Conference, Expo

Embracing Crowdtesting for Quality Assurance: A Strategic Imperative for Software Development

May 22, 2024 by Lauren Payne

In an era marked by rapid digital transformation, the quality of software products has emerged as a linchpin of success for companies across the globe. Digital natives, who represent future generations, are shaping market trends more than ever before, leading to a high demand for flawless, easy-to-use, and feature-packed products. In today's evolving landscape, organizations need to rethink their approach to quality assurance (QA) and product testing, recognizing the necessity of integrating native quality management with crowdtesting methodologies. This holistic integration ensures comprehensive coverage and adaptability to meet the demands of today's dynamic market.

The Imperative for a Quality-Centric Culture

The cost of neglecting quality in software development can be staggering. Companies that fail to cultivate a culture deeply rooted in quality management face not only financial losses from rectifying errors but also damage to their brand reputation and customer trust. A quality-centric culture is not merely about detecting and fixing bugs; it’s about embedding quality into every phase of the development lifecycle, from initial design to final release and further iterations. Adopting a native quality management approach involves seamlessly integrating QA processes with development workflows, ensuring that QA and development teams collaborate closely.

Crowdtesting: Leveraging the Power of Real-World Feedback

As the digital landscape becomes increasingly user-driven, understanding and meeting the diverse needs of various user segments is critical. Crowdtesting emerges as a powerful solution to this challenge, enabling organizations to test their products in real-world scenarios across a wide range of devices, operating systems, and user environments. This approach not only validates the functionality and usability of products but also uncovers nuanced insights into user preferences and behaviors, facilitating a deeper connection with the target audience. Crowdtesting bridges the gap between theoretical QA and practical, user-centric validation. By engaging a targeted group of users from the intended market segment, companies can gather actionable feedback on their products' performance, usability, and appeal. This method provides a more nuanced understanding of subjective user experiences, enabling developers to refine their products in ways that resonate with their audience's expectations and preferences.

Integrating Quality Management and Crowdtesting

The integration of native quality management and crowdtesting represents a comprehensive strategy for achieving excellence in software development. This dual approach ensures that quality is not only baked into the development process but also validated through extensive, real-world testing. By measuring quality maturity and incorporating crowdtesting feedback early and throughout the product lifecycle, companies can anticipate and mitigate potential issues, streamline their development processes, and enhance product quality. Such an integrated approach also fosters a culture of continuous improvement and innovation. As teams become more aligned on quality objectives and gain insights from direct user feedback, they are better equipped to make informed decisions, prioritize features, and deliver products that truly meet, if not exceed, user expectations.

Conclusion: The Future of Software Development is User-Driven

The digital age demands a new paradigm in software development—one that places quality and user experience at the forefront. By embracing a quality-centric culture and integrating crowdtesting into the product development lifecycle, companies can navigate the complexities of modern software development more effectively. This strategic imperative not only enhances product quality and user satisfaction but also positions companies for sustained success in a competitive digital marketplace. As we look to the future, crowdtesting will undoubtedly become a cornerstone of successful software development. It promises not just better products but also a deeper understanding of the ever-evolving digital consumer, ensuring that companies can continue to innovate and thrive in the digital age.

Figure 1: The next level of digital excellence: Embrace Crowdtesting

Author


Stephan Ingerberg, Head of Sales, msg Test & Quality Management

Stephan Ingerberg is a seasoned professional with over a decade of experience in the realm of software quality and digital assurance. He has been a dedicated disciple of quality and testing since 2004.

He currently serves as a pivotal figure in the Test & Quality Management division of msg, responsible for sales, customer relations, and commercial aspects within central Europe. His unwavering dedication to excellence and adept navigation of software quality make him indispensable in the pursuit of digital perfection.

https://www.linkedin.com/in/stephan-ingerberg-digital-transformation/

msg Test & Quality Management is an Exhibitor at EuroSTAR 2024, join us in Stockholm.

Filed Under: DevOps Tagged With: 2024, EuroSTAR Conference, Expo

Empowering Functional Manual Testers

May 20, 2024 by Lauren Payne

Let’s be honest. The dream of 100 percent test automation often turns out to be a nightmare. After all, it’s not just about writing a few scripts. Successful test automation needs to be well thought out, requires a test architecture and, above all, time. If you let the reins slip for even a few iterations, technical debt will creep in. In the worst case, the test cases become flaky. 

Less spectacular, but just as painful in the long run, is gentle erosion. While the system to be tested is constantly evolving, the automated scripts lag behind. They may still run successfully, but over time they become less and less meaningful.

“Not true!”, you are probably thinking. “We do test-driven development.” That’s laudable, but probably only part of the truth. Because you certainly have them too – the functional manual testers who cover the upper part of the test pyramid and look at the system as a whole, perhaps even in interaction with other applications. 

The Importance of Functional Testing

Functional testing is important. Of course, as much as possible should be checked during unit testing, but to be honest: the nasty bugs are hidden between the components. An example:

An app requires 2-factor authentication for login. Therefore, an 8-digit code consisting of letters and numbers is sent by email. Unfortunately, the app only allows numbers to be entered.

This example is not made up and illustrates the difference between unit and system testing. Each component complied with its specification and was tested successfully. Nevertheless, the overall system is unusable because the error was not in the code, but in the specification. Functional testers find such errors and that is what makes them so valuable.

Manual testing has never been easy, but today, in the age of Agile, it's a pure race against time. If manual testers don't want to be left behind, they need to understand the intricacies of automated functional testing, because test automation relieves them of repetitive regression tests and allows them to focus on the important things, such as new features.

Fundamental Principles of Effective Functional Test Design

In fact, there are several common principles that are important in both manual testing and test automation:

  • Clarity: Clarity is key. Structured, easy-to-grasp tests improve comprehension and minimize ambiguity, benefiting all stakeholders. Visual diagrams have long helped developers simplify complex problems. Testing can benefit from similar visualization, as functional tests mirror the intricate nature of systems under test. Visual test design definitely improves clarity, making it easier for test automation engineers to understand the business aspects of functional testing.
  • Modularity: By breaking down complex scenarios into manageable test cases, manual testers can lay the groundwork for seamless automation, ensuring that each test remains a valuable asset throughout the software development lifecycle.
  • Maintainability: Functional tests will continue to evolve, as will the associated test scripts. Some changes affect the technical level, others the functional level. Keyword-driven testing is a proven method of separating these two levels. Manual testers can thus contribute to the maintenance of automated tests without having to program themselves.

The goal of effective functional test design must therefore be to develop tests that are easy to maintain and update.
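To make the keyword-driven idea mentioned above concrete, here is a minimal sketch in Python (the keywords and application calls are invented for illustration): the functional layer is a plain table of keywords and arguments that a manual tester can edit, while the technical layer maps each keyword to automation code maintained by an engineer.

```python
# Technical layer: each keyword is implemented once by an automation engineer.
def open_application(name):
    print(f"launching {name}")  # stand-in for real desktop/web automation calls

def log_in(user, password):
    print(f"logging in as {user}")

def verify_title(expected):
    print(f"checking that the window title is '{expected}'")

KEYWORDS = {
    "Open Application": open_application,
    "Log In": log_in,
    "Verify Title": verify_title,
}

# Functional layer: a plain table a manual tester can read and maintain
# without programming knowledge.
TEST_CASE = [
    ("Open Application", ["Parts Designer"]),
    ("Log In", ["tester", "secret"]),
    ("Verify Title", ["Parts Designer - Home"]),
]

for keyword, args in TEST_CASE:
    KEYWORDS[keyword](*args)
```

When the application changes technically, only the keyword implementations are touched; when the business flow changes, only the table is edited.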

Transitioning from Manual to Automated Execution

Automation is often viewed as process optimization, as it takes over error-prone, repetitive tasks. Testers can use the time gained to focus on those tasks that require human judgment. Executing existing tests repeatedly, following the accelerating rhythm of testing cycles, is certainly not a mentally demanding task. Mechanically writing down test procedures is not a great intellectual accomplishment either. The real value lies in the preliminary considerations: What situations could occur? How should the system behave in each case? What can I do to push the system beyond its limits? Automation is therefore also a helpful and welcome support when creating test cases.

Studies[1] show that the combination of intellectual performance in test design and automated test execution not only increases test speed, but also the coverage of requirements and code. In general, the reliability of tests is increased when automated scripts support manual testers, especially in lengthy tests. This can be seen very clearly in load and performance tests, which are unthinkable without automation.

An intelligent automation strategy effectively balances human expertise with automated tasks.

A Practical Case Study – Yest Augmented by Maveryx

In this example, we show the successful marriage of two concepts represented by Yest and Maveryx.

Yest is a visual test design tool that implements a modern form of model-based testing (MBT) and test generation. Yest offers a whole range of functions to enhance the creative work of test design and speed up the painstaking task of writing test cases. Yest itself is agnostic, meaning that the generated scenarios may be used for manual and/or automated test execution.

Maveryx is an automated software testing tool that provides functional UI, regression, data-driven and codeless testing capabilities for a wide range of desktop and Web technologies. Its innovative and intelligent technology inspects the application’s UI at runtime as a senior tester does. With Maveryx, there is no need for code instrumentation, GUI capture, maps and object repositories.

From Visual Test Design for Manual Tests…

With Yest Augmented by Maveryx, functional manual testers concentrate on creating graphical workflows and define actions, including the expected results, for the various cases to be tested. Yest then generates test scenarios from these workflows and the stored information, which can be executed manually right away.

…to Automated On-The-Fly Test Execution

But can they also be automated and, if so, what results can we expect? This is where Maveryx comes into play. All you have to do is provide detailed instructions for the manual steps in Yest. Yest Augmented by Maveryx recognizes these instructions for manual execution (for example, "Click on Submit") and executes them automatically.

If a step is not interpretable or fails, Yest Augmented by Maveryx stops and waits for manual input. You may then abort the test, or perform the necessary steps manually and continue. The execution results are reported in Yest Augmented by Maveryx or in the test management tool you read the test cases from.

Use Cases

Yest Augmented by Maveryx serves in various situations:

  1. Hardening the test procedure: Manual testers may execute their tests without having to write a single line of code. This provides them with rapid feedback on the quality of their tests. No more tedious analyses and heated disputes about whose fault it was.
  2. Executing manual tests directly from Xray or MS Excel: During official test runs, manual testers can call upon Yest Augmented by Maveryx to do part of the work. In fact, it is possible to import tests from Xray or MS Excel and execute them even without having used Yest Augmented by Maveryx for test design.

As functional manual testers embark on their automation journey, they will gain valuable insights into creating robust, maintainable test suites that stand the test of time.

Conclusion

This blog aims to guide manual functional testers through the complicated process of developing functional tests and harnessing the transformative power of automation. In an age where software development demands agility and speed, understanding the intricacies of automated functional testing is critical for testers looking to optimize their workflows.

Clarity, modularity and maintainability are key success factors for a successful transition from manual to automated test execution. Visual test design and model-based test case generation pave the way to structured tests and complete coverage. With the right tool support, it is possible to carry out these tests automatically without having to write a single line of code. Yest Augmented by Maveryx provides this support. Contact us to learn more at info@maveryx.com or contact@smartesting.com


[1] Khankhoje, Rohit. (2023). Revealing the Foundations: The Strategic Influence of Test Design in Automation. International Journal of Computer Science and Information Technology. 15. 10.5121/ijcsit.2023.15604.

Authors


Alfonso Nocella, Co-founder and Sr. Software Engineer at Maveryx

Alfonso led the design and development of some core components of the Maveryx automated testing tool. He collaborated in some astrophysics IT research projects with the University of Napoli Federico II and the Italian national astrophysics research institute (INAF). Over the decades, Alfonso worked on many industrial and research projects in different business fields and partnerships. Also, he was a speaker at several conferences and universities.

Today, Alfonso supports critical QA projects of some Maveryx customers in the defense and public health fields. Besides, he is a test automation trainer, and he takes care of the communication and the technical marketing of Maveryx.


Anne Kramer, Trainer and Global CSM at Smartesting

Anne Kramer first came into contact with model-based test design in 2006 and has been passionate about the topic ever since. Among other things, she was co-author of the "ISTQB FL Model-Based Tester" curriculum and lead author of the English-language textbook "Model-Based Testing Essentials" (Wiley, 2016).

After many years of working as a process consultant, project manager and trainer, Anne joined the French tool manufacturer Smartesting in April 2022. Today, she is fully dedicated to using models for testing purposes. This includes visual test design and, more recently, generative AI.

Maveryx is an Exhibitor at EuroSTAR 2024, join us in Stockholm.

Filed Under: Software Testing Tagged With: 2024, EuroSTAR Conference, Expo

How to Build a More Valuable End-to-End Testing Strategy

May 17, 2024 by Lauren Payne

End-to-end testing is one of the most effective ways software teams can understand the customer experience. Unlike unit or component testing, which focus on individual pieces of the application, E2E tests seek to understand product quality as an integrated journey. In many ways, end-to-end testing exemplifies the expanding role of software testing in a DevOps world: the crucial connection between how software is built and how it is used. When development teams understand how their changes will impact their end users, they’re better able to deliver value to those customers. When automated effectively, end-to-end testing provides this connection quickly enough to support continuous delivery.

But the shift to digital-first experiences means that end-to-end testing needs to evolve and expand, running contrary to established testing best practices. Even a simple user journey, such as the one outlined below, likely involves multiple third-party APIs and services as well as email touchpoints and personalized offers or recommendations. Development teams must build seamless user experiences that make a complex customer journey feel simple. Expanding the definition of end-to-end testing ensures they can do so successfully. But with traditional testing frameworks, complicated automated testing meant a high risk for broken tests, extra maintenance, and inaccurate results, which ultimately slowed down development pipelines. Quality teams instead opted for shorter, simpler end-to-end tests that were less likely to break as the product evolved. But the holistic view provided by true end-to-end testing is extremely valuable – if quality teams have the tools to manage them.

New Customer Journeys Demand Broader End-to-End Testing

Figure: The testing pyramid and an example end-to-end test for an ecommerce website

The image above includes an example of an end-to-end test for an ecommerce website. Despite this being a fairly simple – and common – transaction, an automated test needs to cover a marketing email, a coupon code, and an invoice email with a PDF attachment. But the story doesn’t end there: it’s extremely likely that the checkout test step includes an API for a payment service like Square or Afterpay. It’s also likely that coupon codes are personalized for customers, given that loyalty programs with customized rewards are proven to increase consumer spending.

Skip these steps, and there is a real risk to revenue. If a marketing email fails to accurately show a customer’s coupon code, conversion rates will suffer, impacting sales and potentially churning previously loyal customers. Managing this type of comprehensive test is essential for supporting quality customer experiences.
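To make the email leg of that journey concrete, the minimal sketch below uses Python's standard imaplib and email modules to check that the marketing email actually contains the customer's coupon code; the mailbox host, credentials, subject line, and coupon value are placeholders, and a low-code platform would normally handle this step without hand-written scripts.

```python
import imaplib
import email

# Placeholder values; a real test would pull these from test configuration.
IMAP_HOST = "imap.example.com"
MAILBOX_USER = "test-customer@example.com"
MAILBOX_PASSWORD = "app-specific-password"
EXPECTED_COUPON = "WELCOME-1234"


def marketing_email_contains_coupon() -> bool:
    """Search the inbox for the campaign email and check it carries the coupon code."""
    with imaplib.IMAP4_SSL(IMAP_HOST) as conn:
        conn.login(MAILBOX_USER, MAILBOX_PASSWORD)
        conn.select("INBOX")
        # Find messages whose subject matches the marketing campaign.
        _, data = conn.search(None, '(SUBJECT "Your welcome offer")')
        for num in data[0].split():
            _, msg_data = conn.fetch(num, "(RFC822)")
            message = email.message_from_bytes(msg_data[0][1])
            for part in message.walk():
                if part.get_content_type() == "text/plain":
                    body = part.get_payload(decode=True).decode(errors="ignore")
                    if EXPECTED_COUPON in body:
                        return True
    return False


if __name__ == "__main__":
    assert marketing_email_contains_coupon(), "Coupon code missing from marketing email"
```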

The Challenges of Comprehensive End-to-End Testing

Though the above end-to-end test is critical for understanding the user experience and how each change will impact it, such tests pose several challenges for developers and software testers. First, maintaining such an extensive automated test with scripted test automation frameworks is likely to consume a significant amount of a testing team’s time and effort, which has a serious impact on an organization’s ability to accelerate product velocity. Since additional test steps increase the risk of a test breaking, most quality teams avoid creating longer end-to-end tests in order to reduce the burden of test maintenance. But what they gain in reliability, they lose in test coverage.

Second, comprehensive automated tests often require longer investigations into failures. Combing through a long list of test steps to identify the specific cause of a test failure can take valuable hours, a luxury development teams don’t have as delivery cycles shorten. Considering that 44% of developers say that investigating failed tests is a significant pain point, quality teams must have effective strategies in place to triage comprehensive end-to-end tests when necessary.

Maintaining More Complex End-to-End Tests

An end-to-end test covering email, API, and non-functional test steps is highly susceptible to any product changes, but advances in AI and machine learning have reduced the amount of time and effort needed to maintain automated tests, making it possible for quality teams to manage comprehensive end-to-end tests. Using unique identifying elements across an application’s UI, including shadow DOM components, intelligent test automation solutions can detect product changes and update end-to-end tests accordingly.
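For context on why shadow DOM support matters, the sketch below shows what reaching into a shadow root looks like when scripted by hand with Selenium 4 on a Chromium-based browser (the page URL and selectors are hypothetical); tools that track elements across the shadow DOM spare teams from writing and maintaining selectors like these.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://shop.example.com/checkout")  # hypothetical page

# Locate the custom element that hosts a shadow DOM (selector is hypothetical).
host = driver.find_element(By.CSS_SELECTOR, "payment-widget")

# Selenium 4 exposes the shadow root on Chromium-based browsers.
shadow = host.shadow_root
pay_button = shadow.find_element(By.CSS_SELECTOR, "button.pay-now")
pay_button.click()

driver.quit()
```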

Automating end-to-end test maintenance not only ensures that test maintenance is less labor-intensive, but also allows more team members to contribute. For example, manual testers can more easily collaborate on E2E tests that contain integrated API tests, ensuring that comprehensive end-to-end tests capture the full user journey and accurately assess quality.

Reporting on End-to-End Testing

Even when end-to-end tests are well maintained, identifying the root cause of an error can be time-consuming, causing delays and disruptions in the later stages of the SDLC. Rapid results that support fast bug resolution are critical for delivering exceptional user experiences at the speed of DevOps.

Advances in cloud-based testing and the availability of SaaS test automation tools are making it easier to scale and maintain comprehensive end-to-end testing strategies. Cloud-based runs give in-depth insights that support continuous improvement, and can be run on a schedule or as part of a CI/CD pipeline. Flexible execution options make it possible to routinely and reliably run comprehensive end-to-end tests without slowing development. But perhaps even more importantly, integrating end-to-end test automation into existing development workflows allows developers to quickly act on end-to-end test results.

Building workflows that surface comprehensive end-to-end test results in a digestible way supercharges their value. Sharing test results as Jira tickets, complete with screenshots of the point of failure, DOM snapshots, and performance logs, is ideal for triaging comprehensive end-to-end tests since developers can easily identify what test step caused the failure. The time from failure to fix becomes much shorter, making comprehensive end-to-end tests highly actionable.
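As one concrete illustration of such a workflow (a minimal sketch against the public Jira Cloud REST API, not mabl's built-in integration; the base URL, credentials, and project key are placeholders), a failing end-to-end run could be surfaced as a Jira issue like this:

```python
import requests

JIRA_BASE_URL = "https://your-domain.atlassian.net"  # placeholder
JIRA_USER = "qa-bot@example.com"                     # placeholder
JIRA_API_TOKEN = "api-token"                         # placeholder


def file_failure_ticket(test_name: str, failed_step: str, log_excerpt: str) -> str:
    """Create a Jira bug carrying the failing step and a log excerpt; returns the issue key."""
    payload = {
        "fields": {
            "project": {"key": "QA"},       # placeholder project key
            "issuetype": {"name": "Bug"},
            "summary": f"E2E failure: {test_name} at step '{failed_step}'",
            "description": f"Failed step: {failed_step}\n\nLogs:\n{log_excerpt}",
        }
    }
    response = requests.post(
        f"{JIRA_BASE_URL}/rest/api/2/issue",
        json=payload,
        auth=(JIRA_USER, JIRA_API_TOKEN),
    )
    response.raise_for_status()
    return response.json()["key"]
```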

The Future of End-to-End Testing

Comprehensive end-to-end tests are often considered too time-consuming to provide real value to development teams. But their ability to ensure quality from the perspective of the customer is invaluable, even essential, at a time when every business is competing on its digital customer experience. Overcoming the test maintenance, execution, and investigation obstacles to comprehensive end-to-end tests gives development organizations a powerful tool for understanding how changes will impact their users. And with the right test automation solution, end-to-end testing becomes an adaptable process that can continuously evolve to match real customer needs. A few examples include automated accessibility checks, integrated API tests, shadow DOM components, and cross-browser testing. No matter what your customers need, comprehensive end-to-end tests will help your team deliver exceptional user experiences.

Author


Bridget Hughes, Content Marketing Manager at mabl

Bridget is the Content Marketing Manager at mabl, the unified test automation platform for delivering modern software quality. She’s dedicated to helping quality teams expand testing and improve product quality through educational blogs, articles, and the occasional software testing meme.

Mabl is an Exhibitor at EuroSTAR 2024, join us in Stockholm.

Filed Under: Software Testing Tagged With: 2024, EuroSTAR Conference, Expo

Prompt-Driven Test Automation

May 15, 2024 by Lauren Payne

Bridging the Gap Between QA and Automation with AI

In the modern software development landscape, test automation is often a topic of intense debate. Some view it strictly as a segment of Quality Assurance, while others, like myself, believe it intersects both the realms of QA and programming. The Venn diagram I previously shared visualizes this overlap.

Historically, there’s a clear distinction between the competencies required for QA work and those needed for programming:

Skills Required for QA Work:

  • Critical Thinking: The ability to design effective test cases and identify intricate flaws in complex systems.
  • Attention to Detail: The ability to ensure that minor issues are caught before they escalate into major defects.
  • Domain knowledge: A thorough understanding of technical requirements and business objectives to align QA work effectively.

Skills Required for Programming:

  • Logical Imagination: The capability to deconstruct complex test scenarios into segmented, methodical tasks ripe for efficient automation.
  • Coding: The proficiency to translate intuitive test steps into automated scripts that a machine can execute.
  • Debugging: The systematic approach to isolate issues in test scripts and rectify them to ensure the highest level of reliability.

We’re currently at an AI-driven crossroads, presenting two potential scenarios for the future of QA: one where AI gradually assumes the roles traditionally filled by QA professionals, and another where QAs harness the power of AI to elevate and redefine their positions.

This evolution not only concerns the realm of Quality Assurance but also hints at broader implications for the job market as a whole. Will AI technologies become the tools of a select few, centralizing the labor market? Or will they serve as instruments of empowerment, broadening the horizons of high-skill jobs by filling existing skill gaps?

I’m inclined toward the latter perspective. For QA teams to thrive in this evolving ecosystem, they must identify and utilize tools that bolster their strengths, especially in areas where developers have traditionally dominated.

So, what characterizes such a tool? At Loadmill, our exploration of this question has yielded some insights. To navigate this AI-augmented future, QAs require:

  • AI-Driven Test Creation: A mechanism that translates observed user scenarios into robust test cases.
  • AI-Assisted Test Maintenance: An automated system that continually refines tests, using AI to detect discrepancies and implement adjustments.
  • AI-Enabled Test Analysis: A process that deploys AI for sifting through vast amounts of test results, identifying patterns, and highlighting concerns.

When it comes to actualizing AI-driven test creation, there are two predominant methodologies. The code-centric method, exemplified by tools like GitHub Copilot, leans heavily on the existing codebase to derive tests. While this method excels in generating unit tests, its scope is inherently limited to the behavior dictated by the current code, making it somewhat narrow-sighted.

By contrast, Loadmill champions the behavior-centric approach: an AI system that allows QA engineers to capture user interactions or describe them in plain English to create automated test scripts. The AI then undertakes the task of converting this human-friendly narrative into corresponding test code. This integration of AI doesn’t halt here – it extends its efficiencies to test maintenance and result analysis, notably speeding up tasks that historically were time-intensive.

In sum, as the realms of QA and programming converge, opportunities for innovation and progress emerge. AI’s rapid advancements prompt crucial questions about the direction of QA and the broader job market. At Loadmill, we’re committed to ensuring that, in this changing landscape, QAs are not just participants but pioneers. I extend an invitation to all attendees of the upcoming conference: visit our booth in the expo hall. Let’s delve deeper into this conversation and explore how AI can be a game-changer for your QA processes.

For further insights and discussions, please engage with us at the Loadmill booth.

Author


Ido Cohen, Co-founder and CEO of Loadmill

Ido Cohen is the Co-founder and CEO of Loadmill. With over a decade of experience as both a hands-on developer and manager, he’s dedicated to driving productivity and building effective automation tools. Guided by his past experience in coding, he continuously strives to create practical, user-centric solutions. In his free time, Ido enjoys chess, history, and vintage video games.

Loadmill is an Exhibitor at EuroSTAR 2024, join us in Stockholm.

Filed Under: Test Automation Tagged With: 2024, EuroSTAR Conference, Expo

Operationalizing BDD Scenarios Through Generative AI

May 13, 2024 by Lauren Payne

Behavior Driven Development (BDD) is a well-regarded way to write application requirements as scenarios that describe the behaviour in various contexts. BDD evolved from the agile movement and its emphasis on Test-Driven Development (TDD) to take things one step further than simple user stories and document a user’s behaviour when they use a system.

Typically using the Gherkin syntax, a user scenario in BDD is written the following way:

  • Given: the initial context at the beginning of the scenario, in one or more clauses;
  • When: the event that triggers the scenario;
  • Then: the expected outcome, in one or more clauses.

Gherkin is a business-readable language that helps you describe business behavior without going into details of implementation. It is a domain-specific language for defining tests in a standardized format for specifications. It uses plain language to describe use cases and allows users to keep implementation logic out of behavior tests.
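For example, a representative scenario for a simple login feature might read:

```gherkin
Feature: User login

  Scenario: Successful login with valid credentials
    Given the user is on the login page
    When the user submits the correct username and password
    Then the user is redirected to their dashboard
```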

The primary benefit of BDD is that it encourages communication between developers and other stakeholders, such as product owners and users. BDD helps bridge the gap between technical and non-technical stakeholders by providing a common language for discussing the behavior of the system. By using this language, stakeholders can understand each other’s needs and expectations, leading to better development decisions.

In addition to being a useful way of describing a requirement specification, the text in the Gherkin language acts as both documentation and the skeleton of your automated tests. For example, test automation engineers often take the Gherkin scenarios and use a framework like Cypress or Robot Framework to turn these high-level user interactions into executable test automation scripts. However, this process is manual and can be time-consuming, with the automation engineers having to hand-write large amounts of Python or JavaScript to turn one BDD scenario into a functioning automated test.
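As a rough sketch of what that hand-written glue code looks like (using Python with the behave library rather than Cypress or Robot Framework; the URL and element IDs are invented for illustration), each Gherkin clause from the scenario above is bound to a step definition:

```python
# steps/login_steps.py: hand-written glue binding each Gherkin clause to automation code.
# Assumes a Selenium driver is created in behave's environment.py hook, e.g.:
#   def before_all(context): context.browser = webdriver.Chrome()
from behave import given, when, then
from selenium.webdriver.common.by import By


@given('the user is on the login page')
def step_open_login_page(context):
    context.browser.get("https://app.example.com/login")  # hypothetical URL


@when('the user submits the correct username and password')
def step_submit_credentials(context):
    context.browser.find_element(By.ID, "username").send_keys("anna")
    context.browser.find_element(By.ID, "password").send_keys("correct-password")
    context.browser.find_element(By.ID, "login-button").click()  # hypothetical element IDs


@then('the user is redirected to their dashboard')
def step_verify_dashboard(context):
    assert "/dashboard" in context.browser.current_url
```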

Enter the Power of Generative AI

With Generative Artificial Intelligence (GenAI), you can use the power of Large Language Models (LLM) to automate a lot of this process. Currently, our SpiraPlan quality and test management system uses GenAI to automatically generate BDD scenarios, test cases, and risks from simple agile user stories:

Figure 1: BDD Gherkin Scenario Generated by AI.

However, this is just the beginning of what will soon be possible!

Using the latest LLMs such as GPT-4, we can pass the BDD scenario text as a prompt to the LLM, and it will generate a set of page objects and associated page object model functions/methods. This means that a simple human-readable scenario can automatically turn into a Selenium-style set of page object model function calls.

Figure 2: A human-readable scenario

Figure 3: An automated test script using page objects, automatically generated by AI.

Finally, when you feed in either a specially tagged image of the application or a reduced version of the page DOM (to avoid using too many GenAI tokens), the LLM is able to implement each of the page object model functions with the appropriate code to interact with the application and test its user interface. These could be either image-based clicks or WebDriver-style CSS selectors, depending on what you used to prompt the model.

This means we are close to having the holy grail of taking a BDD scenario and automatically converting it into an 80-90% ready-to-run automated testing script.
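To give a feel for the kind of output being described, the sketch below shows what a generated, Selenium-style page object for a login page might roughly look like; it is an illustration with hypothetical selectors, not the literal code produced by SpiraPlan or GPT-4.

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.remote.webdriver import WebDriver


class LoginPage:
    """Illustrative page object; the URL and selectors are hypothetical."""

    URL = "https://app.example.com/login"

    def __init__(self, driver: WebDriver) -> None:
        self.driver = driver

    def open(self) -> "LoginPage":
        self.driver.get(self.URL)
        return self

    def enter_username(self, username: str) -> "LoginPage":
        self.driver.find_element(By.CSS_SELECTOR, "#username").send_keys(username)
        return self

    def enter_password(self, password: str) -> "LoginPage":
        self.driver.find_element(By.CSS_SELECTOR, "#password").send_keys(password)
        return self

    def submit(self) -> None:
        self.driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
```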

What is the Role of Testers?

As we often say, GenAI is not here to replace humans, but to assist. If we consider this new Human-AI team, the human testers’ job is to create and review scenarios from the AI, review and optimize draft automation code from the AI, and look for weaknesses, edge conditions, and missing cases. Working together, the Human-AI team will be able to produce higher-quality applications faster than ever thought possible!

Author


Adam Sandman, Director of Technology at Inflectra

Adam Sandman has been a programmer since the age of 10 and has been working in the IT industry for the past 20 years in areas such as architecture, agile development, testing, and project management. Currently, Adam is a Director of Technology at Inflectra, where he is interested in technology, business, and enabling people to follow their passions. At Inflectra, Adam has been responsible for researching the tools, technologies, and processes in the software testing and quality assurance space. Adam has previously spoken at STARWEST, Agile + DevOps West, STPCon, Swiss DevOps Fusion, InflectraCon, TestingMind, EuroSTAR, Agile Testing Days, and STARCANADA.

Inflectra is an Exhibitor at EuroSTAR 2024, join us in Stockholm.

Filed Under: Software Testing Tagged With: 2024, EuroSTAR Conference, Expo
