EuroSTAR Conference

Europe's Best Software Testing Conference


Development

Moving Beyond Traditional Testing: The Need for Autonomous Testing in Software Development.

July 24, 2023 by Lauren Payne

Thanks to Hexaware for providing us with this blog post.

Software testing is struggling to keep up with the fast-paced and constantly accelerating rate of releases. According to a 2022 survey by GitLab, seven out of ten developers reported that their teams release code at least every few days, with many doing so daily. In today’s world, customers and end-users expect new features and functionality at an increasingly rapid pace. Companies that lag behind on new software releases risk being displaced by competitors who can keep up with the latest updates.

When testing fails to keep up with the release pace, organizations face well-known risks associated with releasing software that hasn’t been adequately tested and may contain bugs. For instance, in July 2022, former Volkswagen CEO Herbert Diess was forced out of the company because the automaker’s software unit was unable to produce software of sufficient quality, delaying the launch of its new Porsche, Audi, and Bentley models. Even more recently, in October 2022, Nintendo had to take Nintendo Switch Sports’ servers offline for nearly a week due to a bug that caused the game to crash.

Development teams have turned to test automation to escape this dilemma of either shipping potentially buggy software faster or slowing down to test it sufficiently. However, there are significant challenges in how test automation is traditionally implemented, and automation still requires highly skilled testers who are always in high demand, making them difficult to hire and retain.

Testing organizations face challenges beyond just automating the creation of tests. Maintaining tests is equally challenging, as automation scripts can become outdated and fail to exercise the required functions in the desired ways. Even with enough testers available, analyzing the impact of changes and configuring the test suite is too complicated to perform manually. And the problem extends beyond maintaining automated tests: human analysis alone cannot identify all the areas that require testing.

To overcome these challenges, organizations need to move beyond automation and embrace autonomous testing.

AI-Powered Autonomous Testing

Autonomous testing addresses these challenges: it enables faster decisions about which scenarios to test, based on the impact of a change, without relying heavily on human involvement. This dramatically increases testing depth and scope while simultaneously speeding up the process.

In contrast, traditional test automation only addresses one stage of the testing process, which is the automated script execution in the DevOps pipeline, as illustrated in Figure 1.

Figure 1: Traditional Testing Process

Automation Beyond the DevOps Pipeline

Autonomous testing has the potential to significantly reduce the need for human involvement throughout the testing process (as shown in Figure 2), unlike traditional test automation, which only impacts script execution in the DevOps pipeline (as shown in Figure 1). By utilizing natural language processing (NLP) and machine learning (ML) technologies, organizations can automate the generation of feature files and automation scripts. With the addition of a support vector machine (SVM) model, tests can be auto-configured, and cases can be identified for execution when there are changes to code or requirements. Autonomous testing can also perform failure analysis and take corrective action.
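
To make the idea concrete, here is a minimal, hypothetical sketch of ML-driven test selection in Python using scikit-learn: an SVM is trained on features of past code changes (which module was touched, how large the change was, how often related tests failed before) and then used to decide whether a regression suite should run for a new change. The feature set and training data are invented for illustration and are not a description of any vendor's actual implementation.

    # Hypothetical sketch of SVM-based test selection; features and data are illustrative.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.svm import SVC

    # Historical changes: change features plus whether the payments regression
    # suite later caught a failure caused by that change (1 = should have run it).
    history = [
        ({"module": "payments", "lines_changed": 120, "past_failures": 3}, 1),
        ({"module": "payments", "lines_changed": 5,   "past_failures": 0}, 0),
        ({"module": "search",   "lines_changed": 40,  "past_failures": 1}, 1),
        ({"module": "search",   "lines_changed": 2,   "past_failures": 0}, 0),
    ]

    vectorizer = DictVectorizer()
    X = vectorizer.fit_transform([features for features, _ in history])
    y = [label for _, label in history]

    model = SVC(kernel="rbf")
    model.fit(X, y)

    # For a new change, predict whether the suite is likely to be affected.
    new_change = vectorizer.transform(
        [{"module": "payments", "lines_changed": 80, "past_failures": 2}]
    )
    if model.predict(new_change)[0] == 1:
        print("Select the payments regression suite for this change")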

As the AI continues to learn from development behavior, test results, and other data, it becomes smarter and more accurate. For example, post-production logs are rarely used in testing, but AI can analyze them and map them back to existing test coverage to identify previously unidentified “white spaces” that are likely to contain bugs in the future and therefore require testing.

It is crucial to understand that autonomous testing is not a one-time fix, but a continual process of improvement, one case at a time. Organizations can start by identifying a specific bottleneck in the testing process that autonomous testing can address, such as the generation of UI/API scripts or identifying sensitive columns that require masking or synthetic data replacement. Ideally, the case should involve different functions for a particular phase to have a more significant impact. Once the solution has been successfully implemented and shown results, organizations can leverage that success to expand to a new case and continue to improve their testing process over time.
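
As a small illustration of one such starting case, the sketch below flags database columns whose names suggest sensitive data so they can be masked before being copied into a test environment. The name patterns and sample columns are hypothetical; a production solution would also sample the data itself and typically use an ML classifier rather than rules alone.

    import re

    # Hypothetical name patterns for columns that usually need masking or
    # synthetic replacement before use in test environments.
    SENSITIVE_PATTERNS = [r"email", r"phone", r"ssn", r"passport", r"card.?number", r"birth"]

    def flag_sensitive_columns(columns):
        """Return the column names that look like they hold sensitive data."""
        return [
            name for name in columns
            if any(re.search(p, name, re.IGNORECASE) for p in SENSITIVE_PATTERNS)
        ]

    print(flag_sensitive_columns(["customer_id", "Email_Address", "CardNumber", "city"]))
    # -> ['Email_Address', 'CardNumber']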

Think of it in terms of autonomous driving. Automakers first rolled out discrete capabilities such as automatic braking to avoid hitting a stationary object, lane assist, and adaptive cruise control. Implementing autonomous testing requires a similar approach.

Organizations are under pressure to conduct extensive testing within a shorter time frame and with fewer resources, all while delivering high-quality software on schedule. Autonomous testing, powered by AI and ML, can help organizations achieve this goal, but it requires a strategic, long-term approach to implementation. The ultimate outcome is that development teams can release new features more frequently, leading to a better customer experience and a stronger bottom line for the organization.

Learn More

Listen to a Thoughtcast that answers key questions about autonomous software testing and explains how to move seamlessly from automation to autonomous testing.

Reach us at [email protected] for more information.

Nagendra BS, Vice President – Digital Assurance, Practice & Solutions at Hexaware

Nagendra has around 21 years of experience in the software industry, is passionate about quality and testing, and has helped a number of customers in their testing transformation journeys. He is currently responsible for the go-to-market function of the Digital Assurance (Testing) business, which includes the creation of all service offerings, global presales support, alliances, and analyst and marketing functions for Digital Assurance services.

Hexaware is an EXPO exhibitor at EuroSTAR 2023

Filed Under: Development, Test Automation Tagged With: EuroSTAR Conference

Is Behaviour Driven Development (BDD) right for API testing?

June 2, 2023 by Lauren Payne

Thanks to Karate Labs for providing us with this blog post.

The primary goal of BDD can be summarized as follows: to reduce misunderstandings between those who define software requirements and those who implement the software.

The best source for the history and origins of BDD is the Cucumber website. Cucumber is just one tool that implements an approach to BDD, but it is not the only one. Yet many teams assume that because they are using Cucumber as a tool, they are successfully “doing BDD”. As the creators of Cucumber themselves have lamented for years, this is a huge mistake.

Cargo Cult

To be more specific, a very common misconception is that if you use the keywords “Given”, “When” and “Then” in an automated test – it means that the team will magically enjoy the supposed benefits of BDD. This may sound far-fetched, but if you are leading a team that claims to be doing BDD, I recommend that you walk the floor a bit, and ask the lesser-experienced engineers what their understanding is. You may get some interesting insights into how the team is thinking about BDD.

I remember a point very early in my career when I switched to a shiny new unit-testing framework that had things like “given()”, “when()” and “then()” in the syntax. I remember that virtuous feeling of satisfaction. Having just read about this great approach called BDD in some article or blog, I was now part of that exclusive club! I proudly declared to some colleagues that I was doing BDD. It was probably a few years later when I realized how misinformed I was.

So why am I sharing these somewhat embarrassing memories with you, dear reader? I’m really trying to help you avoid the mistakes I made. In an almost mystical way, the notion that “Given When Then” EQUALS BDD is entrenched in the collective consciousness of development teams around the world. Maybe it is because of the numerous bad takes on BDD that exist in tools, examples, tutorials and blogs. The pressure to do what is “cool” is real. Expecting a tool to “do BDD” is a mistake I have seen teams make, time and time again.

The term that badly done BDD evokes in me is “Cargo Cult Programming”. If you haven’t heard of it, this Wikipedia entry explains the rather hilarious origin. “Cargo cult” is a great way to refer to a phenomenon that is all too prevalent in our field: teams hear about some “best practice”, adopt a bunch of ceremonies without truly understanding the fundamentals, and then stand back expecting to promptly enjoy the rewards and good things that should ensue.

Yeah, that never ends well.

BDD is not for testing code that already exists

Did you know that in its true form, BDD means that you should write your scenarios before a single line of code is written?

You should pause reading for a moment and think deeply about the implications of the above statement.

BDD is mostly about having conversations with your domain experts, business users or product owners about what the software requirements are. BDD also encourages that those conversations result in “examples” which help flesh out the business rules involved.

Instead of just descriptive text, examples with real data map a lot better to how programmers think about writing code and tease out the edge-cases.

In other words, if your software delivery team is trying to automate tests for an existing system, BDD is not what you should be doing! Whether the system is partially done or not, BDD is just going to slow you down.

There is an argument to be made that BDD results in “executable specifications” and much more readable tests. This is why some teams choose BDD even though they are writing BDD scenarios “after the fact”.

But as we will see below, there are elegant ways to achieve “readable” test reports that serve as documentation of what the system does. You don’t need to formally adopt BDD, and your test-scripts and test-reports will still be readable, even by non-programmers.

BDD for API testing

The end-user or consumer of an API is typically another programmer. Another way to look at this is that APIs are how computers talk to each other, and expressing what APIs should do is best done using concepts that are closer to code than to natural language.

I have a strong point of view that BDD has very little (and perhaps negative) value for API tests. The big difference between a UI test (human facing) and an API test (machine facing) is that an API test has a clear “contract” that you are coding to. This contract is best expressed in technical terms (JSON / schema) rather than the deliberate abstraction needed when you do BDD the right way.
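
As a rough illustration of what “coding to the contract” looks like, the sketch below expresses a hypothetical create-order response contract as a JSON Schema and validates an actual response against it in Python (using the jsonschema library). The endpoint name, field names and rules are invented for the example.

    from jsonschema import ValidationError, validate

    # Hypothetical contract for a "create order" API response, expressed as JSON Schema.
    order_schema = {
        "type": "object",
        "required": ["orderId", "status", "total"],
        "properties": {
            "orderId": {"type": "integer"},
            "status": {"enum": ["CREATED", "CONFIRMED", "CANCELLED"]},
            "total": {"type": "number", "minimum": 0},
        },
    }

    actual_response = {"orderId": 42, "status": "CREATED", "total": 99.5}

    try:
        validate(instance=actual_response, schema=order_schema)
        print("response matches the contract")
    except ValidationError as err:
        print(f"contract violation: {err.message}")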

For more insights on how API testing is simpler than UI testing, read our free e-book: Navigating the Brave New World of API Testing.

But I want my tests and reports to be readable!

If you are not using BDD, how can you ensure that your API tests are readable? Ideally, your test-reports should:
• serve as documentation of how your APIs work,
• and include examples of how to call them.

Here is where a mature automation tool that has built-in HTML reporting can add value. Shown below is part of a Karate test that exercises the “Restful-Booker” API playground. This is a realistic simulation of an API that allows the consumer to book a hotel reservation.

The test script is on the left and the test-report on the right. Since there are comments added before each business-operation, the test and test-report provide the best of both worlds: you not only get full-control over the API calls, payload-data and JSON assertions – but you also get a very readable (and hence maintainable) test.

Comments appear clearly in the report, in line with the API calls that were made. The tests and test-reports can be easily read top to bottom and give you a good sense of what business functionality is invoked.

Observe how there is a good simulation of an end-user workflow, and you can see response data (e.g. the “bookingid”) “chained” into the next request multiple times.
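
To give a sense of this workflow in code, here is a rough Python approximation (not Karate syntax) of the same scenario against the public Restful-Booker playground: create a booking, chain the returned “bookingid” into a read, then authenticate and use both the token and the chained id to delete the booking. Endpoint paths, payload fields and the default credentials follow the Restful-Booker documentation, so treat them as assumptions rather than guarantees.

    import requests

    BASE = "https://restful-booker.herokuapp.com"  # public Restful-Booker playground

    # Create a booking; payload fields follow the Restful-Booker documentation.
    booking = {
        "firstname": "Sally", "lastname": "Brown", "totalprice": 111,
        "depositpaid": True,
        "bookingdates": {"checkin": "2023-06-13", "checkout": "2023-06-16"},
        "additionalneeds": "Breakfast",
    }
    created = requests.post(f"{BASE}/booking", json=booking).json()
    booking_id = created["bookingid"]  # response data "chained" into the next requests

    # Read the booking back using the chained id and assert on the payload.
    fetched = requests.get(f"{BASE}/booking/{booking_id}").json()
    assert fetched["firstname"] == "Sally"

    # Authenticate, then use the token plus the chained id to delete the booking.
    auth = requests.post(f"{BASE}/auth", json={"username": "admin", "password": "password123"})
    token = auth.json()["token"]
    deleted = requests.delete(f"{BASE}/booking/{booking_id}", headers={"Cookie": f"token={token}"})
    assert deleted.status_code == 201  # Restful-Booker returns 201 for a successful delete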

The test-data (or in BDD terminology, the “example”) for the business scenario can be clearly viewed at the start of the test. JSON happens to be an elegant, human-readable way of concisely expressing scenario data, and Karate takes advantage of this throughout the framework.

For those who are familiar with Cucumber’s “Scenario Outline”, note that Karate also offers you the same human-friendly way of defining Examples in a tabular format. Some teams really like this way of doing data-driven test automation aligned with the BDD concept of examples. All of this can be done without the need to worry about whether a test step has to be prefixed with “Given”, “When” or “Then”.
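
Outside of Gherkin, the same example-table idea can be approximated with ordinary parametrized tests. The sketch below uses pytest's parametrize decorator with a made-up pricing rule; each row of the table is one concrete example of the business behaviour, much like a row in a Scenario Outline.

    import pytest

    def booking_total(nights, rate, deposit_paid):
        """Hypothetical pricing rule, used only to illustrate data-driven testing."""
        total = nights * rate
        return total - 50 if deposit_paid else total

    # The table below plays the same role as a BDD "Examples" block:
    # each row is one concrete example of the rule.
    @pytest.mark.parametrize(
        "nights, rate, deposit_paid, expected",
        [
            (2, 100, False, 200),
            (2, 100, True, 150),
            (5, 80, True, 350),
        ],
    )
    def test_booking_total(nights, rate, deposit_paid, expected):
        assert booking_total(nights, rate, deposit_paid) == expected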

Look Ma, No Step Definitions!

For teams that have experience with BDD tools, what surprises them the most is that Karate does not require any step-definitions to be implemented behind the scenes. Step-definition “glue code” is known to be one of the “hidden costs” of BDD tools, and Karate eliminates this layer completely. What you see in the test is all that you need to write (or read). The built-in keywords for API testing and JSON assertions take care of most of your API testing needs.
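
To make that hidden cost concrete, here is what typical step-definition glue looks like in behave, a Python BDD tool: every Given/When/Then line in a feature file must be matched by a function like the ones below, which is the layer described above that Karate does away with. The endpoint URL is purely illustrative.

    # Typical step-definition "glue code" for a Python BDD tool (behave).
    import requests
    from behave import given, when, then

    @given('the booking service is available')
    def step_service_available(context):
        context.base_url = "https://example.test/booking"  # hypothetical URL

    @when('I request booking "{booking_id}"')
    def step_request_booking(context, booking_id):
        context.response = requests.get(f"{context.base_url}/{booking_id}")

    @then('the response status should be {status:d}')
    def step_check_status(context, status):
        assert context.response.status_code == status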

For more insights on how low-code approaches such as Karate compare to BDD, read our free e-book: Navigating the Brave New World of API Testing.

Parting Thoughts

While this article focused on whether BDD is appropriate for API testing, it may also help you evaluate whether BDD initiatives in your organization are structured correctly, whether the right people are in charge, and whether they are delivering the value that you expect.

Get to know more at karatelabs.io

Author

Peter Thomas, Co-founder & CTO, Karate Labs

Peter is recognized as one of the world’s top experts in test automation. He brings 25 years of industry experience, the last 18 of them in open source. He has worked at Yahoo and Intuit. As part of the API platform leadership at Intuit, Peter created “Karate”, the open-source solution unifying API, UI & Performance testing. Peter was one of only 15 people chosen by GitHub for a grant in India in 2021. He co-founded Karate Labs Inc in Nov ’21 to accelerate the adoption of Karate, with the mission of making test automation fun and collaborative. Karate Labs is a Y Combinator-backed company.

Karate Labs is a Platinum Partner at EuroSTAR 2023. Join us at Antwerp Zoo, June 13-16, for a 4-day celebration of testing. Learn from 68 expert speakers and connect with your peers at Europe’s Best Testing Event.

Filed Under: Development Tagged With: 2023, EuroSTAR Conference

Is test automation a first-class citizen of your development pipeline?

January 18, 2023 by Lauren Payne

Thanks to Karate Labs for providing us with this blog post.

A key point that many teams fail to ponder: “Is test-automation a first-class citizen of your development pipeline?”

This is the second part of a series of articles covering the finer aspects of test automation in detail. Read the first part here: The Test Automation Capability Map.

What happens when Developer Experience is not prioritized? Does this situation (see picture below) look familiar where you use a separate tool and workflow for authoring test-automation artifacts?


Let us zoom in on three of the Test-Automation capabilities which fall under the category of Developer Experience.

IDE Support

The development team spends the most time within their IDE of choice (e.g., IntelliJ, Visual Studio Code etc.). If test-automation requires a completely different tool, user-interface, and workflow, this has several implications for your team:

  • Switching between tools takes developers out of their “flow state.”
  • Developers are less likely to run tests before checking-in or merging code. This leads to an inefficient feedback loop, where failures are detected only when tests are run later.
  • Developers are less likely to contribute and maintain tests. This results in dysfunctional teams where developers “chuck things over the fence” to the “QA team”. It is very common to find developers and QA teams operating in silos where bugs that “escape” to production result in finger-pointing and blame-games.

Self-Hosted or On-Prem

An aspect often overlooked is whether any sensitive data is leaving your safe-zone and coming to rest beyond your organization’s security perimeter or firewall. If you use a SaaS tool that is not self-hosted, this typically is the case.

Even though integration and end-to-end testing environments should ideally use fake or synthetic data, there will be cases where “production-like” data will be needed to simulate real business-scenarios. Many teams extract a “cut” of production data for a staging environment with some data-masking or sanitizing applied. Equally or more important than test-data is test-configuration, such as database-secrets, authentication-tokens, and passwords.

Given that a lot of teams operate in the public cloud even for pre-production environments (e.g., AWS, Azure and GCP), it is even more important that the greatest care is taken to protect not just your test-data, but also the configuration, locations and URLs of test-servers and other infrastructure. So the critical question you should ask is: are my tests being stored in somebody else’s cloud?

Version Control, History, and Diffs

Self-hosted or not, the situation worsens if your test-automation artifacts are managed in a separate tool or repository. The implications of having tests in a separate tool and workflow are often understated, but significant:

  • You will lose the ability to eyeball the changes in your tests, side-by-side with the corresponding changes to code.
  • There is less pressure (or no “forcing function”) to add or edit tests when code-changes are made. This discipline can make the difference between a high-performing team and one that lacks the confidence to ship as often as needed.
  • Tests are the best documentation of “what the system does.” When you cannot see the history of changes to tests side-by-side with the code commit history, you lose a valuable chunk of documentation, and you are left with an incomplete picture of how the software evolved.
  • Your Continuous Integration job becomes more complex at the point where tests must be run. Instead of getting code and tests in one atomic “checkout” or “git clone” operation, you are forced to perform an extra step to download the tests from somewhere. Keep in mind that you also need to ensure that the version of the tests corresponds to what is being tested.

In addition, if you are using a tool with a “no-code” UI to author tests, it is quite likely that you lose the basic ability to see diffs and history even just for your tests. Some tool vendors have ended up having to build version-control into their user-experience, re-inventing what comes naturally to teams that use Git to collaborate.

Some things are best expressed as code. At the end of the day, everyone agrees that code-diffs are a superior Developer Experience.

To summarize, for your test-automation to be a first-class citizen of your development pipeline you need to use a tool that integrates into the team’s IDE of choice and stays close to the code being tested. The result of that is shown below.

An interesting observation: this is exactly the developer experience you expect for unit tests!


So which kind of team do you want to be in?
Happy Testing!

Author

Peter Thomas, Co-founder & CTO, Karate Labs

Peter is recognized as one of the world’s top experts in test automation. He brings 25 years of industry experience, the last 18 of them in open source. He has worked at Yahoo and Intuit. As part of the API platform leadership at Intuit, Peter created “Karate”, the open-source solution unifying API, UI & Performance testing. Peter was one of only 15 people chosen by GitHub for a grant in India in 2021. He co-founded Karate Labs Inc in Nov ’21 to accelerate the adoption of Karate, with the mission of making test automation fun and collaborative. Karate Labs is a Y Combinator-backed company.

Karate Labs is a Platinum Partner at EuroSTAR 2023. Join us at Antwerp Zoo, June 13-16, for a 4-day celebration of testing. Learn from 68 expert speakers and connect with your peers at Europe’s Best Testing Event. Book your tickets by Jan 31 and save 15%, or book your team and save up to 40%.

Filed Under: Development, Test Automation Tagged With: 2023, Test Automation

How to become a top testing expert

June 24, 2020 by Fiona Nic Dhonnacha

At some point this question will come to all of us in the industry: how do I become a top tester? My answer is this: to become a top testing expert you need to be curious, learn quickly, and possess some technical skills.

Considering the evolution of IT towards low-code solutions and the increasing pressure to improve (right-first-time) time-to-market, for every minute a developer spends coding, a higher volume of test coverage is needed, and quickly.

Since this isn’t achievable by increasing headcount, the only way is to use every automation and artificial intelligence capability available to accelerate quality delivery. The engineer needs to ensure all pieces are fully functional and ready to be part of any end-to-end business process… one that will actually work E2E!

Many engineers solving development challenges at the unit or system level can simply stay there, at the unit or system level. The testing engineer, however, must make sure that the unit does everything it should, and doesn’t do anything it shouldn’t, at the necessary speed and security levels, as part of a bigger application and business-process architecture.

Therefore, the learning skills and curiosity to go beyond borders will become key for success.

What if I could pick only one skill, soft or technical, that is key to becoming a top tester?

Then I would say excellent communication skills are essential (including negotiation and conflict management). Why was this not in the top three skills I mentioned earlier? Simply because in this world of high demand for IT change, the need for functional end-to-end validation will keep increasing, and robots will not be able to do it on their own (at least not in a fit-for-purpose way), because it is genuinely complex.

Humans asking questions, changing their minds as business needs evolve, working through whatever lifecycle methodology is in place and expecting delivery of exactly what they asked for: these are things that only humans understand, and they will be extremely difficult, perhaps impossible, for a machine to learn.

But doesn’t this sound contradictory? Yes, it does! And it is also proof that testing professionals and all other QA-related roles will see a massive increase in the coming years: while low-code keeps growing, other “Ultra High Even Less Code” approaches are also appearing, and they will require ever larger amounts of testing effort.

So why should technical and curious professionals be ready for this challenge? Because the demand for quality assurance will increase, and without those professionals, and without technology to validate technology, it will be impossible to achieve everything.

Looking for more ways to learn how to become a top tester? Check out the EuroSTAR 2020 Online programme.

Author

Eduardo Amaral, Quality Management Senior Manager at Noesis

Filed Under: Development, EuroSTAR Conference
