
EuroSTAR Conference

Europe's Best Software Testing Conference


No-Code & Low-Code: The Inclusive and Effective Way to Test Automation

July 31, 2023 by Lauren Payne

Thanks to Maveryx for providing us with this blog post.

The field of software testing has seen significant developments in recent years, with the emergence of new testing methodologies, tools, and techniques. One of the most relevant trends in automated testing is codeless (no-code and low-code) testing, which enables users with little or no programming skill to create and execute automated tests without writing a single line of code.

Automated testing has traditionally been highly technical, requiring specialized skills and expertise in programming languages and testing frameworks. Unfortunately, there are not enough testers with these skills to meet demand. For this reason, codeless testing has gained ground, allowing non-technical (business) users to participate in the testing process.

Low-code testing typically provides users with a visual interface that enables them to generate automated test scripts by dragging and dropping components (test code snippets).

Fig. 1: Low-Code IDE

No-code testing tools provide users with predefined keywords that enable them to create tests using natural language, like writing a document.

Fig. 2: No-Code by Keywords in Excel
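To make the keyword idea concrete, here is a minimal sketch of how a keyword-driven runner might work. The keywords, table rows, and handler functions below are invented for illustration; they are not Maveryx's actual syntax.

```python
# Minimal keyword-driven test runner. Each row mimics a spreadsheet line:
# a keyword followed by its arguments. All names here are illustrative.

def open_app(name):
    print(f"opening {name}")

def click(widget):
    print(f"clicking {widget}")

def verify_text(widget, expected):
    # A real tool would read the widget's text from the running GUI;
    # this sketch stubs the lookup so the example is self-contained.
    actual = expected
    assert actual == expected, f"{widget}: expected {expected!r}, got {actual!r}"

# The keyword vocabulary the "spreadsheet" author can use.
KEYWORDS = {"OpenApp": open_app, "Click": click, "VerifyText": verify_text}

test_rows = [
    ("OpenApp", "Calculator"),
    ("Click", "button_7"),
    ("VerifyText", "display", "7"),
]

def run(rows):
    for keyword, *args in rows:
        KEYWORDS[keyword](*args)  # dispatch each row to its keyword handler

run(test_rows)
```

A real no-code tool reads the rows from an Excel sheet and ships a much richer vocabulary, but the dispatch loop is the essence of the approach.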

So, what are the benefits of a codeless approach to automated testing?

Productivity: no-code and low-code testing enable users to create tests quickly and easily, in most cases without writing a single line of code. This increases productivity and reduces the time and effort required to develop the tests.

Reduced costs: no-code and low-code testing eliminate the need for specialized testing resources, such as expert testers or programmers, which can significantly reduce the costs associated with software testing. Also, they significantly reduce the time to create tests; saving time means saving money.

Faster time-to-market: involving more people in software testing and running more (automated) tests enables organizations to test their software quickly, reducing time-to-market and increasing the speed of delivery.

Extensive functional coverage: codeless testing allows organizations to write more tests faster, thus improving their test coverage and enabling more frequent and extensive testing, which can help identify defects and issues earlier in the development process.

Easier maintenance: codeless testing, and no-code testing in particular, makes it easier to maintain tests over time, with users able to update and modify tests using natural language, without the need for coding expertise or specific technologies.

Easier collaboration: no-code testing tools in particular enable teams to collaborate more effectively, with non-technical team members and stakeholders able to contribute to the testing process without specialized skills or knowledge of a specific technology, opening software testing to a broader range of users.

Combining a codeless approach with intelligent runtime object recognition (no GUI maps, object/image repositories, code instrumentation, or recorded actions) can further boost test automation.


For example, the Maveryx Test Automation Framework offers both codeless test creation and runtime inspection. Users can create no-code automated tests with keywords, so anyone who can use Excel can participate in test automation.

Fig. 3: No-Code test creation

Also, this framework provides a low-code block-programming IDE, supporting test creation through the drag-and-drop of visual blocks.

Author


Alfonso Nocella, Co-founder and Senior Software Engineer at Maveryx

Alfonso led the design and development of some core components of the Maveryx automated testing tool. He collaborated on several astrophysics IT research projects with the University of Napoli Federico II and the Italian national astrophysics research institute (INAF). Over the years, Alfonso has worked on many industrial and research projects across different business fields and partnerships. He has also spoken at several conferences and universities.

Today, Alfonso supports critical QA projects for some Maveryx customers in the defense and public health fields. He is also a test automation trainer and takes care of communication and technical marketing at Maveryx.

Maveryx is an EXPO Exhibitor at EuroSTAR 2023. Join us in Antwerp!

Filed Under: Test Automation Tagged With: 2023, EuroSTAR Conference

Testing and QA Key to Cloud Migration Success

July 27, 2023 by Lauren Payne

Thanks to iOCO for providing us with this blog post.

In the global rush to go serverless and in the cloud, many organisations neglect quality assurance and testing – an oversight that can seriously impair performance and increase organisational risk.

There are numerous reasons for this, but a key one is that cloud migrations are complex projects usually managed by infrastructure teams. Those tasked with driving them aren’t always quality focused, and their views of what QA is might differ significantly from what QA should be.

Should the organisation neglect thorough testing as part of its application cloud migration plan, the smallest undiscovered mistake could cause major failures down the line.

Lift and shift migration, the most popular approach and the second-largest cloud services sector by revenue, should not be seen as a simple copy-and-paste operation. Without a concerted effort, accurate planning and coordinated migration testing, a copy-and-paste approach could have devastating consequences for scalability, databases, and application and website performance.

Cloud Migration Testing and QA Priorities and Pillars

Thorough cloud migration testing uses quantifiable metrics to pinpoint and address potential performance issues, as well as exposing opportunities to improve performance and user experience when applications are in the cloud. However, teams should be cautious of scope creep at this stage – adding new features during migration could have unforeseen impacts.

Proper testing and QA rests on four key pillars – security, performance, functional and integration testing.

Security testing must ensure that only authorised users can access the cloud network, understanding who has access to the data and where, when, and why users access it. It must address how data is stored at rest, what the compliance requirements are, and how sensitive data is used, stored, and transported. Suitable safeguards must also be put in place against Distributed Denial of Service (DDoS) attacks.

To realise the performance and scalability benefits of the cloud, testing must validate how systems perform under increased load. Unlike stress testing, performance testing verifies the end-to-end performance of the migrated system and whether response times fulfil service level agreements under various load levels.

Functional testing validates whether the application is ready to be migrated to the cloud and whether it will perform according to the service level agreement. In complex applications, it is necessary to validate the end-to-end function of the whole application and its external services.

Even in basic applications where microservices architecture is not required, we see some sort of integration with third-party tools and services, making integration testing important. Therefore, cloud migration testing should identify and verify all the dependencies to ensure end-to-end functionality, and should include tests to verify that the new environment works with third-party services, and that the application configuration performs in a new environment.
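One way to approach that dependency verification is a pre-flight check that confirms each third-party endpoint is reachable from the new environment before the full integration suite runs. The sketch below is illustrative, not iOCO's framework; the service names and URLs are placeholders.

```python
# Illustrative pre-flight dependency check for a migrated environment.
# Service names and URLs are placeholders, not real endpoints.
from urllib.request import urlopen

DEPENDENCIES = {
    "payments": "https://api.example-payments.invalid/health",
    "email": "https://api.example-email.invalid/health",
}

def check_dependencies(deps, timeout=5):
    """Return {service: True/False} for each dependency's health endpoint."""
    results = {}
    for name, url in deps.items():
        try:
            with urlopen(url, timeout=timeout) as resp:
                results[name] = resp.status == 200
        except OSError:
            # DNS failures, refused connections, and timeouts all count as down.
            results[name] = False
    return results

down = [name for name, ok in check_dependencies(DEPENDENCIES).items() if not ok]
if down:
    print(f"Blocked: unreachable dependencies: {down}")
```

Running such a gate first means an environment misconfiguration fails fast, instead of surfacing as confusing functional-test failures later.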

With well-architected testing carried out, the organisation can rest assured that cloud migration risks have been mitigated and opportunities harnessed across security, operational excellence, reliability, performance efficiency, cost optimisation and sustainability.

A Testing and QA Framework for AWS Cloud Migration

As an AWS certified partner provider, iOCO has tailored our Well Tested Cloud Framework (WTCF) for cloud migration to align with the AWS Well Architected Framework, to ensure customer migrations to the AWS cloud are not only successful, but actually exceed expectations. iOCO resources will lead and manage execution from initial assessment, risk identification and recommendations; through a comprehensive set of checklists and guidelines across each of the four QA pillars; to full migration testing.

In tandem with the AWS Well Architected Framework, iOCO’s WTCF is designed to fast-track AWS migration testing using clear and structured guides and processes and customised options to suit the organisation’s budget and needs.

Author

Reinier Van Dommelen, Principal Technical Consultant – Software Applications and Systems at iOCO

As a seasoned Technical Consultant with a wealth of experience, Renier Schuld has a proven track record of delivering successful IT projects for a diverse range of clients. He excels at bridging the gap between business and technical requirements by identifying and implementing systems solutions, guiding cross-functional teams through the project life-cycle, and ensuring successful product launches.

Renier’s expertise in Testing is extensive and includes developing functional specification documents, designing test strategies, creating and executing test scripts to ensure accuracy and quality, developing project and organizational software test plans, providing user support, and building automated test frameworks. He has a passion for continuously improving processes and ensuring that quality is always top of mind throughout the project life-cycle.

iOCO is an EXPO Exhibitor at EuroSTAR 2023. Join us in Antwerp!

Filed Under: Quality Assurance Tagged With: 2023, EuroSTAR Conference

Moving Beyond Traditional Testing: The Need for Autonomous Testing in Software Development

July 24, 2023 by Lauren Payne

Thanks to Hexaware for providing us with this blog post.

Software testing is struggling to keep up with the fast-paced and constantly accelerating rate of releases. According to a survey by Gitlab in 2022, seven out of ten developers reported that their teams release code at least every few days, with many doing so on a daily basis. In today’s world, customers and end-users expect new features and functionality at an increasingly rapid pace. Companies that lag behind on new software releases risk being displaced by competitors who can keep up with the latest updates.

When testing fails to keep up with the release pace, organizations face well-known risks associated with releasing software that hasn’t been adequately tested and may contain bugs. For instance, in July 2022, former Volkswagen CEO Herbert Diess was forced out of the company because the automaker’s software unit was unable to produce software of sufficient quality, delaying the launch of its new Porsche, Audi, and Bentley models. Even more recently, in October 2022, Nintendo had to take Nintendo Switch Sports’ servers offline for nearly a week due to a bug that caused the game to crash.

Development teams have attempted to address this dilemma (ship potentially buggy software faster, or slow down to test sufficiently) with test automation. However, there are significant challenges in how test automation is traditionally implemented, and automation still requires highly skilled testers, who are always in high demand and therefore difficult to hire and retain.

Testing organizations face challenges beyond just automating the creation of tests. Maintaining tests is equally challenging, as automation scripts can become outdated and fail to test the required functions in the desired ways. Even with enough testers available, analyzing the impact of changes and configuring the test suite is too complicated to perform manually. The problem extends further still: human analysis cannot identify all the areas that require testing.

To overcome these challenges, organizations need to move beyond automation and embrace autonomous testing.

AI-Powered Autonomous Testing

Autonomous testing is the solution to the challenges faced by testing organizations as it enables faster decision-making about which scenarios to test based on the impact of a change without relying too much on human involvement. This dramatically increases testing depth and scope while simultaneously speeding up the process.

In contrast, traditional test automation only addresses one stage of the testing process, which is the automated script execution in the DevOps pipeline, as illustrated in Figure 1.

Fig. 1: Traditional Testing Process

Fig. 2: Automation Beyond the DevOps Pipeline

Autonomous testing has the potential to significantly reduce the need for human involvement throughout the testing process (as shown in Figure 2), unlike traditional test automation, which only impacts script execution in the DevOps pipeline (as shown in Figure 1). By utilizing natural language processing (NLP) and machine learning (ML) technologies, organizations can automate the generation of feature files and autonomous scripts. With the addition of deep learning through a support vector machine (SVM), tests can be auto-configured, and cases can be identified for execution when there are changes to code or requirements. Autonomous testing can also perform failure analysis and take corrective action.
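One building block of such autonomy is change-impact test selection: deciding which tests to run based on what changed. The sketch below shows the idea in its simplest form, with a hand-written coverage map; an AI-driven tool would mine this mapping from version-control history and past test results. All names are hypothetical.

```python
# Illustrative change-impact test selection. The coverage map is hand-written
# here; a real autonomous-testing tool would learn it from code history,
# coverage data, and test results.

# Which source modules each test exercises.
TEST_COVERAGE = {
    "test_login": {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_profile": {"auth.py", "profile.py"},
}

def select_tests(changed_files):
    """Return the tests whose covered modules intersect the change set."""
    changed = set(changed_files)
    return sorted(test for test, files in TEST_COVERAGE.items()
                  if files & changed)

print(select_tests(["auth.py"]))     # both tests that touch auth.py
print(select_tests(["payment.py"]))  # only the checkout test
print(select_tests(["readme.md"]))   # nothing to run
```

Even this toy version shows the payoff: a change to one module triggers only the tests that could plausibly break, rather than the whole suite.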

As the AI continues to learn from development behavior, test results, and other data, it becomes smarter and more accurate. For example, post-production logs are rarely used, but AI can analyze them to identify previously unidentified “white spaces” that are likely to harbour bugs in the future and therefore require testing.

It is crucial to understand that autonomous testing is not a one-time fix, but a continual process of improvement, one case at a time. Organizations can start by identifying a specific bottleneck in the testing process that autonomous testing can address, such as the generation of UI/API scripts or identifying sensitive columns that require masking or synthetic data replacement. Ideally, the case should involve different functions for a particular phase to have a more significant impact. Once the solution has been successfully implemented and shown results, organizations can leverage that success to expand to a new case and continue to improve their testing process over time.

Think of it in terms of autonomous driving. Automakers first rolled out discrete capabilities such as automatic braking to avoid hitting a stationary object, lane assist, and adaptive cruise control. Implementing autonomous testing requires a similar approach.

Organizations are under pressure to conduct extensive testing within a shorter time frame and with fewer resources, all while delivering high-quality software on schedule. Autonomous testing, powered by AI and ML, can help organizations achieve this goal, but it requires a strategic, long-term approach to implementation. The ultimate outcome is that development teams can release new features more frequently, leading to a better customer experience and a stronger bottom line for the organization.

Learn More

Listen to a Thoughtcast that answers key questions about autonomous software testing and explains how to move seamlessly beyond automation.

Reach us at [email protected] for more information.

Author

Nagendra BS, Vice President – Digital Assurance, Practice & Solutions at Hexaware

Nagendra has around 21 years of experience in the software industry, is passionate about quality and testing, and has helped a number of customers on their testing transformation journeys. He is currently responsible for the go-to-market function of the Digital Assurance (Testing) business, which includes the creation of all service offerings, global presales support, alliances, analyst relations, and marketing for Digital Assurance services.

Hexaware is an EXPO exhibitor at EuroSTAR 2023

Filed Under: Development, Test Automation Tagged With: EuroSTAR Conference

We’ve got the Stage – You’ve got the Story

July 17, 2023 by Lauren Payne

The 2024 EuroSTAR Software Testing Conference is going to Stockholm, Sweden.

If you’ve ever wanted to speak at EuroSTAR and share your story on Europe’s largest stage, the Call for Speakers is open until 17th September.

Now is the time to start thinking about what you’d like to share. What experiences will help others in the room? Perhaps it’s something that didn’t work at first, but then you found a solution. It might be technical, or it might be core skills.

EuroSTAR 2024 Programme Chair, Michael Bolton, is inviting you to explore the theme, ‘What Are We Doing Here?’ – it’s a wide-open question, with lots of possible interpretations and related questions.

Talk Type

We’ll share more on these later but for now, there will be three main types of talks:

  • Keynote – 60mins (45mins talk + 15mins Q&A)
  • Tutorials/Workshops – Full-day 7 hours OR Half-day 3.5 hours incl breaks
  • Track Talks – 60mins (40mins talk + 20mins valuable discussion)

Who?

Calling all testing enthusiasts and software quality advocates – whether you’re a veteran, or new to testing – to share your expertise, successes (and failures) with your peers; and spark new learnings, lively discussions, and lots of inspiration.

Think about what engages you in your work, what engrosses you in testing, challenges you’ve faced, or new ideas you’ve sparked. Get in front of a global audience, raise your profile, and get involved with a friendly community of testers.

Here’s everything you need to know about taking the first step on to the EuroSTAR stage.

We invite speakers of all levels to submit their talk proposals and take the biggest stage in testing!

What Do I Need To Submit?

A clear title, a compelling abstract and 3 possible learnings that attendees will take from your talk – this is the main part of your submission. We’ll ask you to add in your contact details and tick some category boxes but your title, talk outline & key learnings are the key focus.

Topics for EuroSTAR 2024

Michael is calling for stories about testers’ experiences in testing work. At EuroSTAR 2024, we embrace diversity and value a wide range of perspectives. We’re most eager to hear stories about how you…

  • learned about products
  • recognised, investigated, and reported bugs
  • analysed and investigated risk
  • invented, developed, or applied tools
  • developed and applied a new useful skill
  • communicated with and reported to your clients
  • established, explained, defended, or elevated the testing role
  • created or fostered testing or dev groups
  • recruited and trained people
  • made crucial mistakes and learned from them

Mark Your Calendar

Here are some essential dates to keep in mind:

  • Call for Speakers Deadline: 17 September 2023
  • Speaker Selection Notification: Late November 2023
  • EuroSTAR Conference: 11-14 June 2024 in Sweden

If you’re feeling inspired, check out the full Call for Speakers details. EuroSTAR attracts speakers from all over the world, and we can receive over 450 submissions. Each year, members of the EuroSTAR community give their time to assess each submission, and their ratings help our Programme Committee select the most engaging and relevant talks. If you would like help writing a proposal, see this handy submissions guide, and you can reach out to us at any time.

EuroSTAR 2024 promises to be an extraordinary experience for both speakers and attendees. So, submit your talk proposal before 17 September 2023 and let’s come together in the beautiful city of Stockholm next June. Together we’ll make EuroSTAR 2024 an unforgettable celebration of software testing!

Filed Under: EuroSTAR Conference, Software Testing, Uncategorized Tagged With: EuroSTAR Conference

How to calculate whether QA tests should be Automated or Manual

July 13, 2023 by Lauren Payne

Thanks to Global App Testing for providing us with this blog post.

In a recent webinar with the easy CI/CD tool Buddy Works, we looked at how businesses can calculate the true cost of testing and use it to determine whether tests should be automated or manual. You can check out our thinking on the subject below. 👇

Why do businesses believe they will automate so many tests?

In TestRail’s first annual survey in 2018, businesses set out their plans for test automation. The 6,000 respondents had automated 42% of their tests and planned to automate a further 21% over the following year.

But they didn’t. In the 2019 survey, the same 42% of tests were automated, and this time businesses said they would automate 61% by 2020. By the most recent survey in 2021, just 38% of tests were automated. By now, the pattern is consistent: businesses systematically overestimate how much they will automate.

But why?

Why businesses like test automation

Teams tend to like the idea of automating tests. That’s because:

  • You can run automated tests whenever you like
  • Automated tests return results instantly
  • Automation is perceived as a one-time investment, which would make it cheaper over the long term. (In our experience, this is only sometimes true.)

Together, these factors lead to even better second-order effects:

  • You can remove bottlenecks slowing down your releases if your tests are instant
  • You can improve your DORA metrics as you measure your progress. 

But the reality of testing difficulty belies this. We ran a survey during a separate webinar about the top reasons businesses felt they couldn’t automate more tests. And here’s the TLDR: 

  • The top answer, cited by 28% of respondents, was flaky tests due to a changing product; the second (26%) was not enough time to automate.
  • Both answers come down to time. “Flaky tests due to a changing product” really refers to the time investment of maintaining your tests; “not enough time to automate” refers to the time investment of setting them up.
  • Businesses are underequipped to calculate the time costs of building and maintaining tests, or the other time demands that will be made of them in the cut and thrust of product development.

What’s the equation to calculate whether a manual or automated test is better?


ST + (ET x N) = the true time cost of testing.

You can check this for automated and manual tests to identify whether it’s cheaper for your business to execute a test manually or to automate it. 

ET is the execution time. We know that automation is much faster here, and it’s the main metric businesses focus on when they want to automate all their tests. At Global App Testing, we offer a 2-6 hour test turnaround with real-time results. Tests land in tester inboxes straight away, so in many cases the first results come through much faster.

ST is the setup time, including any maintenance time investment. It takes more time to automate a test script than it does to quickly test something or to send it to a crowdtester like Global App Testing. Setup time is also the second-biggest barrier to automating tests, so it’s worth running this algorithm twice: once to add up which approach is more expensive, and once, with adapted algebra, to calculate the maximum time your business can invest in one go.

N is the number of times a test will be used before it flakes. Rapid execution of an automated test is great, but the saving is only immense on a test used thousands of times; if the test will run just twice before it flakes, the return is far less impressive.

A final note: make sure you know what you’re optimizing for. Is time or money more important? The labour costs of the individuals setting up automated tests (developers) and of the individuals executing tests (global QA professionals) may differ, so try running this algorithm with both units plugged in.
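The equation can be turned into a quick back-of-the-envelope script. The hours below are made up purely to illustrate the break-even mechanic; substitute your own figures.

```python
# The true-cost equation from above: ST + (ET * N).
# All times are in hours; the numbers are hypothetical.

def true_cost(setup_time, execution_time, runs):
    """Total time cost of a test over its lifetime: ST + (ET * N)."""
    return setup_time + execution_time * runs

# Hypothetical figures: automation costs 8h to script but each run is near-free;
# a manual pass needs no setup but takes 0.5h every run.
def automated_cost(n):
    return true_cost(8.0, 0.01, n)

def manual_cost(n):
    return true_cost(0.0, 0.5, n)

# Find the break-even point: the smallest N where automation is cheaper.
n = 1
while automated_cost(n) >= manual_cost(n):
    n += 1
print(f"Automation pays off after {n} runs")  # → Automation pays off after 17 runs
```

Run the same comparison with labour rates instead of hours to optimize for money rather than time.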

Author

Adam Stead

Adam is the editor-at-large at Global App Testing. He has written extensively about technology business and strategy for a variety of businesses since 2015.

Global App Testing is an EXPO exhibitor at EuroSTAR 2023. Join us in Antwerp!

Filed Under: Test Automation Tagged With: 2023, EuroSTAR Conference

5 Steps to help build your load testing strategy

July 10, 2023 by Lauren Payne

Thanks to Gatling for providing us with this blog post.

You might have already started load testing, which is awesome! But if you haven’t, and you’re wondering when, where, and how to start, the answers are all here for you. To help you get set up, we’re going to give you a few tips and tricks to build your load testing strategy and make sure that you’re set for success. Ready to dive in? Read on!

Know Your User

The most important part of load testing is knowing your user; more specifically, you need the answers to a few key questions.

How are your users using your site/application? 

Most enterprises have an idea of how they’d like their users to use their site or products, but for many, how users actually use it and the journeys they take are a bit of a mystery. By using tracking software such as Mixpanel or Amplitude, you can get a very detailed picture of the journeys your users take on your site, and craft simulations to match and replicate them.

Understanding Your Traffic

Crafting great user journeys is the first step in building a scenario. Understanding your traffic, though, will help you decide what kind of tests you need to create. By using tools like Google Analytics, Google Search Console, SEMrush, or just monitoring your server usage, you should be able to get an idea of what kind of traffic you’re receiving and how you’re receiving it. Are you getting sudden surges of traffic? Run a stress test! Are you getting long durations of constant traffic? Run a soak test. For every traffic scenario you can run a battery of different tests to ensure that your website is resilient enough to withstand the traffic it’s receiving. To learn more about the different kinds of load tests you can run, and to get an idea about what might work best for you, check out our post here.
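As a rough illustration of what a load test does under the hood, the sketch below spins up concurrent "virtual users" against a stubbed target and collects latencies. Real tools such as Gatling drive HTTP endpoints with configurable ramp-up profiles; the target function here is a placeholder.

```python
# Toy load-test shape: concurrent virtual users hit a target and
# latencies are collected. The target is a stub, not a real endpoint.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def target():
    """Stand-in for one user request; replace with a real HTTP call."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service time
    return time.perf_counter() - start

def run_load(users, requests_per_user):
    """Run users * requests_per_user requests with `users` concurrent workers."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(target)
                   for _ in range(users * requests_per_user)]
        return [f.result() for f in futures]

latencies = run_load(users=10, requests_per_user=5)
print(f"p50={statistics.median(latencies)*1000:.1f}ms  "
      f"max={max(latencies)*1000:.1f}ms")
```

Varying `users` over time is what distinguishes the test types above: a sudden jump models a stress test, while a long steady plateau models a soak test.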

Continuous Integration

You’ve built your tests and run them, you’re doing great! However, most websites and applications are constantly changing and upgrading. How can you be sure that the changes you’re making aren’t going to affect the performance of your project? By introducing load testing into your CI/CD pipeline. We wrote a detailed post on the benefits of using Gatling Enterprise Cloud to integrate load testing into your CI/CD process. Gatling’s Enterprise version allows you to integrate with almost any CI/CD software, whether you’re using one of our dedicated integrations or using our CI Script to create your own.

Plan For The Unexpected

One of the great things about load testing is its ability to prepare you for any eventuality. You might not have thousands of users hitting your application today, but by creating tests and running them, you can be sure that if it does happen, you’re prepared. So when creating your testing strategy and examining your traffic, it’s important to consider not just what is happening right now but also what could happen. What’s the worst/best case scenario? Are you prepared? Make sure by testing, and you’ll know that whatever happens, you’ll be ready.

Following these tips will help ensure that your websites and applications are able to handle the traffic and workloads they will encounter in the real world, and will help prevent performance issues that could impact the user experience.

Links

  • “or just monitoring your server”: https://hubs.ly/Q01DYDSv0
  • “sudden surges of traffic”: https://hubs.ly/Q01DYH_L0
  • “check out our post here”: https://hubs.ly/Q01DYK9B0
  • “Gatling Enterprise Cloud to integrate load testing into your CI/CD process”: https://hubs.ly/Q01DYL720

Author

Pete Dutka, Customer Success Manager, Gatling.

Gatling Enterprise provides advanced features to help you get ahead of downtime and technical issues related to your website traffic. Our advanced reports allow you to dive into the details and discover your application’s limits and performance bottlenecks. We offer both on-premise and SaaS solutions to meet your business needs, whatever they may be.

Gatling is an EXPO exhibitor at EuroSTAR 2023. Join us in Antwerp!

Filed Under: Software Testing Tagged With: 2023, EuroSTAR Conference

Testing in Agile: A Few Key Points to Consider

June 28, 2023 by Lauren Payne

Thanks to CTG for providing us with this blog post.

What is Agile Testing?

This may come as a surprise, but there really is no such thing as “Agile Testing.” Let’s break this down into two terms: Agile and Testing.

Agile is a development approach that has been widely adopted since the early 2000s, whereas Testing is a process that determines the quality of a software product. The basic principle in software testing is that testing is always context dependent. In other words, you must adapt your process, activities, and objectives in order to align them with your business context.

How Does Testing in Agile Differ From Traditional Approaches?

The main difference between an Agile approach and a more traditional approach with respect to testing lies in the ever-changing, fast-paced, and continuous character of testing.

With Agile, the objective is to deliver value as fast as possible to the stakeholders. Since an Agile approach embraces change, the concept of value itself can change between sprints or iterations.

When traditional approaches are applied, there is always a period between the analysis and test execution phases where developers are performing their magic. During this period, testers review, evaluate and analyze the documentation at hand, trying to prevent any defects from entering the code as well as preparing their test case design.

In an iterative or incremental approach, such as Agile, such a period does not exist. Every member of the Agile team is considered multi-disciplinary and must therefore be able to perform any tasks within the team. It simply does not matter whether it’s analysis, development, or testing. Given the lack of time to prepare test cases upfront, testing becomes less scripted and more explorative.

Finally, due to this iterative cycle, a lot of testing work must be repeated. In a traditional approach, the code is stable and frozen when testing starts; as a result, a test that passed four weeks ago should still pass.

In an Agile approach, requirements, user stories, and product backlog items (PBIs) may undergo significant changes between iterations, based on customer feedback. To ensure that new functionality does not break the existing solution, rigorous regression testing is required within every iteration, lowering the bandwidth for testing new functionality.

What Skills do Testers Need & What Roles do they Play in Agile Projects?

Whether it’s an Agile approach or a traditional approach, the skills that testers need are largely identical. We can organize these testing skills into four categories:

  • Business or domain knowledge: Understanding the context of the work or project
  • IT knowledge: General understanding of all the other roles and activities
  • Testing knowledge: How to derive test cases and how to execute them
  • Soft skills: Analytical skills, communication skills, empathy, and a critical mindset

In fact, testers should feel more at home in an Agile team, as they are more in control. Testers can pull work from the backlog when they are ready as opposed to a traditional approach, where work is pushed to them whether they are ready or not.

What is the Best Way To Assess Quality Risks?

When it comes to Agile, collaboration and communication are key. Every requirement carries a risk to the product. By writing and reviewing the requirements together (i.e. collaborative user story writing) with developers, analysts, and testers, all stakeholders are made aware of possible risks.

It is important to note that not all risks carry the same weight and mitigating them can occur through different means. Lower-level risks associated with a specific product backlog item (PBI) can be addressed in its acceptance criteria. Product risks on a higher level than a single user story can be mitigated in quality gates such as the definition of ready (DOR) and the definition of done (DOD).

A similar principle applies to estimation. Agile team members do not estimate the time required to perform a given task; due to the risk of anchoring, it is better to assess work at the PBI level using story points. These relative, abstract values express the total effort required by the entire team to complete the item. A story point estimate is not the sum of analysis, development, and testing under the most favorable circumstances, but rather the team's evaluation of how much effort the item would require for any given team member to complete.

3 Ways to Enhance your Understanding in Agile Projects

Like anything in life, improving your understanding in Agile projects requires deliberate actions. Here are 3 ways you can enhance your knowledge:

  • Join an Agile team

Practice makes perfect. Joining an Agile team is a great way to gain valuable exposure to, and experience with, Agile principles in order to improve your understanding and proficiency.

  • Follow Agile training courses

Regardless of your field or profession, learning should never stop. Participating in Agile training courses allows you to learn more about Agile, which you can then apply in the real world.

  • Read great Agile resources

Finally, it is never a bad idea to pick up some of the great literature about Agile. Though less interactive than the first two suggestions, reading about Agile makes it possible to learn from some of the leading Agile specialists.

Interested in expanding your Agile skills, experience, or know-how? CTG Academy offers both in-person and online training dedicated to helping those working in Agile. Discover our Agile trainings and take your projects to the next level.

Want to know more about Agile? Discover our Agile services or contact us!

Author

Michaël Pilaeten, Learning and Development Manager

Breaking the system, helping to rebuild it, and providing advice and guidance on how to avoid problems. That's me in a nutshell. With 17 years of experience in test consultancy in a variety of environments, I have seen the best (and worst) in software development. In my current role as Learning & Development Manager, I'm responsible for guiding our consultants, partners, and customers on their personal and professional path towards excellence. I'm chair of the ISTQB Agile workgroup and an international keynote speaker (United Kingdom, France, Spain, Peru, Russia, Latvia, Denmark, Armenia, Romania, Belgium, Holland, Luxembourg).

CTG is an EXPO exhibitor at EuroSTAR 2023. Join us in Antwerp!

Filed Under: Agile Tagged With: 2023, EuroSTAR Conference

Reflections on EuroSTAR 2023 by Gek Yeo

June 27, 2023 by Joseph Cesín

Gek Yeo is an SAP Quality Expert who joined us as the official EuroSTAR Reporter for our 2023 EuroSTAR Conference in Antwerp. In this blog, Gek shares her second-day experience at Europe's biggest software testing conference.

Luckily, this time I managed to find some time to attend the two main Thursday #EuroSTARconf 2023 keynotes. The first was Maaret Pyhäjärvi's keynote, titled "Whose Test is It Anyway?". One point that stood out to me was the legal and ethical aspect: even if it is legal, is it ethical to use computer-assisted software development? I listened to my peer Maaret and could relate to her viewpoint on GitHub Copilot.

As we move on to AI-assisted technologies, it becomes a question of how future generations will use these technologies for good or bad. The definition of what is good for one may seem bad to others. Twenty-three years ago, I had already started working with AI and used it in my projects. I always remember what my university professor said: it takes 20 years for the industry to adopt what we wrote in our thesis. And now the industry has embraced AI in testing, and the debate on the use of AI technologies has started to spread into our daily lives.

Returning to the keynote, I agree with Maaret when she said, "We are accountable." We have a responsibility to ensure that the rules for domains, both for laymen (whom I would refer to as citizen developers) and for experts, are well-defined. The testing process should belong to everyone, not just the tester or the developer. Another takeaway was on exploratory testing. I love exploratory testing; it is not an excuse to do manual testing. In fact, Maaret highlighted that exploratory testing helps with automation and brings people together instead of siloing testing. I have witnessed this in action during my test release cycles with my colleagues.

I organize bug-bashing events in my teams for exploratory testing so we can have fun while testing. It's about the team and the people, not just software releases. We take pride in finding bugs in our releases before our customers do.

The second keynote I attended was titled "The Cyber Security of Smart 'Adult' Toys or Lack of It" by Jo Dalton. It was originally meant to be delivered by Ken Munro and Jo Dalton, but unfortunately Ken couldn't attend. Kudos to Jo for delivering this SFW (safe for work) keynote all by herself! Without security experts like Jo, we would probably giggle this topic off and treat the lack of security testing in connected adult toys as the elephant in the room.

Jo Dalton EuroSTAR 2023 Keynote Speaker

Talks like this at a global conference make us more conscious of the business need for security testing and of why we desperately need regulations for the IoT (Internet of Things). Seriously, we should not let the backdoor through the backdoor happen, as Jo highlighted in her keynote. I hope that, as a QA community, more folks will speak up and contribute to security testing for IoT.


Conferences like EuroSTAR provide valuable opportunities for continuous learning, networking, and staying up to date with industry trends. It is important for us as testers to invest in ourselves and find time to attend high-quality conferences like EuroSTAR 2023 to broaden our perspectives, share experiences, and build new friendships and connections within the IT community, such as with Peter Thomas, Kapil Bakshi from Karate Labs, and Humza Ahmad. It is also beneficial when employers support their employees in attending such events, as it demonstrates a commitment to their professional development and encourages skill-set improvement. I hope to see some of you at EuroSTAR 2024.

The EuroSTAR Conference has been running since 1993 and is the largest testing event in Europe, welcoming 1000+ software testers and QA professionals every year. Don’t forget to connect, like Gek Yeo, with the testing community at our next events.

Filed Under: EuroSTAR Conference Tagged With: EuroSTAR Conference
