
The Hidden Crisis in Software Quality: Why Unit Tests Aren’t Enough (And What We’re Learning From 100+ Companies)

April 30, 2025 by Aishling Warde

Traditional quality assurance is failing. Despite companies investing millions in testing infrastructure and achieving impressive unit test coverage – often exceeding 90% – critical production issues persist. Why? We’ve been solving the wrong problem.

The Evolution of a Crisis

Ten years ago, unit testing seemed like the silver bullet. Companies built extensive test suites, hired specialized QA teams, and celebrated high coverage metrics. With tools like GitHub Copilot, achieving near-100% unit test coverage is easier than ever. Yet paradoxically, as test coverage increased, so did production incidents.

The Real-World Testing Gap

Here’s what we discovered at SaolaAI after analyzing over 100 companies’ testing practices:

  1. Unit tests create a false sense of security. Teams mock dependencies and test isolated functions, but real-world failures occur at system boundaries.
  2. Microservice architectures exponentially increase complexity. A single user action might traverse 20+ services, creating millions of potential failure combinations.
  3. The “No QA” movement, while promoting developer ownership, has inadvertently reduced comprehensive testing.

The E2E Testing Paradox

End-to-end testing is essential for verifying that complex systems function seamlessly, yet companies struggle with major obstacles. Setting up E2E environments can take months, while maintaining test data often turns into a full-time job. Integrating these tests into CI/CD pipelines requires specialized expertise, adding another layer of complexity.

On the technical side, flakiness remains a persistent issue, with failure rates reaching 30-40%. Browser updates frequently break test suites, while asynchronous operations and timing inconsistencies introduce further instability. These challenges make E2E testing notoriously difficult to manage.
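
One standard mitigation for the timing and asynchronicity problems described above is to replace fixed sleeps with explicit waits. Below is a minimal Python/Selenium sketch of that idea; the URL and selector are hypothetical placeholders, not taken from the article.

```python
# A minimal sketch of one common flakiness fix: replacing fixed sleeps with
# explicit waits, so the test tolerates asynchronous rendering.
# The page URL and the "#results" selector are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/search?q=quality")  # hypothetical URL
    # Instead of time.sleep(5), poll for up to 10 seconds for the element.
    results = WebDriverWait(driver, timeout=10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "#results"))
    )
    assert "quality" in results.text.lower()
finally:
    driver.quit()
```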

Beyond the technical barriers, cultural resistance slows adoption. Developers often see E2E testing as solely QA’s responsibility, while product teams prioritize feature development over test reliability. When test suites fail, they are frequently ignored or abandoned rather than fixed, leading to gaps in test coverage and overall software quality.

The AI-Driven Future

Fortunately, modern solutions are emerging that leverage AI to revolutionize testing: automated test generation based on real user behavior, self-healing tests that adapt to UI changes, and intelligent test selection that reduces runtime. The future of AI-assisted testing looks bright.
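
To make the "self-healing" idea concrete, here is a toy Python/Selenium sketch of locator fallback. Real AI-driven tools learn replacement locators from the DOM and usage data; this only illustrates the mechanism, and all selectors are hypothetical.

```python
# A toy sketch of the "self-healing test" idea: try a primary locator, then
# fall back to alternates and report the drift. Illustrative only.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

# Ordered fallbacks for one logical element (selectors are hypothetical).
CHECKOUT_BUTTON = [
    (By.ID, "checkout"),
    (By.CSS_SELECTOR, "[data-testid='checkout']"),
    (By.XPATH, "//button[contains(., 'Checkout')]"),
]

def find_with_healing(driver, candidates):
    """Return the first matching locator, logging when we had to 'heal'."""
    for i, (by, value) in enumerate(candidates):
        try:
            element = driver.find_element(by, value)
            if i > 0:  # a fallback matched: the primary locator has drifted
                print(f"healed: promoted fallback locator {by}={value}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"no candidate matched: {candidates}")
```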

The Way Forward

Quality isn’t just about test coverage – it’s about understanding how systems behave in production:

  1. Shift from code coverage to interaction coverage
  2. Integrate observability with testing
  3. Use ML to predict failure scenarios (a simplified sketch follows this list)
  4. Automate maintenance of test suites
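
Items 3 and 4 hint at change-aware, learned test selection. Below is a deliberately simplified, non-ML sketch of the underlying mechanic: map changed files to the tests most likely impacted, and run only those. The mapping and file names are hypothetical; a production system would learn them from coverage data and failure history rather than hard-coding them.

```python
# A toy, non-ML sketch of intelligent test selection: run only the tests
# mapped to files changed in a commit. Mapping and paths are hypothetical.
CHANGE_TO_TESTS = {
    "billing/invoice.py": ["tests/test_invoice.py", "tests/test_checkout_e2e.py"],
    "auth/login.py": ["tests/test_login.py"],
}

def select_tests(changed_files):
    """Return the de-duplicated list of tests impacted by the changed files."""
    selected = set()
    for path in changed_files:
        selected.update(CHANGE_TO_TESTS.get(path, []))
    # Unknown files fall back to the full suite to stay safe.
    if any(path not in CHANGE_TO_TESTS for path in changed_files):
        selected.add("tests/")  # run everything
    return sorted(selected)

print(select_tests(["billing/invoice.py"]))
# ['tests/test_checkout_e2e.py', 'tests/test_invoice.py']
```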

For too long, we’ve treated quality as a coding problem. It’s time to recognize it as a data problem. By combining AI, machine learning, and traditional testing approaches, we can finally bridge the gap between unit test success and production reliability.

The next evolution in software quality isn’t about writing more tests; it’s about making testing intelligent enough to match the complexity of modern applications.

This is the challenge that inspired SaolaAI: making quality as sophisticated as the systems we’re building. The question isn’t whether AI will transform testing, but how quickly companies will adapt to this new paradigm.

Author

Arkady Fukzon

CEO and Co-Founder, SaolaAI

Saola are exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh, 3-6 June 2025.

Filed Under: Quality Assurance Tagged With: EuroSTAR Conference

Real-World Data vs. Fake Data: Choosing the right strategy for effective testing

April 1, 2025 by Aishling Warde

Testing environments play a critical role in software development, ensuring applications function correctly before release. To achieve this, having test data that simulates real-world scenarios is essential. However, the choice between “fake data” and “real-world data” sparks an interesting debate, as each approach brings its own benefits and challenges.

In this article, we will explore the key differences between these two types of data, analyze their benefits and challenges, and ultimately highlight how a strategic combination of both can optimize the testing process, ensuring accuracy, security, and efficiency in development environments.

What is real-world data?

Anonymized real-world data is derived from production environments, ensuring it does not contain personally identifiable information while complying with regulations such as GDPR, CCPA, LPDP, and others.

These datasets offer a high degree of realism, as they preserve referential integrity, maintain the natural complexity of real-world scenarios, and accurately reflect user behavior, system interactions, and business logic. Additionally, real-world data naturally exhibits aging, reflecting how information changes over time and capturing historical trends and patterns that influence system behavior.

By leveraging real-world data, organizations can test applications under conditions that closely resemble actual usage, improving the reliability and effectiveness of their testing processes.

What benefits do real-world data offer?

Using real-world data provides significant advantages for your organization:

  • Captures the complexity of real-world behavior, including intricate patterns, sudden fluctuations, and inherent biases while ensuring data privacy.
  • Maintains appropriate statistical distribution and frequency.
  • Preserves relationships and interdependencies between elements, allowing comprehensive “end-to-end” testing.
  • Reduces the gap between development, testing, and production environments.
  • Facilitates integration testing with other systems under production-like conditions.
  • Provides immediate availability and reusability.

Challenges of using real-world data

Working with anonymized real-world data presents challenges. Identifying the right data for each test case, anonymizing it effectively, and delivering it on-demand to the testing environment are key challenges, especially in complex and costly environments with large volumes of data. Managing real-world data requires robust tools to ensure that no sensitive information is exposed and that masking processes remain effective, as well as addressing other critical challenges in test data management.
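
To illustrate one masking technique this challenge implies, here is a minimal sketch of deterministic pseudonymization with a keyed hash, which keeps the same token for the same value across tables and so preserves referential integrity. This is an illustrative approach under stated assumptions, not icaria’s actual method; key management and format preservation are out of scope.

```python
# A minimal sketch of deterministic pseudonymization: a keyed hash maps the
# same sensitive value to the same token everywhere, so joins still work,
# while the real value is never exposed. The key below is a placeholder.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-outside-source-control"  # hypothetical key

def pseudonymize(value: str) -> str:
    """Map a sensitive value to a stable, non-reversible token."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"cust_{digest[:12]}"

# The same input yields the same token across tables, preserving integrity.
assert pseudonymize("alice@example.com") == pseudonymize("alice@example.com")
print(pseudonymize("alice@example.com"))
```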

Synthetic data

The term “fake data” or “synthetic data” is widely used across industries but lacks a universally accepted definition. Different sectors and vendors interpret this concept in various ways depending on their testing needs and available technologies. While some consider synthetic data as manually created datasets, others define it as AI-generated data, or even simply masked real data. As these variations can create confusion, understanding the most common approaches provides greater clarity about what synthetic data really means.

Some of the most common definitions include:

  • Traditionally created data: Data generated manually or with conventional tooling such as spreadsheets, scripts, or business APIs (see the sketch after this list). While quick to produce, it often lacks complexity, is prone to errors, and becomes costly over time.
  • AI-Generated data: Data created by AI models trained on real-world patterns. Although it can mimic realistic behaviors, its reliability remains limited for mission-critical applications. For the time being, there is no evidence of successfully using this approach for testing business support systems.
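
As a minimal illustration of the first, “traditionally created” approach, the sketch below scripts a small dataset with the Faker library (our tool choice for illustration; the article names no specific tool). Note what it does not give you: aging, cross-table dependencies, or realistic distributions, which are exactly the limitations listed in the next section.

```python
# A minimal example of scripted, "traditionally created" test data using
# the Faker library (pip install faker). Field names are hypothetical.
from faker import Faker

fake = Faker()
Faker.seed(42)  # reproducible datasets for repeatable test runs

customers = [
    {
        "id": i,
        "name": fake.name(),
        "email": fake.email(),
        "signup_date": fake.date_between(start_date="-2y", end_date="today"),
    }
    for i in range(1000)
]
print(customers[0])
```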

Synthetic data limitations

These approaches to synthetic data generation often fall short when it comes to accurately simulating production environments, facing critical limitations such as:

  • Lack of aging: No representation of time-based changes.
  • Limited complexity: Misses intricate, real-world dependencies.
  • Absence of rare scenarios: Struggles to simulate edge cases.
  • No technical debt: Fails to reflect legacy patterns and old system quirks.
  • Unrealistic data: Lacks inconsistencies found in production.
  • Reduced data richness: Missing the diversity of real-world interactions.
  • Insufficient volume: Smaller datasets than real production environments.
  • Inaccurate data distribution: Does not replicate real-world patterns.

These gaps make these synthetic data approaches unreliable for testing environments that aim to mimic production conditions accurately.

How does icaria Technology generate high-quality synthetic data?

To overcome these limitations, icaria Technology has developed a model-based synthetic data approach that ensures realistic, secure, and scalable datasets for high-quality testing environments. This approach allows us to create high-quality test data that mirrors real-world conditions without compromising security, compliance, or performance.

Advantages of icaria Technology’s synthetic data

Our approach to synthetic data offers significant advantages for software testing environments. By replicating the structure, patterns, and complexity of real-world data while ensuring the exclusion of sensitive information, this method strikes a balance between realism, scalability, and security. Here are some key benefits of using our synthetic data:

  • Realistic test scenarios with no privacy risks
    Maintains relationships, distributions, and behaviors from real-world datasets without exposing PII. By generating this data from pre-existing models, we ensure that test environments mirror production scenarios.
  • Consistency across testing stages
    Ensures smooth transitions between development, staging, and production phases by preserving referential integrity and data relationships.
  • Scalability and flexibility
    Generates large volumes of test data tailored to specific needs, supporting extensive performance and scalability tests.
  • Customizable for testing requirements
    Allows the generation of datasets designed for edge cases, rare scenarios, or new application features.
  • Cost efficiency
    Reduces manual effort and minimizes rework costs through automated processes, ultimately saving resources during the testing lifecycle.

When to use real-world data and when synthetic data?

After reviewing what real-world data is and our definition of synthetic data, the question arises: which one should we use in testing?

Real-world data is the best option for testing due to its richness and complexity, accurately reflecting system behavior and user interactions. Since this data already exists, it is often more efficient to use it rather than generating new datasets, which can introduce additional challenges and complexities.

However, this does not mean synthetic data has no place in a robust testing strategy. In certain situations, our synthetic data approach can be particularly useful, such as:

  • When testing requires data that is not yet available in existing application environments. For instance, during new developments involving changes to the application’s data model, there will be no existing data for the new model, necessitating synthetic data generation.
  • When specific datasets are rare but essential for testing. Some scenarios occur infrequently, meaning only one or two real-world examples exist. In these cases, synthetic data can generate additional instances, ensuring all testers and developers have access to the necessary data.

The perfect combination for reliable testing with icaria TDM

In the high-complexity environments managed by icaria Technology, particularly in icaria TDM, the reality is significantly more complex. These applications function in mission-critical domains where the margin for error is nonexistent.

By combining real-world data with synthetic data, organizations can create a balanced and efficient approach to test data management that ensures accuracy, compliance, and scalability.

Choosing the right type of data for each scenario, or combining both, helps companies improve test quality, comply with regulations, and optimize resources. With icaria TDM, achieving this balance has never been easier. This approach not only enhances testing efficiency but also strengthens confidence in systems, ensuring applications meet the highest quality standards before deployment.

Author

Enrique Almohalla

Enrique Almohalla, leading icaria Technology as CEO, brings a wealth of experience in TDM methodologies, cultivated through over twenty years of directing software development and testing projects. His significant involvement in Test Data Management, marked by continuous innovation and application, underscores his deep understanding of the field. Additionally, his position as an associate professor at IE Business School in Operations and Technology melds his hands-on experience with academic insights, offering a comprehensive perspective on business management.



icaria are exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh, 3-6 June 2025.

Filed Under: Big Data, EuroSTAR Conference Tagged With: EuroSTAR Conference

Are You Really Agile? Self-Assess If Your QA Team Is Truly Agile

March 31, 2025 by Aishling Warde

Adopting Agile practices doesn’t automatically mean your QA team is working in an Agile way. Many teams follow the structure of Agile like sprints and retrospectives, but still struggle to fully embrace Agile principles. True agility in QA is about flexibility, continuous improvement, and deep collaboration to ensure quality at every stage of development.

Many teams find themselves wondering: Are we really Agile, or just following a checklist of Agile rituals? That’s the exact question explored in our ebook, “Are You Really Agile? A Practical Guide for QA Teams.”

In this article, we’ll explore key insights from the ebook to help you assess your QA team’s Agile maturity and share practical strategies to strengthen your processes.

Signs That You’re Not Truly Agile

These are the most common indicators that suggest your team is not Agile:

1. Inflexible Processes
Agile is meant to be iterative, yet some QA teams still rely on rigid, step-by-step workflows that don’t leave room for adaptation. If your testing approach isn’t flexible enough to accommodate changing requirements, it may be limiting your agility.

2. Communication Silos
Agile emphasizes ongoing collaboration, but if QA and development teams rarely interact outside of sprint reviews or retrospectives, valuable discussions may be missed. Continuous alignment is key to delivering high-quality software efficiently.

3. Lack of Continuous Feedback
In Agile, testing and feedback should happen throughout the sprint, not just toward the end. If your team is catching defects late in the cycle rather than identifying issues early, your process might be more reactive than proactive.

4. Testing as a Separate Phase
Testing should be seamlessly integrated into development, not treated as a final step before release. If QA still operates as a standalone phase rather than being part of the sprint’s workflow, it’s a sign that your team hasn’t fully embraced Agile testing.

Take This Self-Assessment to Know If You Are Truly Agile!

To help you evaluate how Agile your QA processes really are, we’ve designed a straightforward self-assessment questionnaire. It allows you to analyze your workflows, collaboration, and testing practices to see where your team stands on the Agile spectrum.

How it works: Rate your team on each category below on a scale from 1 to 5, with 1 meaning “Never” and 5 meaning “Always,” then add up your eight scores.

For each category, use the two assessment questions to guide a single score from 1 to 5:

  • Flexibility in Processes: Do your QA and development processes allow frequent changes? Are test cases and schedules adaptable as features change?
  • Early Involvement of QA (Shift-Left Testing): Is QA involved from the beginning, refining user stories? Are test cases planned in parallel with feature development?
  • Collaboration Between QA and Developers: Do QA and developers work closely together during sprints? Is there ongoing communication between QA and developers?
  • Continuous Feedback Loops: Are feedback loops frequent during the sprint? Is feedback (and bugs) from the testing team assessed and addressed quickly?
  • Automation in Testing: Do you generate automation for the stories being developed during the sprint (vs. automating in later sprints)? Are automated tests run for each significant code change?
  • Test Case Reusability and Maintenance: Is your test library modular and easy to maintain? Are automated tests regularly updated?
  • Defect Management and Prioritization: Are defects prioritized and resolved within the sprint? Are there clear criteria for classifying defects?
  • Continuous Improvement and Retrospectives: Are QA processes included in sprint retrospectives? Are metrics used to drive continuous improvement?

After completing the self-assessment, review your total score to understand where your team stands in Agile maturity:

  • 35 – 40: Your QA team is highly Agile, effectively embracing flexibility, collaboration, and continuous improvement.
  • 25 – 34: You’re heading in the right direction, but there’s room to improve. Identify lower-scoring areas and apply the strategies in this guide to strengthen your Agile approach.
  • 15 – 24: While some Agile practices are in place, there are noticeable gaps in your QA processes. It’s time to rethink your approach to collaboration, feedback loops, and integrating testing throughout the sprint.
  • Below 15: Your team may be Agile in name only. Consider revisiting Agile fundamentals and restructuring your QA processes to align with core Agile principles.

3 Key Strategies to Maximize QA Efficiency in Agile

  1. Shift-Left Testing

A core Agile principle is embedding QA early in the development process. However, many teams still follow outdated habits, treating testing as an afterthought rather than an integral part of the sprint. This delay often results in defects being discovered late, leading to costly rework, missed deadlines, and misaligned expectations.

Best Practice: Establish a continuous feedback loop between product owners, developers, and QA from the initial requirement discussions. Ensuring clear acceptance criteria and well-defined testable requirements helps prevent last-minute surprises.

Practical Tip 1: Use tools like mind maps or flow diagrams to visualize user journeys, dependencies, and potential risks during requirement gathering. This helps teams proactively identify edge cases and improve test coverage.

Practical Tip 2: Implement Behavior-Driven Development (BDD) to foster collaboration between QA, developers, and product owners. Writing test scenarios in plain language ensures shared understanding and helps translate requirements directly into test cases.
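
To show what Practical Tip 2 can look like in practice, here is a hedged sketch using pytest-bdd, one of several BDD options (the article does not prescribe a tool). The feature text, step names, and login logic are hypothetical.

```python
# A sketch of BDD with pytest-bdd (pip install pytest-bdd). The Gherkin
# below would live in features/login.feature; everything here is illustrative.
#
#   Feature: Login
#     Scenario: Registered user signs in
#       Given a registered user "alice"
#       When she logs in with the correct password
#       Then she sees her dashboard
#
from pytest_bdd import given, parsers, scenarios, then, when

scenarios("features/login.feature")

@given(parsers.parse('a registered user "{name}"'), target_fixture="user")
def user(name):
    return {"name": name, "password": "s3cret", "logged_in": False}

@when("she logs in with the correct password")
def log_in(user):
    user["logged_in"] = True  # stand-in for a real login call

@then("she sees her dashboard")
def sees_dashboard(user):
    assert user["logged_in"]
```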

  2. Defect Management & Resolution

In Agile, defects should be handled as they arise, not postponed to future sprints. Without a structured defect management process, teams risk becoming overwhelmed, delaying essential fixes, or failing to address critical issues in time. Effective defect management is all about prioritization. Not every defect requires immediate attention, so it’s important to classify and address issues based on their impact.

Best Practice: Hold regular triage meetings to review, prioritize, and assign defects. This process ensures that the most critical issues are addressed first while maintaining transparency around defect resolution. A well-defined triage system keeps teams focused on resolving blockers before handling lower-priority fixes.

Practical Tip: Use a defect-tracking tool to maintain full visibility into defect status and ownership. Set up automated notifications for high-priority issues to ensure they are addressed immediately and don’t get lost in the backlog.

  3. Post-Sprint Retrospectives

A sprint’s conclusion is a great chance to reflect, improve, and refine processes. Retrospectives play a key role in Agile, offering teams the opportunity to analyze what worked well, what didn’t, and how to enhance efficiency in future sprints. QA is often overlooked in retrospectives, with discussions focusing primarily on development progress and sprint goals. But reviewing testing metrics such as defect resolution times, test coverage, and testing bottlenecks can provide valuable insights into areas that need improvement.

Best Practice: Make testing a core part of sprint retrospectives. Review key QA metrics, including defect trends, risk assessment, and testing efficiency. Encourage all team members to share their insights on refining QA processes for the next sprint.

Practical Tip: Track and analyze sprint metrics to drive improvements. If defect resolution times were longer than expected, identify the root cause and adjust workflows. If test coverage was insufficient, explore ways to improve automation or optimize manual testing to focus on critical areas.

Take Your Agile QA to the Next Level

In this article, we outlined some key Agile strategies, like Shift-Left Testing, Defect Management and Resolution, and Post-Sprint Retrospectives, which are essential for QA teams looking to optimize efficiency and improve collaboration. But these are just a few of the possibilities.

In the complete ebook, “Are You Really Agile? A Practical Guide for QA Teams,” you’ll find even more actionable insights and strategies designed to maximize QA impact in an Agile environment. From test execution to reporting, this guide is filled with practical recommendations to help you align your QA processes with true Agile principles.

Author


PractiTest

PractiTest is an end-to-end SaaS test management platform that centralizes all your QA work, processes, teams, and tools into one platform to bridge silos, unify communication, and enable one source of truth across your organization.

With PractiTest you can make informed, data-driven decisions based on end-to-end visibility provided by customizable reports, real-time dashboards, and dynamic filter views. Improve team productivity: reuse testing elements to eliminate repetitive tasks, plan work based on AI-generated insights, and enable your team to focus on what really matters.

PractiTest helps you align your testing operation with business goals, and deliver better products faster.



PractiTest are exhibitors in this year’s EuroSTAR Conference EXPO. Join us in Edinburgh, 3-6 June 2025.

Filed Under: Agile, EuroSTAR Expo Tagged With: EuroSTAR Conference, Expo

Software Testing In Regulated Industries

February 27, 2024 by Lauren Payne

In today’s landscape of digital adoption and the rapid growth of software technologies, many domains leveraging technology are within regulated industries. However, with the introduction of more technology comes the need for more software—and more software testing. This article will touch on the unique attributes, challenges, and considerations of software testing within these regulated domains.

Defining “regulated” industries

While many industries have specific guidelines and domain nuances, we will refer to “regulated” industries as those that are governed by overarching regulatory compliance standards or laws. 

In most cases, these governance standards shape the depth, agility, and overall Software Development Lifecycle (SDLC) of a team: how the standards are developed into requirements and then validated.

Below is a sampling of some of these domains:

  • Healthcare
  • Manufacturing
  • Banking/Finance
  • Energy
  • Telecommunications
  • Transportation
  • Agriculture
  • Life sciences 

Unique requirements

Common characteristics that teams will likely encounter when analyzing the software quality/testing requirements in these environments include:

  • Implementation of data privacy restriction laws (like HIPAA)
  • Detailed audit history/logging of detailed system actions
  • Disaster recovery and overall data retention (like HITRUST)
  • High standards for traceability and auditing “readiness”
  • Government compliance and/or oversight (like the Food and Drug Administration / FDA)

These common regulatory requirements are critical for planning and executing testing, and for establishing quality records and artifacts essential to supporting auditing and traceability.

Testing considerations & planning

Many testers and their teams are now being proactive in using paradigms such as shift-left to get early engagement during the SDLC. As part of early requirements planning through development and testing, specialized considerations should be taken within these regulated industries.

Requirements & traceability

  • The use of a centralized test repository for both manual and automation test results is critical
  • Tests and requirements should be tightly coupled and documented
  • Product owners and stakeholders should be engaged in user acceptance testing and demos to ensure compliance
  • Test management platforms should be fully integrated with a requirement tracking platform, such as Jira

Image: The TestRail Jira integration is compatible with compliance regulations and flexible enough to integrate with any workflow, achieving a balance between functionality and integration.

Once teams have solidified a process for defining and managing requirements and traceability, it becomes imperative to ensure that quality test records are accessible, yet restricted to those who require them.

This controlled access is crucial, particularly in auditing situations, where the accuracy and reliability of test records may play a critical role. This approach for access controls is commonly referred to as the “least privilege” principle.

Image: With TestRail Enterprise role-based access controls, you can delegate access and administration privileges on a project-by-project basis

Test record access controls

  • Limit test management record access to the minimum required for team members
  • Ensure only current active team members have test record access
  • Implement a culture of peer reviews and approval to promote quality and accurate tests

Image: TestRail Enterprise teams can implement a test case approval process that ensures test cases meet organizational standards.

As test cases and test runs are created manually or using test automation integrations like the TestRail CLI, it is important to maintain persistent audit logging of these activities. Within regulated industries, audit requirements and “sampling” may require investigation of the history and completeness of a given test that was created and executed against a requirement.

Image: TestRail Enterprise’s audit logging system helps administrators track changes across the various entities within their TestRail instance. With audit logging enabled administrators can track every entity in their installation.
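
As an illustration of recording automated executions in the central, audited repository, the sketch below pushes one result into TestRail through its public REST API (v2). The endpoint and status IDs follow TestRail’s API documentation; the host, run/case IDs, and credentials are placeholders.

```python
# A minimal sketch of posting an automated result to TestRail's REST API
# (v2) so the execution is captured centrally. Placeholders throughout.
import requests

HOST = "https://example.testrail.io"    # placeholder instance URL
AUTH = ("user@example.com", "api-key")  # use an API key, not a password
RUN_ID, CASE_ID = 42, 1001              # hypothetical run and case IDs

resp = requests.post(
    f"{HOST}/index.php?/api/v2/add_result_for_case/{RUN_ID}/{CASE_ID}",
    auth=AUTH,
    json={"status_id": 1, "comment": "Passed in CI build #123"},  # 1 = passed
)
resp.raise_for_status()
```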

Audit history

It’s important to maintain a log that allows viewing of historical data on test case creation and execution. This supports audit readiness for requirements validation traceability.

Lastly, as teams focus on the development, testing, and delivery of software, we have to be mindful of disaster recovery and data retention of the artifacts we create. 

In the same way we think about disaster recovery for a given system under test, the quality records for testing and release must persist to support compliance requirements and audits. Although centralized test management platforms with integrated restore capabilities are preferred, various tools and processes can be used to achieve this.

Image: TestRail Enterprise’s configurable backup and restore administration features enable administrators to specify a preferred backup time window, see when the last backup was completed, and restore the last backup taken.

Self-assessments & internal auditing

For all teams that are iterating on engineering, testing, and overall SDLC improvements, it’s important to dedicate time to perform self-assessments. 

Self-assessments in the context of software testing and quality in regulated environments can be a highly effective tool for identifying process gaps and shortcomings. 

Self-assessment/audit evaluation criteria

Examples of critical areas to include in your self-assessments or audit readiness exercises include:

  • Having full traceability via linkage of all tests to the corresponding requirements artifact (such as a Jira issue or defect)
  • Tests that have been planned and executed are linked to a given release event/designation
  • Failed tests for a given release or sprint are linked to a defect artifact (such as a Jira defect)

Once a self-assessment or internal audit is performed, ensure that the team collects actionable information such as improvements to requirements traceability or more detailed disaster recovery documentation that can be used to improve the overall SDLC with a focus on core compliance best practices and standards.

Bottom line

Additional considerations and requirements apply across the SDLC when operating teams within regulated industries. The early inclusion of these additional requirements with all team members is critical to ensuring compliance and overall success in audits and other regulatory assessments.

Key Takeaways

  • Focus on traceability, ensure linkage of tests to requirements
  • More focus on security and access controls testing
  • Centralize all test artifacts in a repository with backups/data retention
  • Plan and execute disaster recovery validation

Watch the Testing In Regulated Industries webinar on the TestRail YouTube channel for more information on the unique challenges and characteristics of software testing in regulated industries!

Author


Chris Faraglia, Solution Architect and testing advocate for TestRail.

Chris has 15+ years of enterprise software development, integration, and testing experience spanning the domains of nuclear power generation and healthcare IT. His specific areas of interest include, but are not limited to, test management and quality assurance within regulated industries, test data management, and automation integrations.

TestRail is an EXPO Gold Sponsor at EuroSTAR 2024. Join us in Stockholm.

Filed Under: Gold, Software Testing, Sponsor, Uncategorized Tagged With: 2024, EuroSTAR Conference, Expo

10 EuroSTAR tutorials to elevate your testing

March 24, 2022 by Fiona Nic Dhonnacha

This year’s EuroSTAR tutorials feature 10 in-depth sessions from expert speakers. This is your chance to learn new skills through interactive group work, practical examples, and hands-on exercises. Our tutorial trainers will share their triumphs – and mistakes – so that you learn from real experience.

Want to learn more about how biases affect your testing? Improve your critical thinking skills? Learn new concepts and testing tools? This is where it all happens.

Full-day tutorials

1. Shaping testing – A simulation in scrum | Fran O’Hara

It’s reported that approximately 80% of agile organisations are using Scrum and its hybrids. However, the majority of organisations are still at the earlier stages in their evolution. One of the challenges they face is quality and testing. In this simulation at Fran’s tutorial, you’ll work as part of a Scrum team focusing solely on the test and quality-related aspects.

You’ll get insights, knowledge, and skills to help shape your team’s approach to quality and testing throughout the agile lifecycle. You’ll also encounter a series of challenges and scenarios often faced by testers during the sprint. You’ll walk away with a toolbox of ways to solve the typical problems that impact test and quality in Scrum.


2. Developing critical thinking skills for testers | Andrew Brown

Critical thinking skills are essential to testing – but rarely taught. We’re also vulnerable to cognitive biases and thinking traps when we test, ones that can catch out even the most seasoned tester. One of the most common thinking traps we face is that we tend to think in ways that confirm our existing beliefs, rather than challenging those beliefs.

This tutorial will reveal your vulnerability to this bias, then show you how to mitigate it. Throughout you’ll learn exercises to explore your biases, and how reasoning does not use logic – or at least not rational logic. Rather, we use logic based upon social permissions and cheater detection. You will use this to explore how to write better tests by rephrasing testing requirements in terms of social logic. At the end, you’ll have ways to improve your testing by enhancing your critical thinking skills.


3. Test design with data combination testing and classification trees | Rik Marselis

Test case design is one of the core competences of the quality engineering & testing profession. If you want to properly shape your testing, which test design technique(s) do you use? And is it effective and efficient? What coverage can you achieve? Rik explores the Data Combination Test design technique, which uses Classification Trees.

This technique can be combined with 3 different coverage types, so you have a great way to align with the risk level of the application. Join Rik’s tutorial on DCT & CT and start applying his techniques the very next day you return to work.


Image: A group of attendees at EuroSTAR Conference working together at Rik Marselis's tutorial

4. Automate BDD scenarios with SpecFlow | Gáspar Nagy

SpecFlow is the official Cucumber implementation for .NET, and if you want to speed up automating BDD scenarios with SpecFlow, this workshop is for you. Gáspar will give you a very brief introduction to BDD/ATDD and the most important characteristics of good BDD scenarios, before getting into coding to learn about the most important features of SpecFlow.

This is your opportunity to learn about the BDD automation workflow, and see how the test-first approach can help you get quick feedback about quality. You’ll also see at what levels you can automate the application with BDD, and how to make a good mix of them to get a sustainable testing strategy.


Half-day tutorials

5. Questions, questions | James Lyndsay

At this interactive workshop, you’ll learn techniques to obtain more useful information quickly, ask the right questions at the right time, and learn where to go next. James will show you how to recognise types of questions, the contexts in which you might use them, and the people you might ask.

The tutorial also looks at ways you can use questions poorly: loaded and leading questions, questions designed to trick or trip, questions which go over the same ground, and other anti-patterns. You’ll know how to ask more purposeful and focused questions, and see the patterns, gaps and opportunities in your own questions.


6. From business workflows to automated tests | Anne Kramer

With the deployment of agile development practices, QA & testing teams are challenged by the acceleration of production releases, and the imperative of test automation. These challenges make test relevance, and the alignment of these tests with business needs, even more crucial. After all, what would be the economic justification of investing in automation and maintenance of tests that do not properly reflect the business needs?

This tutorial introduces you to a visual ATDD (Acceptance Test Driven Development) approach, in which the test requirements are expressed as graphical workflows. You’ll build a graphical test workflow for a simple functional scope proposed as a practical exercise. Learn how to express business needs and test requirements in the form of graphical workflows, and appreciate the collaborative contribution of this visual representation.


Image: Speaker teaching a tutorial class at the 2019 EuroSTAR conference

7. Exploring context driven testing & exploratory testing | Nancy Kelln

More testers are gravitating towards using exploratory testing and context driven testing techniques in their organizations. However, as testers start to embrace these testing methodologies, they are uncovering questions in their implementation. In this half-day workshop, you’ll explore the various aspects of testing including test planning, test design, test execution, and test reporting, from the exploratory testing mindset.

You’ll also learn how to prepare your organization for the shift from more traditional methods to exploratory testing methods. You’ll leave with an understanding of how to implement exploratory testing concepts through all the phases of test planning, design, execution, and reporting.


8. Automatic accessibility testing for all | Cecilie Haugstvedt

In this workshop you’ll learn how to set up and write your own automatic accessibility tests using Axe and Cypress. You’ll write both unit tests and integration tests, and will look at what to test at each level. Cecilie will also cover some of the most common accessibility errors that can be discovered automatically, and show you how to fix them.

This workshop will be done in ensemble style – mob testing with small groups where the whole group works on the same thing, at the same time, in the same space, and at the same computer.
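
The workshop itself uses Axe with Cypress (JavaScript). To keep this post’s code examples in a single language, here is the same idea sketched with the axe-selenium-python bindings instead; this is a substitution for illustration, not the workshop’s stack, and the page under test is hypothetical.

```python
# A sketch of an automatic accessibility check using axe-core through the
# axe-selenium-python bindings (pip install axe-selenium-python).
from selenium import webdriver
from axe_selenium_python import Axe

driver = webdriver.Firefox()
try:
    driver.get("https://example.com")  # hypothetical page under test
    axe = Axe(driver)
    axe.inject()        # load the axe-core script into the page
    results = axe.run() # run the accessibility audit
    # Fail if any violations (e.g. missing alt text, low contrast).
    assert len(results["violations"]) == 0, axe.report(results["violations"])
finally:
    driver.quit()
```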


9. Simplicity: distilling and refining test communication | Rick Tracy

This workshop, which uses the idea of simplifying how we communicate among ourselves and others, aims to distil the core results of testing into a much easier-to-understand message. Often we end up using too much industry jargon and test language, and this ends up diluting our message and the value of our communication. Once we’ve distilled the message into something direct and valuable to all, we can much better target specific audiences with a refined message.

You’ll leave the workshop with a good understanding of why your valuable messages don’t always come across, what you can do about it, and how you can apply these skills regularly to your everyday work. You’ll explore several techniques to make a clear overall message, and to make refined valuable messages, no matter who the stakeholder is.


10. The good, the bad, and the biased | Emma Lilliestam & Hanna Schlander

How does the brain work? Why do biases exist, and what are the pros and cons of biases? In the workshop you’ll learn about two thinking systems in the brain, and four categories of biases. We are all biased. Biases affect our everyday lives, both at work and at home. By learning more about them we can expand our view, shape our minds, and improve our testing abilities!

You’ll get the chance to deep dive into biases in groups, and discuss a selection of our favorite biases, before presenting your findings to the group.


Excited to start learning? It’s our first in-person event in 2 years – and it’s going to be a massive celebration of testing! Soak up knowledge from 70 testing experts, and connect with your peers at Europe’s best testing event. Get your ticket now – book by April 22nd and save 10% on individual tickets; up to 35% on group bundles.

Filed Under: EuroSTAR Conference Tagged With: EuroSTAR Conference

7 Automation Sessions to Improve your Test Automation Skills

October 21, 2021 by Ronan Healy

The EuroSTAR Huddle Deep Dive is just around the corner. Taking place from 1-4 November, this live event will take a deep dive into all aspects of test automation. If you are an automation engineer and want to improve your automation skills, the Deep Dive week is the event you should be attending. Let’s have a look at what is happening.

Big Data and Flaky Tests

Adam Sandman & Denis Markovtsev are at the cutting edge of test automation, and they plan to showcase their novel use of big data in test automation. This event will explain how Adam and Denis utilised big data to examine and reduce test flakiness. Part of a research project, this talk will showcase their approach of analysing 500 websites by downloading their DOM trees and performing data analysis to see how best practices developed in theory work in practice with these sites.

What is Cypress?

What is Cypress? That is a question many of you might have some answer to. Cypress is becoming a popular tool in automation, but what does it do? Marie Drake will discuss what Cypress is as a tool, and look at the importance of visual testing and how to integrate visual testing plugins into Cypress.

Making use of Low Code Automation

Paul Grossman has many years of experience in automation. He is a fan of low code automation, which makes him a great person to discuss its uses. If you have thought about applying low code tools, this talk will fill you in. Paul will explain how you can utilise low code automation and showcase its use with some demos too.

Learning From Mistakes in Automation

What are the common mistakes made in test automation? Corina Pip is here to describe and share her advice on the common mistakes in test automation from her experience in the field. This will be a great talk for learning about the common pitfalls in automation that we end up in. The Dos and Don’ts of Automation will offer some real-world experiences of when automation turns out to be not as useful as you might have hoped!

API Testing

APIs have become a bigger part of our everyday lives, which means a bigger requirement for testing them. Julia Pottinger will discuss API automation, and in particular the scenarios to consider when doing API automation. Learn about the world of APIs, where to get started, and how to begin the process of testing them.

Ask Me Anything

Dorothy Graham has over 30 years in the software testing industry, much of that focused on automation. In this live event she will reflect on the week of live events and on where automation is going, but more importantly she will answer your questions on test automation.

Testing the Tests

The week concludes with a renowned test automation expert: Bas Dijkstra. Bas will ask and answer the question: who tests the tests? In this lively session he will share how you can make quality control for your automated tests part of your testing and development process. He will also introduce the technique of mutation testing and show how you might use it to check the quality of your automated tests.
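
For readers new to the idea, here is a small, self-contained illustration of what mutation testing checks; the function and tests are hypothetical, and real tools such as mutmut automate the mutate-and-rerun loop.

```python
# An illustrative sketch of the mutation testing idea: mutate the code under
# test and check whether the suite notices. Everything here is hypothetical.
def is_adult(age: int) -> bool:
    return age >= 18          # original

def is_adult_mutant(age: int) -> bool:
    return age > 18           # mutant: >= changed to >

def weak_suite(fn):
    # No boundary check: the mutant survives, revealing a weak suite.
    assert fn(30) is True
    assert fn(5) is False

def strong_suite(fn):
    # Boundary value at 18 "kills" the mutant.
    assert fn(18) is True

weak_suite(is_adult)
weak_suite(is_adult_mutant)   # also passes: mutant survives the weak suite
strong_suite(is_adult)        # passes on the original
try:
    strong_suite(is_adult_mutant)  # fails: mutant killed
except AssertionError:
    print("mutant killed - the boundary test adds real value")
```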

That’s seven talks to get you excited about the new skills and takeaways you could gain from these sessions. Remember to sign up here.

Filed Under: Test Automation Tagged With: EuroSTAR Conference, Test Automation

Reminiscing EuroSTAR 2021: My Top 10

October 20, 2021 by Ronan Healy

A EuroSTAR Committee member looks back at the 2021 software testing conference.

It has been two weeks since the action-packed conference ended, but I am still full of thoughts from the amazingly memorable event it has been this year. I have been associated with EuroSTAR for many years now as a reporter and a volunteer, but this year was all the more special, with an opportunity to be a part of the programme committee. It was a huge honour and I enjoyed every moment of playing the role, working alongside some of the biggest names in the testing world: Fran O’Hara, Janet Gregory, Derk-Jan De Grood and Szilard Szell. This has been an exhilarating experience to say the least!

Right from going through the significant number of submissions to making decisions on sessions for the three days, it was incredibly fulfilling and enlightening to witness the programme come together. The collaboration with fellow committee members was brilliant as we went through the incredibly difficult process of choosing sessions from submissions of very high quality! The programme always feels incredibly impressive, but this year there was a personal connection. To witness its final form unravel to hundreds of participants is one of the most unforgettable experiences!

This being the second online edition of the conference, there were a number of aspects to be considered, but I must say none of that felt like an insurmountable challenge, thanks to the unwavering support provided by the fantastic conference team. As the event got underway, there was so much energy and knowledge in the virtual environment that I felt transported into a world of unlimited inspiration! So much to learn and rejoice!

As always, I want to list the top 10 things from this year’s conference. In no particular order, these aspects stood out for me this year:

  1. Engage theme: I felt this year’s theme was a winner in attracting a huge variety of submissions. It was an awe-inspiring experience to see the various interpretations of it! I found it created a welcoming spirit amongst the conference participants, as the theme could have multiple interpretations, which helped spur creativity and imagination. Many congratulations to conference chair Fran for coming up with this motivational theme – it was a joy to work on it as a programme committee member.

  2. Tutorials: I always enjoy this format of the programme, as you get to focus and absorb information on one specific topic. This year, once again, the tutorial experience was an absolute delight, being a part of the award-winning team of Anne Colder and Jantien van de Meer, who effortlessly delivered a highly coordinated, superbly crafted session. A stellar line-up of speakers, including maestros Michael Bolton and Rik Marselis, delivered these phenomenal sessions, giving the participants an opportunity to deep dive into topics of interest – a great feature of the conference!

  3. New terminologies: I learnt a few excellent terms this year as part of many conversations. ‘Critical incident’ is the top one, shared by testing guru Dawn Haynes. As Dawn puts it, a critical incident is an experience after which you never return to operating like before. To me, all of this year’s experience has been one such critical incident! Another term is ‘happiness engineer’, which I derived from an amazing keynote delivered by Michiel Boreel, who referred to testers as the guardians of digital happiness and in fact went on to suggest we should consider this new title for testing professionals! These interesting terms have been duly added to my everyday vocabulary!

  4. Implementable knowledge: All through the three days, the sessions I attended delivered not just the theory behind an idea but also an excellent focus on the practical implementation of those approaches. Be it Building relationships by Lukasz Pietrucha, Automation pitfalls and possibilities by Sune Engsig, Quality and human factors by Andrew Brown, or Dealing with conflict by Marielle Roozemond – I could list possibly every single session I attended, each of which gave me new insight into how differently testers can approach real-life challenges. As I catch up on recordings from the event, my learning continues!

  5. Riveting keynotes: Each and every keynote session in this year’s conference was a gem; I was continually amazed at how fantastically these topics were presented by the skillful speakers. As one of the participants said, “how many times can a mind be blown in three days!” I found all the keynotes outstanding, delivering key messages that are highly relevant to our area of work. Rob Lambert, Aprajitha Mathur, Michiel Boreel, Maaret Pyhäjärvi and Janet Gregory – a huge round of applause for your incredibly impactful deliveries.

  6. Lightning talks: This has been my favourite format of talk ever since I witnessed it back in 2018, with none other than Fran O’Hara on the stage! I was delighted not only to see this format again on the EuroSTAR platform but also to get the privilege of being a part of it! I was one of the eight speakers who delivered a brief 5-minute talk on a topic of their choice, offering great nuggets of thought for the audience to consider. It was a high-energy, fast and interesting session. I personally enjoyed preparing and delivering my short piece, and felt honoured to have shared the virtual stage with amazing speakers like Dawn Haynes, Tariq King, Elise Carmichael, Rob Meaney, Sanne Visser, Chris Armstrong and Raj Subrameyer. Thrilled to have been a part of this fantastic ensemble!

     

  7. Huddle sessions: This is an aspect of the conference I hold close, as my predominant involvement as a reporter and volunteer has been in this space. As always, it was action-packed this year with a variety of conversations and activities: inspirational AMA sessions with Michael Bolton, Rob Lambert, Jyothi Bhatt and Sune Engsig, career advice from Raj Subrameyer, ensemble testing with Maaret Pyhäjärvi, an exploratory bug hunt with Marek Lof and so much more! The breaks between sessions had many engaging activities to relax and to network with fellow participants. The feedback sessions were great for hearing from members of the community. Always a hugely uplifting huddle of people!

  8. Award winners: It was an absolute delight to learn of the most deserving winners of the awards, which were actually top secret and not known until announced at the ceremony! Hearty congratulations to Kari Kakkonen on the Software Testing Excellence award! I had the privilege of attending Kari’s session at last year’s conference and I have since been in awe of his dedication and enthusiasm for testing, particularly for educating the younger generation. The winner of the best paper award, Adonis Celeste, is again someone I have seen on the EuroSTAR platform back in 2018. His white paper offers deep insights and excellent pointers for the tester role, and is a highly recommended read. We are fortunate as a community to have such brilliant thought leaders!

  9. Conference team: I have known this team for a few years now and I guess I have run out of superlatives to express how much I admire them! It was yet another extraordinary experience to see the event being brought together skillfully and coordinated to perfection. An additional aspect was how much experience the team has in running the conference, and how deftly they apply it in real time. The process flowed smoothly in spite of how complex a three-day event can be in terms of logistics and planning. Their tireless efforts, commitment and professionalism are simply top notch. Truly a wow team!

  10. Testing community spirit: This entry has stayed in my top 10 list ever since I started attending the EuroSTAR conference. It is an aspect I am so proud of and very much enjoy during this event. The enthusiasm and community spirit just shine through on every occasion. This year, too, the participants proved that the format is secondary and that the spirit of being in the company of fellow testing professionals matters much more! The conversations during the talks and the engagement with the Q&A sessions were outstanding! It was really nice to meet many participants from near and far, covering a big spectrum of career experience. The celebration of diversity in this community is exemplary and I feel really lucky to be a part of it!

As this year’s conference comes to an end, the next one already looks hugely exciting! To be held in the beautiful city of Copenhagen, the conference is going to be the first in-person event after two years of being virtual. It will be headed by none other than testing guru Graham Freeburn, accompanied by a team of experts: Sue Atkins, Morten Hougaard, Bart Knaack, and Tone Molyneux. I am sure this fabulous team will come up with a brilliant programme that we can all look forward to!

In conclusion, I must mention that this year has been particularly emotional for me, as EuroSTAR has been on my mind for the last 8 months. I am immensely happy for conference chair Fran, my fellow committee members and the conference team on the success of this year’s event. My hearty congratulations to everyone involved – the speakers, the audience, the sponsors – what a festive experience it was from start to finish! I also feel a bit philosophical about the year that has been, in the larger sense. We have all been through strange times, and this event made me reflect on how we have managed to stay connected as a community in spite of multiple challenges. Human resilience and adaptability will hopefully see us through as a society as we transition back to the ‘original normal’. Here’s to the new sprouts of hope that have started to appear, and here’s to good health and well-being for everyone!

About the Author

Sowmya Ramesh

Sowmya Ramesh is a testing professional with over 18 years of IT industry experience, currently working as a consultant with 2i Testing. She has a deep interest in the area of non-functional testing, in particular accessibility testing, which she has promoted for a number of years in the testing community. Sowmya writes blogs on topics of professional interest and has been a speaker at events including MOT Edinburgh and the DevTest Summit. Sowmya was awarded a reporter role at EuroSTAR Conferences in 2017 and 2018, and in 2021 was part of the EuroSTAR Conference Committee.

Filed Under: EuroSTAR Conference Tagged With: EuroSTAR Conference

EuroSTAR 2021 Day 2 – Sketchnotes

September 30, 2021 by Ronan Healy

Woohoo, finally it’s that time of the year again – the EuroSTAR Conference is on. This year it took place as an online edition which gave people from all around the world the opportunity to take part in this amazing conference without needing to travel during the pandemic times.

Although it was an online edition, there was plenty of space to meet other attendees. You could do a speed meet session and talk to someone randomly for 3 minutes. You could join a lean coffee session and discuss interesting topics. You could get solutions to your problems from the Test Clinic. Or you could check out the demos of the sponsors and stroll through the Expo area.

The Programme Committee, led by Programme Chair Fran O’Hara, did an amazing job choosing the speakers for this year’s edition, and you could listen to talks from different areas of testing, from speakers with different backgrounds and from different cultures. Let me summarize my first day at the EuroSTAR Conference 2021 using my sketchnotes.

Moving to frequent releases. The 10 Communication Principles that support rapid change.

Rob Lambert
The first keynote of the day was held by Rob Lambert. Rob started by explaining the 5 step thinking model. First, you paint a picture of the bright future (like a vision). Then you lean into the problem and ask if the team is the right one to get it done. Afterwards it’s all about routines, habits, disciplines and processes, and finally there is a lot of learning.

He then talked about the 10 (+1) Communication Principles to support rapid change. These are:

  1. Enthusiasm
  2. Purpose, audience, context
  3. Communication is something the listener does
  4. Use stories
  5. Don’t waste people’s time
  6. Practice is preparation
  7. People remember how you make them feel
  8. Non-verbal is a superpower
  9. People resonate with those who sound like them
  10. You can hack your body
  11. Listening is the greatest compliment
Sketchnote: Moving to frequent releases - the 10 Communication Principles that support rapid change

Quality is not about testing … it’s about value!

Gitte Ottosen
Gitte thinks that it is not that easy to define what quality is, as it doesn’t mean the same to everybody. It is the value to some person, at some time, who matters. So you have to find out who the people that matter are.

More importantly, you have to think about what value means to you and get a common understanding throughout the whole team. You can do so by using the VOICE Model: Value, Objectives, Indicators, Confidence, Experience. As value is time-dependent, you have to think about what you want to get out of the things you do right now – this will change over time.
Sketchnote: Quality is not about testing - it's about value

5 myths and anti-patterns to refactor out with continuous performance

Paul Bruce
Nowadays, organizations want to deliver quickly. They are moving toward a more continuous future, where they do smaller releases more often and share knowledge along the way. This only works if continuous performance feedback is given.

Paul used 5 myths/anti-patterns to explain how to develop a continuous performance mindset:

  1. Prohibited Ubiquity
  2. Expedited Gridlock
  3. Mandated Ignorance
  4. Escape Philosophy
  5. Predictable Unreliability

Have a look at my sketchnote to learn more about the mantra and possible actions for each anti-pattern.
[Sketchnote: 5 myths and anti-patterns to refactor out with continuous performance]

CSI testing – investigate like a pro

Adam Matlacz & Elzbieta Sikora
Adam and Elzbieta think that exploratory testing brings many opportunities for testers, especially when the sessions are held with people in different roles. As a tester, you can behave like a detective when you are on a bug hunt. That’s how they came up with CSI Testing.

The CSI principles are Concentration, System thinking and Impartiality (plus: break the rules). Focus on a goal, use a holistic approach and treat everything fairly, without prejudice. A CSI tester should build skills around different types of thinking, tools and gadgets, and approaches and techniques.

The CSI procedure looks like this:

  1. Approach the scene: Find out what you deal with
  2. Start investigation: Narrow down
  3. Release crime scene: Teamwork
  4. Conduct the trial: Confront suspects, call witnesses, judgement
  5. Debriefing: Retrospect
[Sketchnote: CSI testing – investigate like a pro]

Lightning Strikes

Lightning Strikes are short talks of 5 minutes in which only 2 slides are allowed. This year, 8 speakers gave lightning talks.

Lightning Strikes 1

Tariq King, Sanne Visser, Rob Meaney & Elise Carmichael
Tariq thinks that the world is full of bad software, and that by feeding AI & ML systems with bad data and biases, the problem is being made even worse. By testing your software and using a holistic approach, you can unlock a revolution and make a difference.

The essence of Sanne’s talk is that there are different ways to get where you want to go: take a look around and think outside the box so you don’t get stuck on the way things ‘should’ be.

Rob shared two philosophies with us. The first is ‘Seek problems, solve problems, share lessons’, which is basically about sharing what you’ve learned with everyone and working in a holistic manner to enhance quality. The other is ‘Fewer, smaller things together’ – fewer as in the variety and volume of work, smaller as in slicing things down more, and together as in cross-functional teams that start and finish together.

Elise thinks that alongside maintaining, analyzing and creating tests there is a huge portion of test debt – a backlog of work not done. She thinks that flipping the test pyramid upside-down helps you refocus on the customer’s experience and thereby also reduce some of this debt.
[Sketchnote: Lightning Strikes 1]

Lightning Strikes 2

Raj Subrameyer, Dawn Haynes, Chris Armstrong & Sowmya Ramesh
Raj compared work before and during Covid and found that onsite interviews, working from home as a luxury, limited virtual meetings and mandatory travel have all shifted to no onsite interviews, working from home as a necessity, virtual meetings as the norm and travel no longer existing. This points to a huge change in the future of work, in which a social media footprint, personal branding, remote working, going the extra mile and re-tooling your skillset become crucial.

Dawn’s approach to finding and hiring awesome testers is to use these attributes of a CRACK tester:

  • Curious
  • Resourceful
  • Adaptable
  • Creative
  • Knightly

In short, Chris’ talk was about not overlooking any rules when doing test automation. A small mistake, such as letting one person rather than the whole team decide which tool to use, might turn into a huge problem.

Experimentation is pivotal to success. Sowmya encourages us to be open to change so that we don’t lose any opportunities. But while experimenting, you should not forget to do extensive research, monitor and measure, improvise as required, document the lessons learned and implement with confidence.
[Sketchnote: Lightning Strikes 2]

The seeds of toxicity we’ve been trained to overlook at work

Raj Subrameyer
Raj has dealt a lot with stereotyping and racism himself – mostly microaggressions (actions or words that can be offensive). He discovered that these often appear in workplaces and should definitely be fought. He has 5 proposals on how to curb microaggressions:

  • Diversity & inclusion training
  • Having open conversations
  • Own your mistakes
  • Anonymous help hotline
  • No tolerance policy
[Sketchnote: The seeds of toxicity we’ve been trained to overlook at work]

Continuous performance testing in DevOps

Lee Barnes
Traditional performance testing, which often comes too late and takes too long, is dead. Continuous performance – evaluation at each stage and more frequent feedback – is on the rise. But you have to think about performance factors early on in your development process. Incorporate the requirements as constraints and/or as acceptance criteria for existing user stories.
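
To make that concrete, here is a minimal sketch – my own illustration, not from Lee’s talk – of a performance requirement expressed as an executable acceptance criterion; the endpoint, sample size and latency budget are hypothetical:

```python
# A minimal sketch of a performance requirement turned into an executable
# acceptance criterion. The endpoint, sample size and p95 budget are
# hypothetical illustrations, not taken from the talk.
import statistics
import time

import requests  # third-party: pip install requests

ENDPOINT = "https://staging.example.com/api/search"  # hypothetical URL
P95_BUDGET_MS = 500   # e.g. "95% of search requests complete within 500 ms"
SAMPLES = 50


def test_search_latency_within_budget():
    latencies_ms = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        requests.get(ENDPOINT, timeout=5)
        latencies_ms.append((time.perf_counter() - start) * 1000)
    # statistics.quantiles with n=100 yields 99 cut points; index 94 is p95.
    p95 = statistics.quantiles(latencies_ms, n=100)[94]
    assert p95 < P95_BUDGET_MS, f"p95 latency {p95:.0f} ms exceeds budget"
```

Run with the rest of the suite (for example via pytest), a check like this fails the build whenever the budget is breached – exactly the kind of early, frequent feedback continuous performance is about.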

Lee advises us to start small and expand to move from a centralized to an integrated performance team. Gather feedback and continuously improve.

Testing and monitoring in production is important, but don’t forget to do the following (see the code sketch below for the second point):

  • eliminate requests to 3rd parties
  • ensure system “knows” that it’s being tested
  • identify an ideal test window
  • coordinate with infrastructure providers
  • solicit broad IT input
[Sketchnote: Continuous performance testing in DevOps]
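
Here is a minimal sketch of what that second point might look like in practice – my own illustration, not from Lee’s talk: synthetic traffic carries a marker header, and the service skips real side effects for it. The header name and the small Flask service are assumptions:

```python
# A minimal sketch of letting a system "know" it is being tested: synthetic
# traffic carries a marker header, and the service skips real side effects
# for it. The header name and the Flask service are illustrative assumptions.
from flask import Flask, request  # third-party: pip install flask

app = Flask(__name__)
SYNTHETIC_HEADER = "X-Synthetic-Test"  # hypothetical team convention


def charge_customer() -> None:
    ...  # real payment integration would live here


def notify_third_party() -> None:
    ...  # real outbound call would live here


@app.post("/orders")
def create_order():
    is_synthetic = request.headers.get(SYNTHETIC_HEADER) == "true"
    # Persist the order as usual so the test exercises the real code path...
    if not is_synthetic:
        charge_customer()     # ...but skip real charges for test traffic
        notify_third_party()  # and avoid requests to 3rd parties
    return {"status": "created", "synthetic": is_synthetic}, 201
```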

The journey of testing software for DNA analysis

Aprajita Mathur
The last keynote of the day was held by Aprajita Mathur, who talked about testing software for DNA analysis. The standard workflow for analysis of genome sequence data is:

  1. Alignment
  2. Reference Genome
  3. A) Somatic or B) Germline variant calling
  4. Variant filtering & annotation
  5. Data visualization & reporting

Machine learning applications support this analysis and can be supervised, unsupervised or predictive. When testing these pipelines – so-called bioinformatics analysis pipelines – keep the following in mind (see the code sketch after this list for the fourth point):

  • Statistical models are used
  • The models are trained on data sets
  • The model is as good as the data or worse
  • You aren’t testing the exact output, but expected behaviour
  • You have to test in different situations
  • There are a lot of changes, which lead to complexity, but also to exploration and fun
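
To illustrate the fourth point, here is a minimal sketch – my own example, not from Aprajita’s talk – of asserting on behaviour (recall and precision against a curated truth set) rather than on exact output; the pipeline function, data and thresholds are all hypothetical:

```python
# A minimal sketch of testing expected behaviour instead of exact output.
# `call_variants`, the truth set and the thresholds are hypothetical
# illustrations, not taken from the talk.
def call_variants(sample_path: str) -> set[str]:
    ...  # the real pipeline would run alignment and variant calling here
    return {"chr1:12345A>G", "chr2:67890C>T"}  # stubbed result for the sketch


def test_variant_calls_meet_behavioural_thresholds():
    truth = {"chr1:12345A>G", "chr2:67890C>T", "chr3:11111G>A"}
    called = call_variants("sample.bam")  # hypothetical input file
    true_positives = len(called & truth)
    recall = true_positives / len(truth)
    precision = true_positives / len(called)
    # Tolerance thresholds rather than exact-output equality, because the
    # statistical model is only as good as (or worse than) its training data.
    assert recall >= 0.6, f"recall {recall:.2f} below threshold"
    assert precision >= 0.9, f"precision {precision:.2f} below threshold"
```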

There is a lot of common ground between testing software for DNA analysis and “normal” software testing. But DNA analysis also brings challenges of its own, such as gold standards, domain knowledge, population genomics and the fact that nature always has its way.
[Sketchnote: The journey of testing software for DNA analysis]

As you can see, this day of EuroSTAR 2021 already covered a lot of different topics – from communication principles and performance testing to racism at work and testing software for DNA analysis. That’s what I call a huge variety. And there are so many talks that I haven’t attended and might re-watch after the conference.

About the Author


Katja Budnikov is a software tester and sketchnoter from Northern Germany. Katja is passionate about software testing and sketchnoting! She loves attending events like EuroSTAR and sharing her experience and learnings with others on her blog Katjasays.com. Katja first started sketchnoting in 2016 – first analogue with pen and paper, and now digitally with an iPad and Apple Pencil.

In her work life, Katja started out in online marketing, then specialized in search engine optimisation, and is now a quality assurance specialist in both manual and automated software testing. Away from work, Katja loves photography, especially taking photos of nature, including many of her dog Auri, a young Australian Shepherd who is super cute and fun to photograph. She loves to spend time with her dog and partner, going for walks, travelling and eating cake at a nearby coffee shop with a beautiful garden.

