
EuroSTAR Conference

Europe's Largest Quality Engineering Conference


Myth vs. Reality: 10 AI Use Cases in Test Automation Today

March 5, 2024 by Lauren Payne

For decades, the sci-fi dream of simply speaking to your device and having it perform tasks for you seemed far-fetched. In the realm of test automation and quality assurance, this dream is inching closer to reality. With the evolution of generative AI, we’re prompted to explore what’s truly feasible. Embedding AI into your quality engineering processes becomes imperative as IT infrastructures become increasingly complex and integrated, spanning multiple applications across business processes. AI can help alleviate the daunting tasks of knowing what to test, how to test it, creating relevant tests, and deciding what type of testing to conduct, boosting productivity and business efficiency.

But what’s fact and what’s fiction? The rapid evolution of AI makes it hard to predict its capabilities accurately. Nevertheless, we’ve investigated the top ten key AI use cases in test automation, distinguishing between today’s realities and tomorrow’s aspirations.

1. Automatic Test Case Generation

Reality: AI can generate test cases by analyzing user stories along with requirements, code, and design documents, including application data and user interactions. For instance, large language models (LLMs) can interpret and analyze textual requirements to extract key information and identify potential test scenarios. This can be used with static and dynamic code analysis to identify areas in the code that present potential vulnerabilities requiring thorough testing. Integrating both requirement and code analysis can help generate potential manual test cases that cover a broad set of functionalities in the application.
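To make the idea concrete, here is a toy, rule-based sketch in Python. It is not an LLM pipeline; the requirement phrasing and the `generate_boundary_cases` helper are illustrative assumptions standing in for the kind of requirement analysis described above, using classic boundary-value analysis around a detected threshold.

```python
import re

def generate_boundary_cases(requirement: str):
    """Extract a numeric threshold from a textual requirement and propose
    boundary-value test inputs around it (a toy heuristic standing in for
    LLM-based requirement analysis)."""
    match = re.search(r"(below|under|over|above)\s+(?:the\s+age\s+of\s+)?(\d+)", requirement)
    if not match:
        return []
    threshold = int(match.group(2))
    # Classic boundary-value analysis: just below, at, and just above the limit.
    return [threshold - 1, threshold, threshold + 1]

cases = generate_boundary_cases(
    "A person below the age of 18 is not eligible for insurance"
)
print(cases)  # [17, 18, 19] -> inputs that exercise the eligibility rule
```

A real tool would combine this with code analysis and produce full test steps; the point here is only the requirement-to-test-input mapping.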

Myth: But here’s the caveat: many tools on the market that offer automated test case generation actually produce manual tests, not automated ones. Generating fully automated, executable test cases remains a myth and still requires further proof. Additionally, incomplete, ambiguous, or inconsistent requirements may not yield the right set of tests, and this area requires further development. Generated test cases may not cover edge cases or highly complex scenarios, nor can they handle completely new applications, and analysing application and user-interaction data may not always be possible. As a result, human testers will always be needed to check the completeness and accuracy of the test suites and to consider all possible scenarios.

2. Autonomous Testing

Reality: Autonomous testing automates the automation. Say what? Imagine inputting a prompt into an AI model like “test that a person below the age of 18 is not eligible for insurance.” The AI would then navigate the entire application, locate all relevant elements, enter the correct data, and test the scenario for you. This represents a completely hands-off approach, akin to Forrester’s level 5 autonomous state.

Myth: But are we there yet? Not quite, though remarkable technologies are bridging the gap. The limitation of Large Language Models (LLMs) is their focus on text comprehension, often struggling with application interaction. For those following the latest in AI, Rabbit has released a new AI mobile phone named r1 that uses Large Action Models (LAMs). LAMs are designed to close this interaction gap. In the realm of test automation, we’re not fully there. Is it all just hype? It’s hard to say definitively, but the potential of these hybrid LAM approaches, which execute actions more in tune with human intent, certainly hints at a promising future.

3. Automated Test Case Design

Reality: AI is revolutionising test case design by introducing sophisticated methods to optimise testing processes. AI algorithms can identify and prioritise test cases that cover the most significant risks. By analyzing application data and user interactions, the AI can determine which areas are more prone to defects or have higher business impact. AI can also identify key business scenarios by analysing usage patterns and business logic to auto-generate test cases that are more aligned with real-world user behaviors and cover critical business functionalities. Additionally, AI tools can assign weights to different test scenarios based on their frequency of use and importance. This helps in creating a balanced test suite that ensures the most crucial aspects of the application are thoroughly tested.
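The weighting idea above can be sketched in a few lines. This is an illustrative risk score (usage frequency multiplied by business impact), not any vendor's actual algorithm; the test names and numbers are made up.

```python
def prioritize(test_cases):
    """Rank test cases by a simple risk score:
    usage frequency x business impact (illustrative weighting)."""
    return sorted(test_cases, key=lambda t: t["frequency"] * t["impact"], reverse=True)

suite = [
    {"name": "checkout_flow",  "frequency": 0.9, "impact": 10},
    {"name": "profile_avatar", "frequency": 0.2, "impact": 2},
    {"name": "login",          "frequency": 1.0, "impact": 8},
]
ranked = prioritize(suite)
print([t["name"] for t in ranked])  # highest-risk scenarios first
```

In practice the frequency and impact figures would come from the application-data and user-interaction analysis described above, rather than being hand-assigned.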

Myth: However, AI cannot yet fully automate the decision-making process in test suite optimisation without human oversight. The complexity of certain test scenarios still requires human judgment. Moreover, AI algorithms are unable to auto-generate test case designs for new applications, especially those with highly integrated end-to-end flows that span across multiple applications. This capability remains underdeveloped and, for now, is unrealised.

4. Testing AI Itself

Reality: As we increasingly embed AI capabilities into products, the question evolves from “how to test AI?” to “how to test AI, gen AI, and applications infused with both?” AI introduces a myriad of challenges, including trust issues stemming from potential problems like hallucinations, factuality issues, and explainability concerns. Gen AI, being a non-deterministic system, produces different and unpredictable outputs. Untested AI capabilities and AI-infused applications can lead to multiple issues, such as biased systems with discriminatory outputs, failure to identify high-risk elements, erroneous test data and design, misguided analytics, and more.

The extent of these challenges is evident. In 2022, there were 110 AI-related legal cases in the US, according to the AI Index Report 2023. The number of AI incidents and controversies has increased 26-fold since 2021. Moreover, only 20% of companies have risk policies in place for Gen AI use, as per McKinsey research in 2023.

Myth: Testing scaled AI systems, particularly Gen AI systems, is unexplored territory. Are we there yet? While various approaches and methodologies exist for testing more traditional neural network systems, we still lack comprehensive tools for testing Gen AI systems effectively.

AI Realities in Test Automation Today

The use cases that follow are already fully achievable with current test automation technologies.

5. Risk AI

It’s a significant challenge for testers today to manage hundreds or thousands of test cases without clear priorities in an Agile environment. When applications change, it raises critical questions: Where does the risk lie? What should we test or prioritize based on these changes? Fortunately, risk AI, also known as smart impact analysis, offers a solution. It inspects changes in the application or its landscape, including custom code, integration, and security. This process identifies the most at-risk elements where testing should be focused. Employing risk AI leads to substantial efficiency gains in testing. It narrows the testing scope, saving considerable time and costs, all while significantly reducing the risk associated with software releases.
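The core of smart impact analysis is selecting only the tests whose footprint intersects what changed. Here is a minimal sketch; the coverage map is a hand-written assumption, whereas a real tool would derive it from code, integration, and coverage analysis.

```python
# Map each test to the application components it exercises
# (in practice this would come from code/coverage analysis, not by hand).
TEST_COVERAGE = {
    "test_checkout": {"cart", "payment"},
    "test_search":   {"catalog"},
    "test_login":    {"auth"},
}

def select_impacted_tests(changed_components):
    """Return only the tests whose covered components intersect the set of
    changed components (the essence of smart impact analysis)."""
    changed = set(changed_components)
    return sorted(
        name for name, covered in TEST_COVERAGE.items()
        if covered & changed
    )

print(select_impacted_tests(["payment"]))  # ['test_checkout'] -> narrowed scope
```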

6. Self-Healing

By identifying changes in elements at both the code and UI layer, AI-powered tools can auto-heal broken tests after each execution. This allows teams to stabilize test automation while reducing time and costs on maintenance. Want to learn more about how Tricentis Tosca supports self-healing for Oracle Fusion and Salesforce Lightning and Classic? Watch this webinar.
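The self-healing mechanism boils down to locator fallback: when the primary locator no longer resolves, the tool promotes an alternate that still does. This sketch simulates a DOM with a plain dictionary; the locator strings are illustrative, not Tosca's actual implementation.

```python
def find_element(dom, locators):
    """Try each locator in order; if the primary one fails, 'heal' the test
    by returning the first alternate that still resolves."""
    for locator in locators:
        if locator in dom:
            return locator, dom[locator]
    raise LookupError("no locator matched; manual repair needed")

# The UI changed: the old id is gone, but an alternate attribute survives.
dom_after_change = {"css=[data-test=submit]": "<button>", "text=Submit": "<button>"}
healed_locator, element = find_element(
    dom_after_change,
    ["id=submit-btn", "css=[data-test=submit]", "text=Submit"],
)
print(healed_locator)  # css=[data-test=submit] -> the test keeps running
```

A real tool would additionally persist the healed locator so the next run uses it as the new primary.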

7. Mobile AI

Through convolutional neural networks, mobile AI technology can help testers understand and analyze mobile interfaces to detect issues in audio, video, image quality, and object steering. This capability helps provide AI-powered analytics on performance and user experience with trend analysis across different devices and locations, helping to detect mobile errors rapidly in real time. Tricentis Device Cloud offers a mobile AI engine that can help you speed up mobile delivery. Learn more here.

8. Visual Testing

Visual testing helps to find cosmetic bugs in your applications that could negatively impact the user experience. The AI works to validate the size, position, and color scheme of visual elements by comparing a baseline screenshot of an application against a future execution. If a visual error is detected, testers can reject or accept the change. This helps improve the user experience of an app by detecting visual bugs that otherwise cannot be discovered by functional testing tools that query the DOM.
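The baseline-comparison idea can be shown with a toy pixel diff. A real tool compares rendered screenshots with a perceptual model; this sketch uses nested lists of grayscale values purely to illustrate the mechanism.

```python
def visual_diff(baseline, current, tolerance=0):
    """Compare two equally-sized grayscale 'screenshots' (nested lists of
    pixel values) and return coordinates that differ beyond the tolerance."""
    diffs = []
    for y, (row_a, row_b) in enumerate(zip(baseline, current)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > tolerance:
                diffs.append((x, y))
    return diffs

baseline = [[0, 0], [0, 255]]
current  = [[0, 0], [90, 255]]   # one pixel changed colour
print(visual_diff(baseline, current))  # [(0, 1)] -> flagged for human review
```

The tester then accepts the change (updating the baseline) or rejects it (filing a cosmetic bug), exactly as described above.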

9. Test Data Generation

Test data generation using AI involves creating synthetic data that can be used for software testing. By using machine learning and natural language processing, you can produce dynamic, secure, and adaptable data that closely mimics real-world scenarios. AI achieves this by learning patterns and characteristics from actual data and then generating new, non-sensitive data that maintains the statistical properties and structure of the original dataset, ensuring that it’s realistic and useful for testing purposes.
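A minimal sketch of the "learn the statistics, emit fresh records" idea, assuming a simple normal distribution. Real tools model far richer structure (relationships, formats, referential integrity); the salary figures here are invented.

```python
import random
import statistics

def synthesize(real_values, n, seed=0):
    """Learn mean/stdev from the real data, then draw fresh values with the
    same statistical shape but none of the original records."""
    mu = statistics.mean(real_values)
    sigma = statistics.stdev(real_values)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

real_salaries = [42000, 45500, 39800, 51200, 47300]
fake = synthesize(real_salaries, n=1000)
# The synthetic set tracks the original distribution without
# reproducing any sensitive record verbatim.
print(statistics.mean(fake), any(v in real_salaries for v in fake))
```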

10. Test Suite Optimisation

AI algorithms can analyze historical test data to identify flaky tests, unused tests, redundant or ineffective tests, tests not linked to requirements, or untested requirements. Based on this analysis, you can easily identify weak spots or areas for optimization in your test case portfolio. This helps streamline your test suite for efficiency and coverage, while ensuring that the most relevant and high-impact tests are executed, reducing testing time and resources.
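One of these analyses, flaky-test detection, is easy to sketch: a test that both passed and failed across historical runs is a flakiness candidate. The run history below is made up; a real optimiser would also correlate results with code versions and environments.

```python
from collections import defaultdict

def classify_flaky(history):
    """history: list of (test_name, passed) results over many runs.
    A test that has both passed and failed is a flakiness candidate."""
    outcomes = defaultdict(set)
    for name, passed in history:
        outcomes[name].add(passed)
    return sorted(name for name, seen in outcomes.items() if seen == {True, False})

runs = [
    ("test_login", True), ("test_login", True),
    ("test_cart", True), ("test_cart", False), ("test_cart", True),
]
print(classify_flaky(runs))  # ['test_cart'] -> candidates for review or quarantine
```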

What about AI’s role in performance testing, accessibility testing, end-to-end testing, service virtualization, API testing, unit testing, and compatibility testing, among others? We’ve only just scraped the surface and begun to explore the extensive range of use cases and capabilities that AI potentially offers today. Looking ahead, AI’s role is set to expand even further, significantly boosting QA productivity in the future.

As AI continues to evolve, offering tremendous benefits in efficiency, coverage, and accuracy, it’s important to stay cognizant of its current limitations. AI does not yet replace the need for skilled human testers, particularly in complex or nuanced scenarios. AI still lacks the human understanding needed to ensure full software quality. Developing true enterprise end-to-end testing spanning multiple applications across web, desktop, mobile, SAP, Salesforce, and more requires a great deal of human thinking and human ingenuity, including the capability to detect errors. The future of test automation lies in a balanced collaboration between AI-driven technologies and human expertise.

Want to discover more about Tricentis AI solutions and how they can cater to your unique use cases? Explore our innovative offerings.

Tricentis offers next-generation AI test automation tools to help accelerate your app modernisation, enhance productivity, and drive your business forward with greater efficiency and superior quality.

Author

Simona Domazetoska – Senior Product Marketing Manager, Tricentis

Tricentis is an EXPO Gold Sponsor at EuroSTAR 2024. Join us in Stockholm.

Filed Under: EuroSTAR Conference, Gold, Sponsor, Test Automation, Uncategorized Tagged With: 2024, Expo, software testing tools, Test Automation

Uncover Stockholm: 10 Top Experiences

February 28, 2024 by Fiona Nic Dhonnacha

As you prepare to delve into 4 great days of learning at EuroSTAR, don’t forget to plan some time to explore the vibrant cityscape just outside the conference doors. Stockholm is brimming with character, and offers a plethora of experiences that seamlessly blend its rich history with modern innovation. Explore history, culture, and nature – from scenic waterways, to gorgeous green spaces and mighty museums. Here are the top 10 things to do in Stockholm.


Gamla Stan (Old Town)

Begin your Stockholm adventure by stepping back in time at Gamla Stan. Wander through narrow cobblestone streets lined with colorful buildings, explore the Royal Palace, and soak in the medieval charm of the oldest part of the city.

Vasa Museum

Dive into maritime history at the Vasa Museum, home to the only preserved 17th-century ship in the world. Marvel at the intricate craftsmanship of the Vasa warship, which sank on its maiden voyage and was salvaged centuries later.

A boat installation at the Vasa Museum in Stockholm
The Vasa is the best-preserved seventeenth-century ship in the world

Skansen Open-Air Museum

Experience Swedish culture and heritage come to life at Skansen, the world’s oldest open-air museum. Encounter traditional Swedish dwellings, meet native wildlife, and witness craftsmen at work, providing a glimpse into Sweden’s past.

Fotografiska

Delve into the realm of contemporary photography at Fotografiska. Admire thought-provoking exhibitions from both established and emerging artists while enjoying panoramic views of Stockholm’s waterfront.

Djurgården Island

Escape the hustle and bustle of the city and retreat to Djurgården Island. Whether you fancy a leisurely stroll, a bicycle ride through lush greenery, or a picnic by the water, Djurgården offers a tranquil oasis in the heart of Stockholm.

ABBA: The Museum

A treat for ABBA fans! Immerse yourself in the world of Sweden’s most iconic pop group at ABBA: The Museum. Sing along to timeless hits, explore interactive exhibits, and unleash your inner dancing queen.

The ABBA museum in Stockholm
Have the time of your life at the ABBA museum!

Royal Djurgården Park

Embrace the serenity of nature at Royal Djurgården Park. Take a leisurely stroll along picturesque pathways, marvel at lush gardens, and encounter iconic landmarks such as the Rosendal Palace and the Kaknäs Tower.

Stureplan District

Indulge in Stockholm’s vibrant nightlife scene at Stureplan. Rub shoulders with locals and fellow conference attendees at chic bars, trendy clubs, and cozy pubs, ensuring an unforgettable evening in the Swedish capital.

City of Stockholm evening at the Nobel Museum

The City of Stockholm evening is part of the optional networking experience for EuroSTAR attendees. Step into the iconic City Hall and enjoy an enchanting evening in the Nobel Prize ceremony venue, with a Swedish buffet, drinks, and community connections.

Stockholm Sunset Cruise

As an extra bonus, enjoy an exclusive sunset cruise departing from City Hall, offering a unique perspective of Stockholm’s city skyline. Cruise capacity is limited. Secure your spot now to add this bonus to your evening.

There’s so much to explore, and lots of memories to be created – book your EuroSTAR ticket now, to join your testing peers in Stockholm this June.



Software Testing In Regulated Industries

February 27, 2024 by Lauren Payne

In today’s landscape of digital adoption and the rapid growth of software technologies, many domains leveraging technology are within regulated industries. However, with the introduction of more technology comes the need for more software—and more software testing. This article will touch on the unique attributes, challenges, and considerations of software testing within these regulated domains.

Defining “regulated” industries

While many industries have specific guidelines and domain nuances, we will refer to “regulated” industries as those that are governed by overarching regulatory compliance standards or laws. 

In most cases, these governance standards shape the depth, agility, and overall Software Development Lifecycle (SDLC): how the standards are translated into requirements and then validated.

Below is a sampling of some of these domains:

  • Healthcare
  • Manufacturing
  • Banking/Finance
  • Energy
  • Telecommunications
  • Transportation
  • Agriculture
  • Life sciences 

Unique requirements

Common characteristics that teams will likely encounter when analyzing the software quality/testing requirements in these environments include:

  • Implementation of data privacy restriction laws (like HIPAA)
  • Detailed audit history/logging of detailed system actions
  • Disaster recovery and overall data retention (like HITRUST)
  • High standards for traceability and auditing “readiness”
  • Government compliance and/or oversight (like the Food and Drug Administration / FDA)

These common regulatory requirements are critical for planning and executing testing, and for establishing the quality records essential to supporting auditing and traceability.

Testing considerations & planning

Many testers and their teams are now being proactive in using paradigms such as shift-left to get early engagement during the SDLC. As part of early requirements planning through development and testing, specialized considerations should be taken within these regulated industries.

Requirements & traceability

  • The use of a centralized test repository for both manual and automation test results is critical
  • Tests and requirements should be tightly coupled and documented
  • Product owners and stakeholders should be engaged in user acceptance testing and demos to ensure compliance
  • Test management platforms should be fully integrated with a requirement tracking platform, such as Jira

Image: The TestRail Jira integration is compatible with compliance regulations and flexible enough to integrate with any workflow, achieving a balance between functionality and integration.

Once teams have solidified a process for defining and managing requirements and traceability, it becomes imperative to ensure that quality test records are not only accessible but also restricted to those who require them.

This controlled access is crucial, particularly in auditing situations, where the accuracy and reliability of test records may play a critical role. This approach for access controls is commonly referred to as the “least privilege” principle.
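Least privilege reduces to a deny-by-default permission check. The sketch below is a generic illustration; the roles and actions are invented assumptions, not TestRail's actual access model.

```python
# Illustrative roles and the actions explicitly granted to each.
ROLE_PERMISSIONS = {
    "tester":  {"read", "execute"},
    "lead":    {"read", "execute", "approve"},
    "auditor": {"read"},
}

def can(role, action):
    """Deny by default: a role may only perform actions explicitly granted
    to it (the least-privilege principle applied to test records)."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(can("auditor", "read"))     # True: auditors may inspect records...
print(can("auditor", "execute"))  # False: ...but not run or change anything
```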

Image: With TestRail Enterprise role-based access controls, you can delegate access and administration privileges on a project-by-project basis

Test record access controls

  • Limit test management record access to the minimum required for team members
  • Ensure only current active team members have test record access
  • Implement a culture of peer reviews and approval to promote quality and accurate tests

Image: TestRail Enterprise teams can implement a test case approval process that ensures test cases meet organizational standards.

As test cases and test runs are created manually or using test automation integrations like the TestRail CLI, it is important to maintain persistent audit logging of these activities. Within regulated industries, audit requirements and “sampling” may require investigation of the history and completeness of a given test that was created and executed against a requirement.
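The essential property of such audit logging is that entries are append-only: they can be written and read, never edited or deleted. This is a generic sketch of that property, not the TestRail audit log API.

```python
import datetime

class AuditLog:
    """Minimal append-only log: entries are added and queried,
    never edited or deleted, preserving a complete test history."""
    def __init__(self):
        self._entries = []

    def record(self, actor, action, entity):
        self._entries.append({
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "entity": entity,
        })

    def history(self, entity):
        return [e for e in self._entries if e["entity"] == entity]

log = AuditLog()
log.record("chris", "created", "TC-101")
log.record("dana", "executed", "TC-101")
print([e["action"] for e in log.history("TC-101")])  # ['created', 'executed']
```

During audit "sampling", the full `history` of any test case can then be produced on demand.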

Image: TestRail Enterprise’s audit logging system helps administrators track changes across the various entities within their TestRail instance. With audit logging enabled, administrators can track every entity in their installation.

Audit history

It’s important to maintain a log that allows viewing of historical data on test case creation and execution. This supports audit readiness for requirements validation traceability.

Lastly, as teams focus on the development, testing, and delivery of software, we have to be mindful of disaster recovery and data retention of the artifacts we create. 

Just as with disaster recovery of a given system under test, the quality records for testing and release must persist to support compliance requirements and audits. Although centralized test management platforms with integrated restore capabilities are preferred, various tools and processes can be used to achieve this.

Image: TestRail Enterprise’s configurable backup and restore administration features enable administrators to specify a preferred backup time window, see when the last backup was completed, and restore the last backup taken.

Self-assessments & internal auditing

For all teams that are iterating on engineering, testing, and overall SDLC improvements, it’s important to dedicate time to perform self-assessments. 

Self-assessments in the context of software testing and quality in regulated environments can be a highly effective tool for identifying process gaps and shortcomings. 

Self-assessment/audit evaluation criteria

Examples of critical areas to include in your self-assessments or audit readiness exercises include:

  • Having full traceability via linkage of all tests to the corresponding requirements​ artifact (such as a Jira issue or defect)
  • Tests that have been planned and executed are linked to a given release event/designation
  • Failed tests for a given release or sprint are linked to a defect artifact (such as a Jira defect)
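The first and third checks above can be automated with a simple script. The test-record fields and IDs below are illustrative assumptions about how such data might be exported from a test management tool.

```python
def traceability_gaps(tests):
    """Flag tests with no linked requirement, and failed tests with no
    linked defect: two of the audit-readiness checks listed above."""
    unlinked = [t["id"] for t in tests if not t.get("requirement")]
    untracked_failures = [
        t["id"] for t in tests
        if t.get("status") == "failed" and not t.get("defect")
    ]
    return unlinked, untracked_failures

tests = [
    {"id": "T1", "requirement": "PROJ-12", "status": "passed"},
    {"id": "T2", "requirement": None,      "status": "passed"},
    {"id": "T3", "requirement": "PROJ-14", "status": "failed", "defect": None},
]
print(traceability_gaps(tests))  # (['T2'], ['T3']) -> gaps to fix before an audit
```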

Once a self-assessment or internal audit is performed, ensure that the team collects actionable information such as improvements to requirements traceability or more detailed disaster recovery documentation that can be used to improve the overall SDLC with a focus on core compliance best practices and standards.

Bottom line

Teams operating within regulated industries must account for additional considerations and requirements across the SDLC. Including these requirements early, with all team members, is critical to ensuring compliance and overall success in audits and other regulatory assessments.

Key Takeaways

  • Focus on traceability, ensure linkage of tests to requirements
  • More focus on security and access controls testing
  • Centralize all test artifacts in a repository with backups/data retention
  • Plan and execute disaster recovery validation

Watch the Testing In Regulated Industries webinar on the TestRail Youtube channel for more information on the unique challenges and characteristics of software testing in regulated industries!

Author


Chris Faraglia, Solution Architect and testing advocate for TestRail.

Chris has 15+ years of enterprise software development, integration and testing experience spanning domains of nuclear power generation and healthcare IT. His specific areas of interests include but are not limited to test management/quality assurance within regulated industries, test data management and automation integrations.

TestRail is an EXPO Gold Sponsor at EuroSTAR 2024. Join us in Stockholm.

Filed Under: Gold, Software Testing, Sponsor, Uncategorized Tagged With: 2024, EuroSTAR Conference, Expo

How to overcome common challenges in Exploratory Testing

February 20, 2024 by Lauren Payne

Exploratory testing involves testing system behaviour under various scenarios, with a predefined goal but no predefined tests. This focus on discovering the unknown makes exploratory testing both powerful and challenging.

“Exploratory testing is a systematic approach for discovering risks using rigorous analysis techniques coupled with testing heuristics.”

-Elisabeth Hendrickson

Although exploratory testing (ET) is not a new concept, its significance has increased exponentially in the dynamic field of software development. With its simultaneous learning, test design, and execution processes, ET represents a shift from the traditional, script-based testing methodologies. This approach is particularly beneficial in handling the complexities and unpredictabilities of modern software projects. It prepares testers to actively engage with the software, uncovering potential issues that scripted tests might overlook.

In exploratory testing, catching bugs is an adventure – a journey through the unknown aspects of software, where each test can reveal new insights. In the Agile world with rapid development cycles, exploratory testing stands out as a dynamic and responsive testing strategy, essential for ensuring software quality in a fast-paced environment.

Despite its advantages, exploratory testing has challenges that can interfere with its effectiveness. Testers often encounter hurdles in planning and adapting to newly discovered information, managing frequent context switches, maintaining comprehensive documentation, and effectively measuring the success of their testing efforts. Addressing these challenges is crucial for harnessing the full potential of ET. This blog will explore these common challenges and discuss how the Xray Exploratory App provides innovative solutions, enhancing the exploratory testing process and enabling testers to deliver high-quality results efficiently.

How to overcome challenges with Xray Exploratory App

The Xray Exploratory App proves to be a vital resource for successfully navigating these challenges. The tool supports the unique factors of exploratory testing, empowering testers to optimize their testing strategies while maintaining the flexibility and adaptability that exploratory testing demands. 

Planning and Learning

One of the primary challenges in exploratory testing is the balance between planning and learning. While ET is less structured than traditional testing, it still requires a level of planning to be effective. The Xray Exploratory App facilitates one of the key measures to counter this challenge and optimize your ET adoption: session-based test management (SBTM).

Testers must continuously learn from the software they are testing and adapt their approach accordingly. This requires understanding the project’s goals and the ability to quickly assimilate new information and apply it to testing strategies. One of the elements that helps with gaining the skills and experience is the structure of knowledge sharing. For example, if charters are handled as Jira stories, you get a centralized storage (a library of templates, of sorts) that has good examples which help educate any team member about the system and previous ET efforts.

Context Switching

Testers in an exploratory setting often deal with context switches. They must juggle different aspects of the software, switch between various tasks, and respond to new findings in real-time. Managing these switches efficiently is crucial to maintain focus and avoid overlooking critical issues. Beyond common techniques like Pomodoro, you can leverage two key features of Xray Exploratory App – saving sessions locally and editing the detailed Timeline with all your findings.

Proper Documentation

Unlike scripted testing, where documentation is predefined, exploratory testing requires testers to document their findings as they explore. This can be challenging as it requires a balance between detailed documentation and the fluid nature of exploratory testing. Testers need to capture enough information to provide context and enable replication of failure and future test repeatability without getting bogged down in excessive detail.

Xray Exploratory App addresses this challenge with the easily created chronological history of not just text notes but also screenshots, videos, and issues/defects created in Jira during the session (which accelerates the feedback loop).

Reporting and Measuring Success

Another significant challenge in exploratory testing is effectively reporting and measuring success. Traditional testing metrics often do not apply to ET, as its dynamic nature does not lend itself easily to quantitative measurement. Defining meaningful metrics to capture the essence of exploratory testing’s success is crucial for validating its effectiveness and value within the broader testing strategy. In many cases, such definitions would be very company-specific.

The good news: the seamless integration between Xray Exploratory App and Xray/Jira lets you leverage centralized test management features, such as real-time reporting on several possible metrics (e.g. number of defects, elapsed time). That improves visibility and makes it possible to clearly determine the status not only of exploratory testing but of all testing activities.

For instance, if we want to track defects/issues resulting from exploratory testing, we can see them linked to the test issue in Jira/Xray, which will then allow us to check them in the Traceability report. 

Overall, these challenges, though daunting, are manageable. With the right approach and tools, testers can navigate the complexities of exploratory testing, turning these challenges into opportunities for delivering insightful and thorough software testing.

Future outlook of Exploratory Testing

Exploratory Testing is becoming more acknowledged as an indispensable part of the testing strategy, especially given the limitations of conventional scripted testing. The ability of ET to adapt and respond to the complexities and nuances of modern software development is exceptional. As we look towards the future, several key trends are emerging that are set to shape the landscape of exploratory testing.

Artificial Intelligence (AI)

AI has the potential to significantly transform exploratory testing by automating certain aspects of ideation and, more so, data analysis processes. Leveraging AI in software testing in the correct way can enhance the tester’s capabilities, enabling them to focus on more complex testing scenarios and extract deeper insights from test data. AI can assist in identifying patterns and predicting potential problem areas, making ET more efficient and effective.

Integrations with other tools

The future of exploratory testing will see greater integration with various development, testing, and business analysis tools. This compatibility will streamline the testing process, enabling seamless data flow and communication across platforms. One of the pain points this trend will aim to address is losing time in writing automation scripts as a result of ET. Such integrations will enhance the overall efficiency of the testing process, allowing testers to leverage a wider range of tools and resources during their exploratory sessions more easily.

Enhanced collaboration

As software development becomes more collaborative, exploratory testing also adapts to facilitate better teamwork. Tools like the Xray Exploratory App incorporate features that promote collaboration among testers and between testers and other stakeholders. This collaborative approach ensures a more comprehensive understanding and coverage of the software, leading to better testing outcomes.

Compliance and reporting

Exploratory testing is increasingly used to support compliance, in areas like non-functional requirements testing (security and performance), to help find more convoluted flaws and bottlenecks in intricate software systems. The trend is not surprising, as the cost of compliance is increasing from both the customer and the regulatory perspective.

With the increasing emphasis on compliance and accountability in software development, exploratory testing has to evolve to provide more robust reporting and documentation capabilities. The ability to generate detailed and meaningful reports is essential, and tools like Xray are focusing on enhancing these aspects to meet the growing compliance demands.

The Xray Exploratory App is at the forefront of these changes, continually adapting and evolving to meet the future demands of exploratory testing.

Chart new heights in testing with Xray Exploratory Testing App

Exploratory Testing has become indispensable in our increasingly sophisticated and customer-centric digital landscape. Its importance has expanded across various sectors, including e-commerce, healthcare, and finance, highlighting the universal need for high-quality software experiences. The unique approach of ET, with its focus on discovering the unknown through rigorous analysis and testing heuristics, positions it as a key strategy in addressing the complexities of modern software systems.

The Xray Exploratory App stands out as a vital resource in harnessing the full potential of exploratory testing. The tool enhances the testing process by addressing the everyday challenges of planning, context switching, documentation, and reporting. It enables testers to navigate the intricacies of ET with greater efficiency and effectiveness, ensuring comprehensive coverage and insightful test results.

Explore the capabilities of the Xray Exploratory App and see firsthand how it transforms the exploratory testing experience. Dive into the world of enhanced software testing with Xray and discover the difference it can make in delivering superior software quality.

Author


Ivan Filippov, Solution Architect for Xray.

Ivan is passionate about test design, collaboration, and process improvement.

Xray is an EXPO Platinum partner at EuroSTAR 2024, join us in Stockholm.

Filed Under: Exploratory Testing, Platinum, Software Testing, Sponsor, Uncategorized Tagged With: 2024, EuroSTAR Conference, Expo, software testing conference, software testing tools

The Silver Bullet for Testing at Scale

August 21, 2023 by Lauren Payne

Thanks to Testory for providing us with this blog post.

Testing has always been a bottleneck in the development process. Since product teams often sacrifice time spent testing, the workload testers face ebbs and flows.

Your company’s testers most likely know what it’s like to work weekends and evenings when there’s a release coming up. At points like those, they generally have to take on low-level work to make sure they check everything and deliver a high-quality product. But that overworks them and leads to burnout.

Product teams often think about the silver bullet: how do you scale testing (increase capacity) instantly without just throwing money at the problem?

Before we answer that question, however, we should take a step back and look at the big picture. What challenges are inherent to testing?

Testing requirements by role

Requirement                        CTO   Product manager   Head of testing
Faster time to market              Yes   Yes               –
Budget optimization                Yes   Yes               –
Product quality for customers      Yes   Yes               Yes
Peak loads                         –     –                 Yes
Routine tasks                      –     –                 Yes
Variety of testing environments    –     –                 Yes

Every role has its own problems. How do you solve them all at the same time?

A few years ago, we took a systematic approach to testing challenges, eventually coming up with a product for the largest IT company in our region. The solution married a variety of ML and other algorithms with traditional IT tools (Tracker, Wiki, TMS) and thousands of performers scattered across different time zones. That eliminated the bottleneck. With a dozen product teams online, they could scale testing or remove it altogether based on need.

On the one hand, we’re constantly improving our algorithms to give better feedback faster. On the other, our automated system selects professional testers who guarantee that same great result.

Another advantage our system offers is that it stands up well to load spikes around the clock rather than just during regular working hours.

Let’s look at an example. In February 2023, a large customer handed Testory a process that included 2240 hours of work, 1321 of which were outside business hours.

As you can see on the graph, the load placed on testers was anything but even. There are a thousand reasons why that could be. Some peaks outpaced the capacity of a full-time team working regular hours, though expanding the team would have resulted in team members sitting around the rest of the time.

All that makes sense on the graph. The red line represents hours, with eight full-time employees sufficient to cover the total of 65. As you can see on the graph, the load was frequently heavier than that, meaning the team of eight wouldn't have been up to the task, though there were also times when they wouldn't have had enough work.
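The mismatch the graph illustrates can be sketched with invented numbers (these are not the customer's real figures): a team sized for the average load simultaneously misses the peaks and sits idle in the troughs.

```python
# Hypothetical hourly demand for testers, illustrating uneven load.
# Numbers are invented for the sketch, not taken from the real project.
team_capacity = 8  # full-time testers available each hour

hourly_demand = [2, 3, 12, 15, 6, 1, 0, 10, 14, 4, 2, 9]  # testers needed per hour

# Hours of work the fixed team cannot absorb during peaks:
uncovered = sum(max(0, d - team_capacity) for d in hourly_demand)
# Paid tester-hours with nothing to do during troughs:
idle = sum(max(0, team_capacity - d) for d in hourly_demand)

print(uncovered, idle)  # → 20 38
```

However the capacity line is drawn, one of the two numbers stays positive, which is exactly why on-demand scaling beats a fixed headcount here.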

How does it work?

The customer embeds crowd testing in their development pipeline, calling the process from their TMS as needed and running regression testing in our product with external testers.

When they submit work for crowd testing, our algorithms scour our pool to select the best performers in terms of knowledge, speed, and availability, then distribute tasks so we can complete a thorough product test in the shortest possible time. We then double-check the result, compile a report, and send it to the customer. That's how we fit N hours of work into N/X hours.
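The selection-and-distribution idea can be sketched roughly as a greedy scheduler: rank testers by a score, keep the top few, and always hand the next (largest) task to the least-loaded tester. The names and the scoring formula below are hypothetical illustrations, not Testory's actual algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Tester:
    name: str
    skill: float  # 0..1 match with the task domain (assumed metric)
    speed: float  # throughput multiplier (assumed metric)
    assigned: list = field(default_factory=list)  # hours of tasks taken on

def distribute(task_hours, testers, top_k):
    """Spread task hours across the top_k best-scoring testers."""
    ranked = sorted(testers, key=lambda t: t.skill * t.speed, reverse=True)[:top_k]
    # Longest-task-first, each task to the currently least-loaded tester.
    for hours in sorted(task_hours, reverse=True):
        target = min(ranked, key=lambda t: sum(t.assigned))
        target.assigned.append(hours)
    # Wall-clock time is the busiest tester's load: roughly N/X when loads even out.
    return max(sum(t.assigned) for t in ranked)

testers = [Tester("a", 0.9, 1.0), Tester("b", 0.8, 1.2), Tester("c", 0.5, 0.9)]
makespan = distribute([4, 3, 3, 2], testers, top_k=2)
print(makespan)  # 12 hours of tasks finish in 6 wall-clock hours across 2 testers
```

With perfectly even splits the wall-clock time is exactly N/X; in practice the greedy heuristic only approximates it, which is why a real scheduler would also weigh availability and time zones.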

The customer can scale up testing whenever they want, then scaling back and paying nothing when they don’t have work to do. It’s an on-demand service.

Performers enjoy an endless stream of work that’s perfect for their skill set in addition to some that pushes them to learn and grow. For our part, we offer testers special skill- and knowledge-based courses, stable payment that depends on how many tasks they complete, and the opportunity to work from anywhere in the world.

What’s the bottom line?

We free up resources our clients can rededicate toward interesting and higher-risk work, help out with peak loads, and streamline costs.

How can you get that for yourself?

Testory is a separate process and product born to help large companies. It’s for anyone trying to quickly deliver IT products that solve user problems. If you’re interested in leveraging our experience, get in touch, and we’ll build a roadmap for you.

Author

Mary Zakharova

Mary has been working with crowdtesting products for 6 years. She started her career as a community manager in a testers’ network.

In recent years, Mary has been in charge of the Testory product.

Testory is an EXPO Exhibitor partner at EuroSTAR 2023

Filed Under: Software Testing, Uncategorized Tagged With: 2023, EuroSTAR Conference

Orchestrated Testing Within Continuous Delivery

August 7, 2023 by Lauren Payne

Thanks to Sixsentix for providing us with this blog post.

Over the last few years, the market has put great effort into delivering solutions as fast as possible. The software has transitioned from having a supportive role in business to becoming a crucial part of the business itself. For instance, e-banking platforms have enabled clients to complete the job on their own. For many companies, this meant building new layers of software applications on top of the previous system of record, like CRMs or ERPs.

So, what’s the motivation behind this? Firstly, they are trying to differentiate themselves from the competition with faster and more flexible solutions. Secondly, they are disrupting the market with new innovative solutions. To go back to the banking example, many financial institutions are nowadays creating new brands (companies, applications, services, etc.) to target new segments of the market or even to create new niche markets.

The Problem: Excessive focus on the system of innovation (and disregard for other systems)

On the one hand, it can be challenging to facilitate a quality release in a short period of time when building a new service on top of systems of record. For example, systems of record can be heavily impacted by regulatory mandates or built on legacy technologies with old software architecture patterns.

Then, companies have rolled out the Agile Delivery Frameworks used in the systems of innovation, expecting to have the same outcome. But is this really possible? Are all the organizations able to become the next Spotify? In our experience, it’s not so easy. There are some serious challenges that need to be overcome:

  • The systems coexist, but some of our clients do not even notice that.
  • The same Agile Delivery framework does not suit all the systems, even in the same company or organization.

The Solution: Orchestrating testing between all systems

On the other hand, a new app can be built overnight within the system of innovation. Thus, time constraints, as well as the level of dependencies, are crucial attributes for a faster release across the three system platforms.

So, what do we do? Should we slow down the innovation to onboard the systems of differentiation and record onto our model? Definitely not!

Sixsentix’s approach is to use QA, and especially the test architecture discipline, as the orchestrator between systems. The main purpose of test architecture is to prepare the systems of differentiation and record to keep up with the pace of the system of innovation, or even increase it! Our client portfolio consists of mid-sized and large companies from diverse business domains with one thing in common – most of them have developed new systems of differentiation and innovation very quickly. Here are some of the crucial lessons we’ve learned so far:

  • Risk-based testing brings two main benefits when testing applications within the system of record. On one side, it plays the role of guardian of quality for the system of record. On the other side, it helps the system of innovation to get faster evidence and, consequently, make early decisions whether to release to production or not.
  • Each system needs a different type of testing strategy, and each test strategy must consider the coexistence with the other systems. One testing strategy (i.e., approach, infrastructure, tooling) does not fit all the systems.
Source: Sixsentix’s adaptation of Gartner’s PACE Layered Application Strategy 

The Sixsentix Way: Using test architecture service to bridge the gaps

To further illustrate these ideas, let us consider the situation at one of our client companies, where we identified a lot of dependencies between the system of innovation (i.e., mobile apps) and the system of record (i.e., core business CRM).

Before implementing our test architecture service, we spotted the following symptoms:

  • Overload of delivery backlogs
  • Dependencies between agile teams consumed almost all the development capacity
  • Delivery objectives (time to market) could not be accomplished
  • Detection of side effects in the production environment
  • Huge effort spent consolidating test evidence for audit-relevant systems

But after implementing the service, we could observe a number of improvements:

  • Quality Assurance supports faster releases with risk-based testing
  • Test automation degree was massively improved, allowing Continuous Testing
  • Audit-relevant test evidence is delivered efficiently thanks to methodological test coverage
  • Through business-facing testing, the dependencies are better understood, and therefore the backlogs of all three systems are better aligned and prioritized
  • On an organizational level, shift-left has been enabled

This perspective on frequent SDLC challenges is the result of Sixsentix’s experience consulting on and operationalizing QA solutions at large-scale organizations. If you wish to find out more about how test architecture can help bridge gaps between the three systems, find us at the Sixsentix booth. We look forward to discussing this topic and exchanging ideas about QA and software testing with you at the EuroSTAR conference!

Author

Sixsentix

Sixsentix is a leading provider of Software Testing Services, QA Visual Analytics and Reporting, helping enterprises to accelerate their Software Delivery. Our unique risk-based Testing and QACube ALM Reporting and Dashboards provide businesses with unprecedented quality and transparency across Software Delivery projects for faster time-to-market. Sixsentix customers include the largest banks, financial services, insurance, and telecom providers, among others. Sixsentix Onsite and Nearshore (SWAT) services deliver optimized testing outcomes at significantly lower costs and help customers scale to keep pace with digitalization.

Sixsentix is an EXPO Exhibitor at EuroSTAR 2023, join us in Antwerp.

Filed Under: Test Automation, Uncategorized Tagged With: EuroSTAR Conference

Why Crowdtesting Should be an Imperative Pillar of Quality Assurance

August 2, 2023 by Lauren Payne

Thanks to MSG for providing us with this blog post.

Users are looking for products that inspire – or at least don’t bother them

Future generations – all of them digital natives – will no longer enter business relationships as traditional customers. Changed demands and constant transformation through digitalization are turning customers into users. But where no human interaction can create trust, dispel doubts, and answer questions, the product alone is in the spotlight and must convince in a very short time, within a reduced attention span.

Attractive, easy to use, and – best of all – with a broad range of functions.

Constantly available and nearly unlimited offerings are no longer disruptive but common standards. This applies to products, services, and public offerings alike. So, whatever your offer is, you must make sure users find it attractive, easy to use, and equipped with a suitable range of functions.

The users – not a homogeneous mass

Another challenge is to address different target groups and to create a digital infrastructure that covers their different needs equally. Generations Y and Z, who hold the purchasing power and demand of the future, expect modern forms of interaction and to purchase products and services fully digitally. This is the "everything is now" generation, no longer tied to long-term contracts and used to getting whatever they are looking for on demand.

The competition among web offerings, which compete without ties and with the promise of a "change of supplier in minutes", meets this need. The time span in which to inspire or disturb new users is accordingly very short, not least because tolerance for errors decreases with the rising use of digital products. By now, most users have gained so much experience with apps and online products that they have clear expectations of functions and usability. If these expectations are disappointed, they simply download the next app. And even if that is sometimes tied to opening an account, today this can be done quickly enough and with reasonable effort.

The subjective experience counts

As good as product design and functionality may be, the product experience is and remains subjective. Every product will always create a subjective use case for the user, and this must work in order to leave a positive experience.
A subjective use case could be a user who carries out transactions exclusively on a mobile device while commuting and expects, for instance, a banking app to be compatible with that device. The app should be so intuitive to use that external distractions do not disrupt the user flow, and ideally the data flow should adequately handle the switch from 3G/4G mobile networks to WLAN. If all this fits, the experience is consistently positive.

This in turn not only brings the advantage that the individual user is satisfied, but providers also benefit from the fact that an experience is always communicated to others.

Position yourself on the market through assured quality

By assuring product quality, you can influence your positioning on the market towards an outstanding product experience. This requires ensuring the following:

  • The smooth functionality of the product on the most popular devices in the market.
  • The provision of the appropriate range of functions with the right characteristics for the target group.
  • Covering as many subjective use cases as possible to avoid negative surprises after go-live.

While the first point can still be tested internally and in the laboratory, for example with emulated devices, as part of verification, the other two points can only be tested as part of validation.

Crowdtesting offers solutions

Crowdtesting is the validation of digital products involving your target group – remotely via the internet. Leaving this rather rigid definition behind, this method offers good tools to meet the three challenges of digital assurance. It allows positioning towards the upper right quadrant of digital excellence and thus can serve to stand out from the masses with an outstanding product.

Figure 1: The quadrants of digital excellence

Crowdtesting helps you cover subjective use cases and perceptions in any phase of the life cycle. You get direct insight into whether your target group feels heard, and you can adapt at any time. In addition, with the variety of devices and mindsets added to your testing process, you will be able to find functional and technical issues that wouldn't be uncovered in the lab. And if there are no functional problems, that's worth a pat on the back for your development team and builds confidence in your product.

Feedback will always be a part of this testing process and even if the insights and “bugs” gathered in this process may not be fixed, they can be incorporated into the further development of the product. In the meantime, the results help customer support to prepare for possible enquiries and to create meaningful FAQ lists.

Conclusion – Crowdtesting is useful in any phase of a product's lifecycle

It gives good insight into the technical and functional stability of your product and provides the opportunity to understand (future) users from the beginning and to develop with a focus on their added value. You don't have to wait for feedback from customers who may be disappointed once, never return to your site, and never use your app a second time.

Author

Johannes Widmann

Johannes Widmann has been working in the field of software quality and digital assurance for over 22 years. He has been a dedicated disciple of crowdtesting since 2011 and built up passbrains, one of the leading service providers for crowd-sourced quality assurance. Since January 2021, passbrains has been part of the msg group.

MSG is an EXPO Exhibitor at EuroSTAR 2023, join us in Antwerp

Filed Under: Quality Assurance, Uncategorized Tagged With: 2023, EuroSTAR Conference

We’ve got the Stage – You’ve got the Story

July 17, 2023 by Lauren Payne

The 2024 EuroSTAR Software Testing Conference is going to Stockholm, Sweden.

If you’ve ever wanted to speak at EuroSTAR and share your story on Europe’s largest stage, the Call for Speakers is open until 17th September.

Now is the time to start thinking about what you'd like to share. What experiences will help others in the room? Perhaps it's something that didn't work at first, but then you found a solution. It might be technical, or it might be about core skills.

EuroSTAR 2024 Programme Chair, Michael Bolton, is inviting you to explore the theme, ‘What Are We Doing Here?’ – it’s a wide-open question, with lots of possible interpretations and related questions.

Talk Type

We’ll share more on these later but for now, there will be three main types of talks:

  • Keynote – 60mins (45mins talk + 15mins Q&A)
  • Tutorials/Workshops – Full-day 7 hours OR Half-day 3.5 hours incl breaks
  • Track Talks – 60mins (40mins talk + 20mins valuable discussion)

Who?

Calling all testing enthusiasts and software quality advocates – whether you’re a veteran, or new to testing – to share your expertise, successes (and failures) with your peers; and spark new learnings, lively discussions, and lots of inspiration.

Think about what engages you in your work and engrosses you in testing, the challenges you've faced, or the new ideas you've sparked. Get in front of a global audience, raise your profile, and get involved with a friendly community of testers.

Here’s everything you need to know about taking the first step on to the EuroSTAR stage.

We invite speakers of all levels to submit their talk proposals and take the biggest stage in testing!

What Do I Need To Submit?

A clear title, a compelling abstract and 3 possible learnings that attendees will take from your talk – this is the main part of your submission. We’ll ask you to add in your contact details and tick some category boxes but your title, talk outline & key learnings are the key focus.

Topics for EuroSTAR 2024

Michael is calling for stories about testers’ experiences in testing work. At EuroSTAR 2024, we embrace diversity and value a wide range of perspectives. We’re most eager to hear stories about how you…

  • learned about products
  • recognised, investigated, and reported bugs
  • analysed and investigated risk
  • invented, developed, or applied tools
  • developed and applied a new useful skill
  • communicated with and reported to your clients
  • established, explained, defended, or elevated the testing role
  • created or fostered testing or dev groups
  • recruited and trained people
  • made crucial mistakes and learned from them
START Your Submission

Mark Your Calendar

Here are some essential dates to keep in mind:

  • Call for Speakers Deadline: 17 September 2023
  • Speaker Selection Notification: Late November 2023
  • EuroSTAR Conference: 11-14 June 2024 in Sweden

If you’re feeling inspired, check out the full Call for Speakers details. EuroSTAR attracts speakers from all over the world, and we can get over 450 submissions. Each year, members of the EuroSTAR community give their time to assess each submission, and their ratings help our Programme Committee select the most engaging and relevant talks. If you would like help writing a proposal, see this handy submissions guide, and you can reach out to us at any time.

EuroSTAR 2024 promises to be an extraordinary experience for both speakers and attendees. So, submit your talk proposal before 17 September 2023 and let’s come together in the beautiful city of Stockholm next June. Together we’ll make EuroSTAR 2024 an unforgettable celebration of software testing!

Filed Under: EuroSTAR Conference, Software Testing, Uncategorized Tagged With: EuroSTAR Conference
