Test Leaders Live Webinar Q&A from Michael Bolton
Here are questions and answers from my webinar, “Ideas About Regression Testing”. If you have follow-up questions, please feel free to contact me at [email protected]
Erna Terceor: What do you mean by “low-level checking”?
A check is an observation of a piece of software. The observation is linked to a decision rule. The observation and the decision rule can both, at least in theory, be applied by a machine. A check is like an atomic unit of testing; a check returns a bit: one or zero, yes or no, true or false, pass or fail, green or red.
A low-level check is one that checks things closer to the program code, rather than higher-level checks that probe a larger system.
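To make that concrete, here is a minimal sketch of my own (not from the webinar; the function names are hypothetical): a low-level check pairs an observation with a machine-decidable rule and yields a single pass/fail bit.

```python
def add(a, b):
    """The unit under check: a stand-in for real program code."""
    return a + b

def check_add_handles_negatives():
    observation = add(-2, 5)        # the observation
    expected = 3                    # the decision rule: observed == expected
    return observation == expected  # one bit: True (pass) or False (fail)

print(check_add_handles_negatives())  # True
```

Note that everything here — what to observe and how to decide — was fixed in advance, which is exactly what lets a machine apply it.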
I’ve written quite a bit about that in my blog; the most important posts are http://www.developsense.com/blog/2009/08/testing-vs-checking/, http://www.developsense.com/blog/2009/09/transpection-and-three-elements-of/, and http://www.developsense.com/blog/2009/11/merely-checking-or-merely-testing/
Sudhir Patil: So is it appropriate to say we don’t automate tests but we automate checks?
Appropriateness is up to you. I say that we can automate checks, because a check doesn’t require sapience: human judgement and wisdom. We don’t automate tests, because tests are more than artifacts or assertions. Tests are performances that require cycles of risk analysis, design, execution, and evaluation. Those activities require sapience, human intelligence and judgement. They cannot be automated.
Jean-Paul Varwijk: Is a check that verifies behaviour of a complex model and of which the result can only be judged by a human still a check or could we call that a test?
You can call it whatever you like, of course. By my definition, though, a check is something that doesn’t require human judgement for execution and evaluation; that gets decided in advance, as part of the design of the check. As soon as you bring human analysis into it, you’re into testing.
Jason Kowaluk: What’s the ideal ratio between check and test for regression?
I’m afraid there’s no good answer for this question, for several reasons. I’m not aware of any scale by which we could hang meaningful numbers on either tests or checks, and a ratio of the kind you’re looking for is a comparison of two numbers on the same meaningful scale. Tests and checks aren’t the same thing at all, so they’re not commensurate; and yet they overlap, in that checking is part of testing; testing dominates checking. See my testing vs. checking blog posts in my answer to Erna above. You might also want to look at my colleague James Bach’s blog posts “Manual tests cannot be automated” and “Sapient Processes”.
Moreover, every company, every development team, every product, every project, and every risk model is different. When you’re testing, your context guides your choices, and both of those change over time as the product and what you know about it change. So apart from there being no ratio, there’s no ideal either. I’d have to know something about your situation and the problems you’re facing before I could supply an answer that serves you well, and even then, it wouldn’t be a ratio per se. It would be more a set of focusing heuristics that address multiple risks, including the risk of regression.
You might find this a pretty discouraging answer so far, but there’s some hope in the kind of answer James provides in “What Size Unicorn Do You Wear?“. With that, I’m happy to continue the conversation and try to get to the heart of the problem you’re trying to solve. Please drop me a line in email ([email protected]) or on Skype (michael.a.bolton), and I’ll do my best to help you out.
Jonathan Aibel: Not really a question — I’d just like to hear Michael give a talk sometime about metrics to use to measure the regression testing effort.
Rachel Warrington: If we are using exploratory testing for regression testing, how can we measure and understand our coverage?
Karen Li: Michael, can you give us some advices on regression test metrics besides test coverage?
Thanks for the suggestion, Jonathan. I’d like to do that too. For now, I have a few suggestions, for Rachel and Karen.
I wrote a few articles on measurement for Better Software Magazine. Have a look at http://www.developsense.com/publications.html#WhatCounts; http://www.developsense.com/publications.html#IssuesAboutMetrics; and http://www.developsense.com/publications.html#ThreeKindsOfMeasurement
Also, make sure you have a look at Kaner, Cem, and Walter P. (Pat) Bond. “Software Engineering Metrics: What Do They Measure and How Do We Know?”
In that paper, Kaner and Bond define measurement as “the empirical, objective assignment of numbers, according to a rule derived from a model or theory, to attributes of objects or events with the intent of describing them”. A metric, they point out, is a measurement function; that is, a metric is the set of operations and/or formulae that allow you to hang a number on an observation.
My advice is to start by getting straight on some questions: What are you trying to observe? What attribute of the regression effort do you want to describe? What is it about the regression effort that you want to understand? Hours spent on regression testing? A story about the relative proportion of regression problems vs. other problems? Is there a reason to distinguish regression-focused testing from testing focused on other risks? Do you want to tell a story about coverage?
If so, I also wrote several articles on that question (you can find these and more at http://www.developsense.com/publications.html).
Torkjel Austad: With exploratory testing in an Agile environment that is also blessed with “regression friendly” situations… how do we set our priorities, given that in Agile things change quickly and it is at times impossible to unveil regression with exploratory testing alone?
I see two senses of “we” in that question. We-the-testers are providing a consultative service to we-the-project-team, the clients of our testing. I can’t answer your question specifically, for the same reason I can’t answer Jason’s above, but I can suggest this: if we know we’ve got a regression-friendly environment, it’s a mistake to believe that we-the-testers are going to find problems that we-the-project-team are introducing. So we-the-project-team might want to set its priority on addressing the regression-friendly environment problem.
Lots of people suggest that having programmers do plenty of checking is the way to go, and that’s probably a good idea. But there are other ways to address regression problems: collaboration, pairing, and review; refactoring of confusing or crufty code; decoupling of overly interdependent modules; and maybe slowing down. Each comes with its own costs, benefits, and limitations, but each supports the others too.
Ramapriya Bharathan: When would be an apt time to do regression testing for a software project using an Agile methodology?
Richard Bradshaw: When should we do regression testing? End of sprint? Along side sprint testing? Before a release?
I’d suggest doing regression testing as soon as you think that there’s a risk of regression and a chance of detecting it at reasonable cost. The end of the sprint is probably a good idea. Alongside sprint testing sounds good too. Before a release sounds pretty good as well. On top of all that, you might want to start creating checks before writing the code (that’s what drives the TDD idea, and BDD and ATDD too, but it’s an idea that has been around as long as computers themselves have been).
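As an illustration of the “checks before code” idea, here is a minimal sketch of my own (the names are hypothetical, not from the webinar): write the check first, then write just enough code to make it pass, and keep the check around so it can run on every build.

```python
# Step 1: write the check first. It fails until the code exists.
def check_slugify_spaces_become_hyphens():
    return slugify("Regression Testing") == "regression-testing"

# Step 2: write just enough code to make the check pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Step 3: keep the check; running it on every subsequent build
# makes it double as a low-level regression check.
print(check_slugify_spaces_become_hyphens())  # True
```

The point isn’t the particular function; it’s that the check was designed before the code, and survives as an automated guard against this behaviour regressing later.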
I think a much better question to ask, one that dominates the question “when should we do regression testing?”, would be “What don’t we know about the product right now, and how should we seek to know it?”
Griffin Jones: When should you do regression? When you want to answer the questions that that form of testing addresses.
Yes, essentially. For me, it’s more productive to think about regression as a risk, rather than as a form of testing, and then use test techniques directed at that risk. It seems to me that pretty much any form of testing is capable of finding regression problems of some kind, and no form of testing is capable of finding all of them. So diversifying your team, your tactics, and your testing seems like a good idea. Ashby’s Law of Requisite Variety is the motivation for that: http://en.wikipedia.org/wiki/Variety_%28cybernetics%29#The_Law_of_Requisite_Variety
Ruud Cox: In your presentation you focus on regression of the product due to changes to the product. But what if the environment changes? For example, if the competition has a new version of their product, the value of your product might regress compared to that product. Would you count that as regression of the product?
I don’t think many people would think of it that way. But I think it’s an interesting perspective, one that underscores the notion that regression is regression compared to something. Regression is subject to the Relative Rule and the Unsettling Rule: http://www.developsense.com/blog/2010/09/done-the-relative-rule-and-the-unsettling-rule/. Usually we think of regression as going backwards, but your broad interpretation of regression, regression with respect to the competitive space, reminds me of the Red Queen’s race in Point 2 here: http://www.developsense.com/blog/2012/09/premises-of-rapid-software-testing-part-3/.
Alon Fridman Waisbard: About tests that yield new information – when running tests on changed software, aren’t all of them trying to yield new information? (“is it STILL working?”). Isn’t all this differentiation based only on our expectations of which area in the software got worse?
Yes: testing is about trying to reveal new information. In one sense, all software is changed software. We start with nothing, and the first build is a change from that zero state. Each build after that is a change. So you could say that all testing is about finding out whether things are better or worse than they were, or better than nothing.
For me, though, testing is usually, mostly, about trying to find problems, not to show that the product is (still) working. No check can do that; no test can do that either. When you say that “the product is working”, that’s short for “The product appears to meet some requirement to some degree, in some circumstance, based on some set of conditions, some of which we’re aware of, and some of which we aren’t. This time. On my machine.” You can’t show that a product is working; you can only attempt to find problems. If you don’t find problems, a product owner may infer that the product is good enough, but it would be unwise to do that unless you also knew something about the quality.
Regression is a risk. It’s a risk among many other risks. A problem in the product is a problem, whether a regression or not. So instead of thinking of regression testing as something you do, it might be more productive to think of regression as one of the risks that you’re testing for. Mind you, if people are going to use the information that testing reveals, it might be a good idea to think of regression as a risk that you want to manage by doing more than testing alone.
If you would like to view the recording of Michael’s webinar, you can do so at the EuroSTAR YouTube Channel.