Quality feedback loops
You can get the best results from your quality program by putting some effort into keeping your feedback loops tight.
Consider this example of a software application or firmware release. Unit tests, which developers typically run on their functions after any change, are superb. The time delay is nearly zero, and the developer can investigate any failures immediately, often with the help of the unit test framework and the debugging tools that are part of their toolchain.
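For a flavor of how tight that loop is, here's a minimal sketch using Python's built-in unittest module. The parse_volume function is a hypothetical stand-in for whatever the developer just changed:

    import unittest

    # parse_volume is a hypothetical stand-in for the code under change.
    def parse_volume(text):
        value = int(text)
        if not 0 <= value <= 100:
            raise ValueError("volume out of range: %d" % value)
        return value

    class ParseVolumeTest(unittest.TestCase):
        def test_accepts_valid_input(self):
            self.assertEqual(parse_volume("42"), 42)

        def test_rejects_out_of_range_input(self):
            with self.assertRaises(ValueError):
                parse_volume("250")

    if __name__ == "__main__":
        unittest.main()

A failing assertion points the developer at the exact function and input, minutes after the change was made.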
A short feedback loop has the huge benefit that the problem is still fresh in the developer's brain, and they don't have to re-learn things they were doing days, weeks, or months ago!
Automated tests are great, and a huge growth area. They can holistically test the whole application or firmware package and simulate user interactions. They are often run when each build finishes (unless the previous run is still going), but they may also be run daily, kicked off by hand, or at other intervals.
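As a sketch of what a per-build check might look like, here's a simple smoke test that launches the freshly built application and verifies it exits cleanly. The ./myapp binary and its --selftest flag are placeholders for your own build artifact, not a real interface:

    import subprocess

    # Launch the new build in a self-test mode and require a clean exit
    # within five minutes; a non-zero exit code fails the build.
    def smoke_test(binary="./myapp"):
        result = subprocess.run(
            [binary, "--selftest"],
            capture_output=True,
            text=True,
            timeout=300,
        )
        assert result.returncode == 0, result.stderr

    if __name__ == "__main__":
        smoke_test()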
Manual testing is still very valuable, especially for audio or video quality or any other kind of subjective evaluation. Automation can only detect the problems it was scripted to check for, but a human tester will very often catch problems unrelated to the test they're performing, for example a screen-drawing glitch that the automation wasn't programmed to watch for.
Many larger software projects have a beta program, where software is released early to a handpicked set of users who have typically signed an NDA or confidentiality agreement. It's common to select users based on the quality of their past bug reports or their value as customers. As you can see in the diagram, though, it can be challenging to fix problems reported at this late stage.
Finally, you get to your end users: the longest feedback loop of all.
It happens. You did all the testing you could, but after the product is released, the complaints start ramping up. You missed a serious problem. A "hot fix" is often implemented to get your users a working version without waiting for the next release. Starting from the build that was released, limit your fixes strictly to the one big bug. Resist putting in anything else! Extra changes just expand the risk and the amount of time you need to test.
Now, update the other tests in the loop so that next time you'll catch that problem way earlier!
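For instance, the escaped bug can be pinned down as a permanent regression test so it can never sneak back in. The Player class and the empty-playlist bug below are invented for illustration:

    import unittest

    # Player is a hypothetical stand-in for the component that shipped the bug.
    class Player:
        def __init__(self):
            self.current = None

        def load_playlist(self, tracks):
            # The escaped bug: this used to be tracks[0], which raised
            # IndexError on an empty playlist. The hot fix guards the access.
            self.current = tracks[0] if tracks else None

        def is_playing(self):
            return self.current is not None

    class EmptyPlaylistRegressionTest(unittest.TestCase):
        def test_empty_playlist_does_not_crash(self):
            player = Player()
            player.load_playlist([])  # the input that escaped to end users
            self.assertFalse(player.is_playing())

    if __name__ == "__main__":
        unittest.main()

Ideally the fix lands in the unit tests, the fastest loop, and the automated suite gets a matching end-to-end check.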
I find one danger with automated tests is the often-incorrect perception that they really cover the whole problem space. Some functionality lends itself well to automated testing (e.g., the feature "does not crash"), but other behavior is too complex to meaningfully test in a fully automated way. This can lead to "looking where the light is better": an overemphasis on the easily automated tests, leaving it to your beta testers and end users to find the difficult problems (which also tend to be the more difficult ones to analyze and fix).
Agree; it's very easy to imagine test automation as a fortified wall that no bugs can get through, but automation has profound weaknesses that can be very difficult to notice before they bite! The military has a concept of "defense in depth," where even after a force gets through the perimeter, there are defensive systems that can take care of it. A great way to think about software testing, I think. The automation is the perimeter, but you're going to need some other testing as well.