You’re pulling together your software team for a project.
Maybe you’ve written your architecture description and design documentation, and you’ve had all the UML diagrams reviewed against the requirements. Perhaps you have simple prototype software, or have a legacy code base that needs to be carried forward against new requirements…or perhaps you are about to write the first line of code of a grand design.
Your development team gets underway, knocking off the bullet list items or agile stories. The capabilities start coming, but how can you ensure that the software being produced is high quality and is capable of being maintained?
Testing and gathering critical feedback is what makes product development a success.
Long ago, the term “unit testing” referred to developer-driven, one-off testing of code. The developer might informally write a stub to exercise a particularly complex piece of code and push some test cases through it, changing conditions and recompiling for each test case they thought of. The stub would be discarded, or if kept, its instructions would often be long lost. QA would then take the code for component, functional, or system testing, all of which are typically done without visibility into the internal algorithms. If the code needed modification later, there would be little “unit testing” of the changes, especially if they were made by someone new to the code. There is rarely enough time or resources available for this type of “unit testing” when fixing bugs that have found their way to late stages or, worse, out into the field.
More recently, test frameworks have been developed to make these unit tests maintainable and reusable. Developers can write the tests before they code the modules, and the tests can run automatically every time code is checked into the repository, rejecting a submission when a test fails. These frameworks are simplest to apply to object-oriented languages, where the testing can focus exclusively on a single object, but they can be applied to procedural languages such as C as well.
Virtually all higher-level languages have automated software test capabilities readily available, many of them free or relatively low cost. Most can be run whenever the code is compiled, and they generally do not embed themselves in the production build, so they do not bloat the shipped code.
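As a minimal sketch of what such a framework looks like in practice, here is a test written with Python’s built-in unittest module. The function under test, parse_temperature, is a hypothetical example invented for illustration, not code from any particular project:

```python
import unittest

def parse_temperature(raw: str) -> float:
    """Hypothetical module under test: parse a sensor reading like '21.5C'."""
    if not raw.endswith("C"):
        raise ValueError("expected a Celsius reading, e.g. '21.5C'")
    return float(raw[:-1])

class ParseTemperatureTest(unittest.TestCase):
    """Tests kept alongside the code and re-run on every check-in."""

    def test_parses_a_valid_reading(self):
        self.assertEqual(parse_temperature("21.5C"), 21.5)

    def test_rejects_malformed_input(self):
        with self.assertRaises(ValueError):
            parse_temperature("21.5F")

# Run the suite programmatically, as a check-in hook might;
# a failing result is what lets the repository reject the submission.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ParseTemperatureTest)
)
```

Because the tests live in the repository with the code, they survive developer turnover instead of being discarded like the one-off stubs of the past.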
No one knows where the edges of an algorithm are better than its author. Simple edge cases, such as where a counter or memory buffer overflows, can be protected up front rather than chased down later in component- or system-level testing.
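For example, the author of a fixed-size buffer knows exactly where the wrap-around happens and can pin that edge down with a test. This RingBuffer is a hedged, hypothetical sketch for illustration only:

```python
class RingBuffer:
    """Hypothetical fixed-capacity buffer; the oldest item is dropped when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []

    def push(self, item):
        if len(self.items) == self.capacity:
            self.items.pop(0)   # drop the oldest entry on overflow
        self.items.append(item)

# The edge only the author is sure about: the fourth push into a
# three-slot buffer must evict the first item, not grow the buffer.
buf = RingBuffer(capacity=3)
for value in (1, 2, 3, 4):
    buf.push(value)
assert buf.items == [2, 3, 4]
assert len(buf.items) == buf.capacity
```

A system-level test might never drive the buffer past its capacity; a unit test written by the author exercises that boundary deliberately.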
If a code maintainer has no way of quickly determining whether they have broken a piece of code, they may be inclined to change the code as little as possible.
Consider a scenario where the requirements on a software component have changed, and the original developer is no longer available. The component internal design may be using design patterns that were best suited for the original requirements, but are inefficient or inappropriate for the new requirements.
Yet if someone maintaining the code has to wait for QA to validate a given change, they may shy away from such a large-scale algorithm change, since any problems introduced cannot be detected until much later (at higher cost). With good unit tests, the developer can make these changes with greater confidence that they will not destabilize the code.
For software engineers attempting to use a module of code, a sample unit test can be an easy example of how the module is expected to be called.
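For instance, a test like the following doubles as documentation of the expected call sequence. The RateLimiter module is a hypothetical example invented for illustration:

```python
class RateLimiter:
    """Hypothetical module: allow at most `limit` calls per window."""

    def __init__(self, limit: int):
        self.limit = limit
        self.count = 0

    def allow(self) -> bool:
        if self.count < self.limit:
            self.count += 1
            return True
        return False

def test_rate_limiter_usage():
    # The test reads as a usage example: construct with a limit,
    # then call allow() before each guarded operation.
    limiter = RateLimiter(limit=2)
    assert limiter.allow() is True
    assert limiter.allow() is True
    assert limiter.allow() is False   # third call in the window is refused

test_rate_limiter_usage()
```

A newcomer can read the test body and immediately see how the module is constructed and called, without digging through its internals.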
What static code analysis can accomplish differs based on the source code language, but typically static code analysis is run on source code or object code and examines it for potential defects such as buffer overflows, null pointer dereferences, uninitialized variables, unreachable code, and resource leaks.
Running static code analysis is like having a code reviewer sanity check the code before burning human capital on it, and often it can teach your intermediate team advanced nuances of your language better than articles or textbooks can.
Static code analysis cannot find every basic bug by any means, but it does find a fair share. And if developers are able to run your selected tool(s) themselves during development, there is no loss of face either.
One heads-up, though – if the tools are first applied to an existing code base late in the project, the callouts can be overwhelming. If this is your only option for applying these tools, be ready for your team to work through a large list of callouts to diagnose them, and don’t lose heart. The callouts can be prioritized, and even just the high-priority ones can save your team a great deal of effort compared with analyzing the equivalent complicated QA bugs even later in the project timeline.
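That triage step can be as simple as sorting the findings by severity and working from the top. The finding records and severity names below are assumptions for illustration, not the output of any particular tool:

```python
# Hypothetical findings as (severity, file, message) records,
# as might be exported from a static analysis report.
findings = [
    ("low", "util.c", "variable shadows outer declaration"),
    ("high", "parser.c", "possible buffer overflow"),
    ("medium", "io.c", "return value of fread() ignored"),
    ("high", "auth.c", "use of uninitialized variable"),
]

# Work the high-severity callouts first; defer the rest.
priority = {"high": 0, "medium": 1, "low": 2}
triaged = sorted(findings, key=lambda f: priority[f[0]])

assert triaged[0][0] == "high" and triaged[1][0] == "high"
```

Even this crude ordering lets a team bank the highest-value fixes first instead of stalling on the full list.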
“Software is not quality software until it passes testing.” - Danny Aponte, IPS Senior Director, Software Engineering
It is much more difficult to address a software bug after release than to catch it with software testing beforehand.
The cost of a bug goes up the further down the SDLC (Software Development Life Cycle) the bug is found. When a bug is found in production, the code must go back to the beginning of the SDLC so the agile development cycle can restart for that fix.
The Systems Sciences Institute at IBM has reported that “the cost to fix an error found after product release was four to five times as much as one uncovered during design, and up to 100 times more than one identified in the maintenance phase.” This doesn’t even include the frustration and embarrassment.