There are many tools available to developers of high-integrity software to enhance productivity and code quality. Today I will look at some of them in brief. Future articles will explore some of them in more depth.

This is not intended to be an exhaustive list; I discover new tools regularly, and I don’t have time to look at even a representative fraction. So if you know of some great ones that I’ve missed, please don’t hesitate to leave feedback on this article. I don’t pretend to have used all of these tools, although I have used several of them extensively.

Editor / IDE

While this may sound like a weak beginning, the first, and in my opinion most important, tool for any developer is a capable editor/IDE. This one tool can increase productivity by 50%, or even 100%, if you learn to take full advantage of its capabilities. After more than 15 years of using the same editor, I am still learning tricks to make it work harder for me. There are many editors available, from the free and ubiquitous Emacs of UNIX fame, to the multi-platform Eclipse and SlickEdit, to the de-facto standard MS Visual Studio. Personally my favorite IDE is Multi-Edit, which I’ve used for many years, so I will frame this discussion in terms of Multi-Edit’s features; many of them will apply equally to your editor of choice. Syntax highlighting is a great tool when used properly, and Multi-Edit supports over 30 different entities, each with a unique color. This includes 4 keyword types (that you can fully define) and 8 different comment types (4 multi-line and 4 single-line). I use this for some very powerful visual cueing. For example, I define a restricted set of keywords as those that may be extensions or otherwise non-portable, so they get a hideous day-glow red color. Special comments for Doxygen and preprocessor directives can also get unique colorations. Templates and macros allow for nearly automatic coding in precisely the correct style for the organization, in any language they work in. A good tags facility allows for auto-completion, function-signature tooltips, and hyper-linking to any function, macro, or global definition. Integrated version-control support allows on-the-fly check-in or check-out.
Finally, customizable compiler integration provides a final compilation check, and an added bonus: static analysis tools such as PC-Lint and SPLint can easily be treated as compilers, allowing one-button static checks. You can verify the integrity of your code at any time and jump to errors instantly, just as you can during a build with any IDE-linked compiler. Multi-Edit also performs multi-file search and replace using regular expressions, and has fully programmable keyboard mappings, keyboard-record macros, hex editing, random-access and stack-based bookmarks, code folding, columnar cut-and-paste, and a very powerful macro language. It comes with built-in support for a huge number of languages and compilers (and extensibility for others), as well as PolyStyle, a top-quality code formatter (a.k.a. a “pretty-printer”). As I stated earlier, these are features that I make use of, and would find beneficial in any high-quality IDE. They will be examined more thoroughly in one or more future articles. For all of its merits, I will concede that Multi-Edit does have a few shortcomings. It is only available for MS-Windows platforms, and it supports only text-based editing: it has no GUI-element support, no included graphics editor, and no WYSIWYG editing of HTML.

Code Analyzers

One of the most useful static analyzers you can apply is the compiler for your project. In this context it should be run using strict ANSI/ISO settings, assuming they are supported. This should always be a requirement for high-integrity code, because other analyzers will choke on non-ANSI/ISO code. So, if you are adhering to the standard and using other analyzers, why use the compiler as an adjunct analyzer? Because differences in standards interpretation, as well as implementation-defined features, may allow the compiler to flag errors that the other analyzers miss. In a recent project I was using two analyzers plus the compiler, and in one section of particularly complex code, each reported one or more errors that were not detected by the other two. While I was initially able to correct all of the defects, after some consideration I took the hint and refactored that section of code to greatly simplify the logic. The compiler often provides another benefit, especially to the embedded developer: upon compilation many can be configured to report build attributes such as RAM usage, stack usage, and/or ROM usage. I find it useful to collect this information into a spreadsheet so that I have early indication of limits I may be approaching, and areas to consider for refactoring.

Gimpel PC-Lint is probably the best-known static analysis tool in the Windows/DOS world. PC-Lint is a highly evolved descendant of the original UNIX lint. For those who don’t live in the Windows world, Gimpel Software has a source-code distribution, dubbed FlexeLint, that can be built on any system with a C compiler. PC-Lint is a truly powerful tool, but it has so many configuration options that it can be quite overwhelming. It can analyze C or C++ with equal alacrity. It can enforce a large number of coding-standard items, and flag potential defects and risky code constructs. It even has shortcut configuration flags that enforce conformance to popular coding guides such as Scott Meyers’s Effective C++. Gimpel’s website lists all of the errors that can be detected, and also hosts a pretty cool on-line demonstration of the tool. There are things I don’t like about this tool, but in my opinion the pros far outweigh the cons, and until something better comes along, I think this tool should be in the toolbox of every C or C++ developer.

SPLint (short for Secure Programming Lint) is an extremely powerful static analyzer. It was designed explicitly for analyzing secure applications, and many of its analysis attributes cross over quite well to the safety-critical realm. In some ways it is similar to PC-Lint: it can find risky, potentially defective, or ambiguous coding constructs. It does not find many of the style issues that PC-Lint is capable of; however, it does bring some substantial benefits. Notably, SPLint supports code annotations that can be used to allow (or to insist upon) comment-enclosed elements that provide more design information in the code. This information allows the analyzer to detect places where the code deviates from the developer’s stated intent. The tool also enforces a much stronger typing model than traditional C. This capability does for C very much what SPARK Ada does for Ada, which, for the uninitiated, is a VERY big step toward safer code. While the annotations are entirely optional, they can be used to drive a much more complete analysis. The greatest weakness of SPLint is that it only supports C; it cannot parse C++. SPLint is maintained by the University of Virginia’s Center for Secure Programming, and is freely downloadable as open-source software. On a final note, because SPLint began its life as a tool for checking code against Larch specifications, it retains this capability, and so could also be used in support of a formal-methods approach. I’ve come to believe that for straight C code SPLint and PC-Lint are complementary, and there are solid reasons to use both together; but of the two, I find SPLint to be substantially more powerful as a tool for high-integrity or safety-critical software.

PolySpace, recently acquired by MathWorks, is a different kind of analyzer. It performs a pseudo-dynamic analysis using a technique called abstract interpretation. What is different about it is not simply that it can find defects, but that it can prove the absence of certain common classes of errors. While it does have a MISRA checker available that may eliminate the need for another tool, the power of this analyzer is in the checks it performs that no ordinary static analyzer can reliably handle. The result of a PolySpace analysis shows green where the tool has proven the absence of any defect; red where a defect is certain; and orange where an issue is either possible or cannot be disproven. The proof aspect is the one that should be of most interest to developers in the safety-critical realm. Since the results are mathematically proven, this tool can be used with confidence in a safety-critical environment. A DO-178B qualification package is available, and according to a source within the company, PolySpace recently received SIL-4 certification under IEC 61508. PolySpace is available for Ada, and for C and C++. One cautionary note: while C is almost a pure subset of C++, there are semantic differences. Code that has been proven safe when compiled as C may not be safe when compiled as C++, and vice versa. When analyzing such code, be sure you are using the language semantics under which the code will be compiled. While it cannot substitute for requirements-based testing, when properly integrated into the development process, it is quite possible that PolySpace could relieve the organization of more than 75% of the robustness testing that would otherwise be required of a safety-critical system. For each language package, PolySpace is available in two versions: a Developer (a.k.a. desktop) version, which will only analyze single-threaded code of limited size; and a Verifier (a.k.a. server) version, which does not impose these limits. While the less expensive Developer version provides a nice introduction to the capabilities of the tool, the Verifier version will normally provide substantial benefit on all but the simplest applications.

LDRA Testbed is widely used in aerospace environments, probably because a qualification package has long been available for verification under DO-178B. It is actually a multi-faceted tool: it includes a static code analyzer that also performs MISRA C checking, code metrics, and statistics; and a code coverage tool that performs branch coverage and MC/DC analysis, as required for Level A code under DO-178B. It has been a few years since I last used this tool, and there can be no denying that it provides powerful analysis capabilities. Unfortunately, I found that for the code I was working with, the most useful static analysis rules yielded so many false positives that the results were almost useless. Results of the instrumented code coverage and MC/DC analysis, on the other hand, were very good.

I have no direct experience with Coverity Prevent, but it has attained an excellent reputation in software analysis. Coverity is in the class of analyzers known as model checkers. Model checking is in some ways similar to abstract interpretation, but it is somewhat less developed in its ability to prove the absence or presence of errors. Coverity is very well respected for its ability to find errors overlooked by other analyzers. My only reluctance about this product comes from papers I’ve read, written by its lead developer. Some of these papers indicate that the tool intentionally filters its results to limit the number of issues it will display. As issues are fixed, more issues should be displayed; but I have to wonder about issues that are “analyzed away” as false positives. If these occur in high numbers, could they cause other issues to remain forever masked? This is a cautionary question; I will withhold my final verdict until I have the opportunity to actually try the tool.

Testwell CMT++ and CMTJava are code metrics tools for C/C++ and Java, respectively. They provide function-by-function metrics, with configurable alarm levels. The tools measure McCabe complexity, LOC counts, code volume, and comment density, and based on these measures they even make defect-density predictions for the analyzed code. Testwell also makes a code coverage tool, CTC++, that performs full coverage analysis including MC/DC. I am not sure at this time whether a DO-178B qualification package is available.

LOCMetrics is a freeware tool which performs limited code metrics, for a few popular languages. It performs a number of LOC counts, and McCabe complexity; but the biggest limitation is that the metrics are file-by-file, not function-by-function. Still, it is a very nice tool for the price.

While I have not worked with the tools from Programming Research, I have seen sample output of what they can do. They appear to combine the best features of PC-Lint and the various metrics programs, with even more configurability. They are higher priced than some of the other tools, but based on the capabilities claimed, they definitely warrant a look if you happen to be putting together a tool suite for a safety-critical project.

Version Control

There comes a time in every developer’s life when he must use a version control system. There are many to choose from, and I’ve used quite a few. However, I never found version control truly easy to use until I discovered the combination of Subversion and TortoiseSVN. Subversion is the popular open-source version control system that was specifically designed to be a better alternative to the venerable CVS; it simply does almost everything CVS does, but better. I liked Subversion and wanted a good GUI front-end for it, but I really couldn’t find one I was happy with… until I found TortoiseSVN. TortoiseSVN is not a GUI; instead, it is a Windows shell extension that makes versioning part of Windows Explorer. Without even opening a separate program I can perform pretty much any versioning operation (check-in, check-out, branching, tagging, merging, diffing, etc.), all within the context of normal MS-Windows navigation. It just couldn’t get any easier. Now I version all of my “living documents”. It has become far more than a programming tool for me; I use it for everything. Just ask me what my resume looked like on any given date, and I can show you. Now that’s a power tool.

Documentation and Manual Analysis Aids

Quite simply, Doxygen does for C and C++ what JavaDoc does for Java, and then some. It allows design documentation to be extracted and/or generated from the source code, and written to any of several formats including LaTeX, RTF, and HTML. It can generate dependency graphs, caller graphs, and callee graphs; and in addition to the dozens of available annotations, you can even create your own, for example: @satisfies (for requirements-trace tags), @modifies and @uses (for data-flow documentation), and @safety (for safety information). This can be a very powerful tool, and it is free.

Source Navigator is an open-source tool brought to you by Red Hat. It provides a very efficient means of searching or hyperlinking through code. While it provides some editing capability for the files being analyzed, it is not a full-featured editor. Its strength, as the name would suggest, is navigation of the code. Nearly the same result can be gained via Doxygen’s HTML output; however, Source Navigator provides excellent search capability, and is more like an editor in look and feel. A good editor may obviate the need for Source Navigator; I found that it provided no real advantage over Multi-Edit. Still, it is an excellent tool.

Finally, I have heard nothing but good about Understand, though I haven’t had the pleasure of using it. It was hard to determine how to classify it, since the website proclaims that “Understand for C++ is a reverse engineering, documentation and metrics tool for C and C++ source code”. Hmmm, I don’t exactly have that as a single category. It combines the metrics of CMT++, generates UML to represent the source code, automatically produces design documentation in the spirit of Doxygen, and provides the code navigation and hyperlinking of Source Navigator, all in a very reasonably priced package.

Well, there you have it: a list of tools ranging from pretty good to superb. Anyone care to share some of their favorites with us?

About Max H:
Max is a father, a husband, and a man of many interests. He is also a consulting software architect with over 3 decades experience in the design and implementation of complex software. View his Linked-In profile at