Why Use Software Verification?

Software development often proves far more expensive than expected. Evidence indicates that the earlier a defect is discovered in development, the less impact it has on both the timescales and cost. Bugs discovered late in the development cycle send costs soaring and risk the integrity and safety of a system, especially if the software has been deployed. Obviously, careful planning, organization, and a team with the correct skills all help.

The typical software development life-cycle follows the familiar waterfall process.

Since its inception in the early 1970s, the sequential waterfall model has served as a framework from which alternative software development approaches have evolved. In this model, each phase cascades into the next, which starts only when the defined goals of the previous phase have been achieved.

In practice, earlier phases often need to be revisited as developers work iteratively and requirements come together as users test prototype versions of the system. Because of this iterative approach, it is even more important to apply suitable verification and validation (V&V) techniques at each stage and within each iteration.

Requirements

The first step or level in the waterfall model is developing system requirements. This step involves close collaboration between the end user and the development team. There is much to gain by ensuring requirements are captured in full, are well understood, and are specified completely and unambiguously. Formal methods of capturing requirements are based on a mathematical approach to the specification, development, and verification of software and hardware systems.

[Figure: The derived class, IOFile, inherits attributes from both InputFile and OutputFile, which both inherit from File.]
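
The hierarchy described in this caption can be sketched directly in code. The following is a minimal C++ rendering; the member names and the use of virtual inheritance (so that IOFile contains a single shared File sub-object) are illustrative assumptions rather than details taken from the figure.

#include <string>

// Illustrative sketch of the File/InputFile/OutputFile/IOFile hierarchy;
// member names are assumptions.
class File {
public:
    explicit File(const std::string& name) : name_(name) {}
    const std::string& name() const { return name_; }
private:
    std::string name_;
};

// InputFile and OutputFile both inherit from File. Virtual inheritance is
// used here so that IOFile ends up with a single shared File sub-object.
class InputFile : public virtual File {
public:
    explicit InputFile(const std::string& name) : File(name) {}
    int read() { return 0; }          // illustrative stub
};

class OutputFile : public virtual File {
public:
    explicit OutputFile(const std::string& name) : File(name) {}
    void write(int) {}                // illustrative stub
};

// IOFile inherits attributes from both InputFile and OutputFile.
class IOFile : public InputFile, public OutputFile {
public:
    explicit IOFile(const std::string& name)
        : File(name), InputFile(name), OutputFile(name) {}
};

int main() {
    IOFile f("log.txt");
    return f.name() == "log.txt" ? 0 : 1;   // the single File attribute is shared
}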

These formal methods can vary from using commonly accepted notation to the full formality of theorem proving or automated deduction—a method of proving mathematical theorems by a computer program. Although the cost of using formal methods often limits them to applications where a high level of software integrity is required, some degree of formal specification provides benefits for any software application.
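
Even a partial formal specification pays off as soon as its preconditions and postconditions are carried into the code as machine-checkable assertions. The requirement and function below are hypothetical illustrations, assuming a simple range-clamping behaviour; they are not taken from any particular standard.

#include <cassert>

// Hypothetical requirement, stated semi-formally:
//   pre:  0 <= level <= 100
//   post: the returned level lies in the valid alarm band [10, 90]
int clamp_alarm_level(int level) {
    assert(level >= 0 && level <= 100);     // precondition from the specification
    int result = level;
    if (result < 10) result = 10;
    if (result > 90) result = 90;
    assert(result >= 10 && result <= 90);   // postcondition from the specification
    return result;
}

int main() {
    return clamp_alarm_level(5) == 10 ? 0 : 1;   // exercises the lower bound
}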

Design

Traditionally, the design of large systems follows a top-down, functional decomposition approach. The system is broken down into subsystems, which pass data and control across defined interfaces. Subsystems normally comprise a number of program modules or units, and each module has a number of routines which perform distinct tasks.

With the advent of model-based design strategies, much verification can now be automated. Unit testing, previously applied only to code, can be conducted in simulations. Given a particular precondition and series of inputs, the outputs and post-conditions can be checked. The model can even be instrumented so that intermediate conditions are checked as well, confirming the correctness of the different paths through the design; without such checks, it is quite possible to achieve the correct results for the wrong reasons through coincidental correctness.
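
The same idea can be sketched in ordinary test code. In the hypothetical example below, a small accumulator is driven from a known precondition through a series of inputs; the final output and post-condition are checked, and an intermediate condition is asserted as well, so that a coincidentally correct total produced along the wrong path would still be caught. All of the names are illustrative assumptions.

#include <cassert>
#include <vector>

// Hypothetical unit under test: accumulates inputs, saturating at a limit.
class Accumulator {
public:
    explicit Accumulator(int limit) : limit_(limit) {}
    void add(int value) {
        total_ += value;
        if (total_ > limit_) {
            total_ = limit_;
            saturated_ = true;            // records which path was taken
        }
    }
    int total() const { return total_; }
    bool saturated() const { return saturated_; }
private:
    int limit_;
    int total_ = 0;
    bool saturated_ = false;
};

int main() {
    Accumulator acc(100);                 // precondition: empty accumulator, limit 100
    const std::vector<int> inputs = {40, 40, 40};

    acc.add(inputs[0]);
    acc.add(inputs[1]);
    assert(!acc.saturated());             // intermediate condition: limit not yet reached
    acc.add(inputs[2]);

    assert(acc.total() == 100);           // expected output
    assert(acc.saturated());              // post-condition: the saturation path was taken
    return 0;
}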

Implementation

Source code generally brings the first opportunity to apply sophisticated tools to verify and test an application, and it is also the point at which many defects are introduced. Poor programming practices and informal testing contribute to software that both fails to perform correctly and is difficult to understand and maintain. While many organizations use style guides to promote conformity and encourage greater care, this is only a small step towards the level of compliance that developers of safety-critical systems strive for.

Typically a programming standard contains a large number of rules and guidelines. However, a relatively new coding standard, titled “The Power of 10: Rules for Developing Safety-Critical Code,” has been devised by the Jet Propulsion Laboratory (JPL) and is restricted to only 10 verifiable coding rules. The theory is that a small, carefully selected rule set is more likely to be enforced and will still detect many of the causes leading to software defects.
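
To give a flavour of such rules, the sketch below follows several of the published Power of 10 rules as commonly summarized: loops have a fixed upper bound, no dynamic memory is allocated, parameters and results are checked with assertions, and the return value of a non-void call is tested. The function itself is a hypothetical example, not code drawn from the standard.

#include <cassert>
#include <cstdio>

enum { MAX_SAMPLES = 16 };

// Hypothetical example written in the spirit of the Power of 10 rules.
int average(const int samples[], int count) {
    assert(samples != nullptr);
    assert(count > 0 && count <= MAX_SAMPLES);            // parameters are validated

    long sum = 0;
    for (int i = 0; i < count && i < MAX_SAMPLES; ++i) {  // fixed upper bound on the loop
        sum += samples[i];
    }
    const int result = static_cast<int>(sum / count);
    assert(result >= -32768 && result <= 32767);          // sanity check on the result
    return result;
}

int main() {
    const int samples[MAX_SAMPLES] = {4, 8, 12};          // no dynamic allocation
    const int avg = average(samples, 3);
    if (std::printf("average = %d\n", avg) < 0) {         // return value is checked
        return 1;
    }
    return avg == 8 ? 0 : 1;
}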

Programming rules and guidelines are typically classified by the aspect of the software's perceived quality that they address:

Portability: These rules highlight programming constructs whose behavior varies across different compilers.

Dependability: These rules expose unsafe code likely to impact performance or reliability.

Testability: These rules detect features that make testing more difficult.

Maintainability: These rules detect features that are difficult to understand and that complicate updates or revisions, such as redundant code and the reuse of identifier names (see the sketch after this list).

Complexity: These rules highlight complex code so that it can be simplified, or so that greater caution is exercised when it is modified.

Style: These rules ensure that code follows the same style and adheres to recognized good programming practices.
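
As an example of the kind of construct the maintainability rules are aimed at, the hypothetical C++ fragment below reuses the identifier name limit in a nested scope, shadowing the outer variable. The code compiles and runs, but it is easy to misread and awkward to modify safely.

#include <cstdio>

// Hypothetical fragment: reusing the identifier name 'limit' in an inner
// scope shadows the outer variable, a construct maintainability rules flag.
int main() {
    int limit = 10;                               // outer 'limit': maximum number of retries
    for (int i = 0; i < 3; ++i) {
        int limit = i * 2;                        // inner 'limit' shadows the outer one
        std::printf("inner limit = %d\n", limit);
    }
    std::printf("outer limit = %d\n", limit);     // still 10, which may surprise a maintainer
    return 0;
}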

Acceptance Testing

At the end of the implementation phase, software units are integrated and tested as subsystems and as part of the full system. Once integration testing is complete and the product is ready for delivery to the customer, the final phase is acceptance testing.

Acceptance testing is a formal testing process conducted under the direction of the software users to determine if the operational software system meets their needs as defined by the requirements. Each acceptance test attempts to test the functionality as required by the user. These tests are different from unit tests in that unit tests are modeled and written by the developer of each module, while the acceptance test is modeled and possibly even written with or by the customer.
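
The distinction can be made concrete with a pair of hypothetical checks: the unit test below exercises a single module function exactly as its developer wrote it, while the acceptance test phrases its check in the customer's terms against a stated requirement. Both the requirement and the function names are illustrative assumptions.

#include <cassert>
#include <string>

// Hypothetical module function, as a developer would unit test it.
int apply_discount(int price_cents, int percent) {
    return price_cents - (price_cents * percent) / 100;
}

// Hypothetical user-facing operation, as an acceptance test would exercise it.
std::string checkout_total(int price_cents, bool loyalty_member) {
    const int total = loyalty_member ? apply_discount(price_cents, 10) : price_cents;
    const int cents = total % 100;
    return "Total: $" + std::to_string(total / 100) + "." +
           (cents < 10 ? "0" : "") + std::to_string(cents);
}

int main() {
    // Unit test: written by the developer, checks the module in isolation.
    assert(apply_discount(1000, 10) == 900);

    // Acceptance test: phrased against the requirement
    // "loyalty members receive 10% off the displayed total".
    assert(checkout_total(1000, true) == "Total: $9.00");
    return 0;
}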

Author Information

Paul Humphreys is a software engineer with LDRA Ltd. responsible for the ongoing enhancement of the LDRA static analyzer. Contact him at [email protected].