
Guest Article: Static Analysis in Medical Device Software (Part 3) — Formal specifications

Sunday, March 7th, 2010

Pascal Cuoq at Frama-C continues his discussion of static analysis for medical device software. Also see Part 1 and Part 2.

In May 2009, I alluded to a three-part blog post on the general topic of static analysis in medical device software. The ideas I hope will emerge from this third and last part are:

  1. Formal specifications are good,
  2. Partial formal specifications are underrated, and
  3. One should never commit in advance to writing anything, however easy it seems at the time that it will be.

Going back to point one, a "functional specification" is a description of what a system/subsystem does. Here, I mostly intend to talk about formal versions of functional specifications, which only means that the description is written in a machine-parsable language with an unambiguous grammar. The separation between formal and informal specifications is not always clear-cut, but this third blog entry will try to convince you of the advantages of specifications that can be handled mechanically.

Near the bottom of the V development cycle, "subsystem" often means software: a function, or a small tree of functions. A functional specification is a description of what a function (respectively, a tree of functions) does and does not do (the time it takes to execute, for instance, is usually not considered part of the functional specification, although whether it terminates at all can belong in it; it is only a matter of convention). The Wikipedia page on "Design by Contract" lists the following as making up a function contract, and while the term is loaded (it may evoke Eiffel or run-time assertion checking, which are not specifically the topic here), the three bullet points below are a good categorization of what functional specifications are about:

  • What does the function expect, what rules should the caller obey at the time of calling it?
  • What does the function guarantee, what is the caller allowed to expect from the function's results?
  • What properties does the function maintain?

I am tempted to point out that invariants maintained by a function can be encoded in terms of things that the function expects and things that the function guarantees, but this is exactly the kind of hair-splitting that I am resolved to give up on.

The English sentence "when called, this function may modify the global variables G and H, and no other" is almost unambiguous and rigorous — assuming that we leave aliasing out of the picture for the moment. Note that while this is technically something that the function ensures on return (for any variable other than G or H, the value of that variable after the call is the same as its value before the call), the property can be thought of more intuitively as something that the function maintains.

The enthusiastic specifier may like the sentence "this function may modify the global variables G and H, and no other" so much that he may start copy-pasting the boilerplate part from one function to another. Why should he take the risk of accidentally introducing an ambiguity? Re-writing from memory may lead him to forget the "may" auxiliary when he does not intend to guarantee that the function will overwrite G and H each time it is called. As with contracts of a more legal nature, copy-pasting is the way to go. The boilerplate may also include jargon that makes it impossible to understand for someone who is not from the field, or even from the company, whence the specifications originate. Ordinary words may be used with a precise domain-specific meaning. These are all reasons not to paraphrase, and to reuse the specification template verbatim.

The hypothetical specifier may particularly appreciate that the specification above is not only carefully worded but also that a list of possibly modified globals is part of any wholesome function specification. He may — rightly, in my humble opinion — endeavor to use it for all the functions he has to specify near the end of the descending branch of the V cycle. This is when he is ripe for the introduction of a formal syntax for functional specifications. According to Wikipedia, Robert Recorde introduced the equal sign "to auoide the tediouſe repetition of [...] woordes", and the sentence above is a tedious repetition begging for a sign of its own to replace it. When the constructs of the formal language are well-chosen, the readability is improved, instead of being diminished.

A natural idea for expressing the properties that make up a function contract is to use the same language as the implementation. Being a programming language, it is suitably formal; the specifier, even if he is not the programmer, is presumably already familiar with it; and the compiler can transform these properties into executable code that checks that preconditions are properly established by callers, and that the function does its own part by returning results that satisfy its postconditions. This choice can be recognized in run-time assertion checking, and in Test-Driven Development (where unit tests and expected results are written before the function's implementation and are considered part of the specification).
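
As a minimal illustration of this approach (the function and its contract are invented for the example), a precondition and a postcondition can be written directly in C and checked at run time with assert:

    #include <assert.h>
    #include <limits.h>

    /* Hypothetical example: the contract is written in the implementation
       language itself and checked at run time. */
    int abs_int(int x)
    {
      assert(x != INT_MIN);           /* precondition the caller must respect */
      int result = (x < 0) ? -x : x;
      assert(result >= 0);            /* postcondition the function guarantees */
      return result;
    }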

Still, the choice of the programming language as the specification language has the disadvantages of its advantages: it is a programming language; its constructs are optimized for translation to executable code, with the intent of describing algorithms. For instance, the "no global variable other than G and H is modified" idiom, as expressed in C, is a horrible way to specify a C function. Surely even the most rabid TDD proponent would not suggest it for a function that belongs in the same C file as a thousand global variable definitions.

A dedicated specification language has the freedom to offer exactly the constructs that make it pleasant to write specifications in it. This means directly including constructs for commonly recurring properties, but also providing the building blocks that make it possible to define new constructs for advanced specifications. So a good specification language has much in common with a programming language.

A dedicated specification language may, for instance, offer a clause such as the following (sketched here in the syntax of ACSL, the specification language understood by Frama-C)
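    /*@ assigns G, H; */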

as a synonym for the boilerplate function specification above, and while this syntax may seem wordy and redundant to the seat-of-the-pants programmer, I hope to have convinced you that in the context of structured development, it fares well in comparison with the alternatives. Functional specifications give static analyzers that understand them something to chew on, instead of having to limit themselves to the absence of run-time errors. This especially applies to correct static analyzers, as emphasized in part 2 of this oversize blog post.

Third parties who contact us are often focused on using static analysis tools to do things they weren't doing before. It is a natural expectation that a new tool will allow you to do something new, but a more accurate description of our activity is that we aim to let people do the same specification and verification work they are already doing (for critical systems), only better. In particular, people who discover tools such as Frama-C/Jessie or other analysis tools based on Hoare-Floyd precondition computations often think these tools are intended for verifying, and can only be useful for verifying, complete specifications.

A complete specification for a function is a specification where all the properties expected of the function's behavior have been expressed formally as a contract. In some cases, there is only one function (in the mathematical sense) that satisfies the complete specification. This does not prevent several implementations of this unique mathematical function from existing. More importantly, it is nice to be able to check that the C function being verified is one of them!

Complete specifications can be long and tedious to write. In the same way that a snippet of code can be shorter than the explanation of what it does and why it works, a complete specification can sometimes be longer than its implementation. And it is often pointed out that these specifications can be so large that once written, it would be too difficult to convince oneself that they do not themselves contain a bug.

But just because we are providing a language that would allow you to write complete specifications does not mean that you have to. It is perfectly fine to write minimal formal specifications accompanied by informal descriptions. To be useful, the tools we are proposing only need to do better than testing (the most widely used verification technique at this level). Informal specifications traditionally used as the basis for tests are not complete either, and there may be bugs both in the informal specification and in its translation into test cases.

If anything, the current informal specifications leave out more details and contain more bugs, because they are not machine-checked in any way. The static analyzer can help find bugs in a specification in the same way that a good compiler's sanity checks and warnings help avoid the stupidest bugs in a program.

In particular, because they are written in a dedicated specification language, formal specifications have better composition properties than, say, C functions. A bug in the specification of one function is usually impossible to overlook when trying to use said specification in the verification of the function's callers. Take an example from ACSL by Example, a tutorial/library authored by our colleagues at the applied research institute Fraunhofer FIRST: the function max_element finds the index of the maximum element in an array of integers. Its specification is quite long, and a bug in this specification may be hard to detect. The formal version is complicated by the fact that it specifies that, if there are several maximum elements, the function returns the index of the first one.
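
To give a flavor of what this looks like, here is a simplified sketch in the spirit of the ACSL by Example contract (plain int and size_t stand in for the tutorial's generic value_type and size_type, and the exact clauses may differ from the real thing):

    #include <stddef.h>

    /*@
      requires \valid_read(a + (0 .. n-1));
      assigns \nothing;
      ensures n == 0 ==> \result == n;
      ensures n > 0  ==> 0 <= \result < n;
      ensures n > 0  ==>
        \forall integer i; 0 <= i < n ==> a[i] <= a[\result];
      ensures n > 0  ==>
        \forall integer i; 0 <= i < \result ==> a[i] < a[\result];
    */
    size_t max_element(const int *a, size_t n);

The last ensures clause is the one that pins down which index is returned when several elements are equal to the maximum.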

Next in the document, a function max_seq that returns the value of the maximum element in an array is defined on top of max_element. The implementation is straightforward; a sketch along the lines of the ACSL by Example version (with the same simplified types as above) looks like this:
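    /*@
      requires n > 0;
      requires \valid_read(a + (0 .. n-1));
      assigns \nothing;
      ensures \forall integer i; 0 <= i < n ==> a[i] <= \result;
      ensures \exists integer i; 0 <= i < n && a[i] == \result;
    */
    int max_seq(const int *a, size_t n)
    {
      return a[max_element(a, n)];
    }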

The verification of max_seq builds on the specification for max_element. This provides additional confidence: the fact that max_seq was verified successfully makes a bug in the specification of max_element unlikely. Not only that, but if the (simpler, easier to trust) specification for max_seq were the intended high-level property to verify, it wouldn't matter that the low-level specification for max_element was not exactly what the specifier intended (say, if there was an error in the specification of the index returned in case of ties): the complete system still has no choice but to behave as intended in the high-level specification. Unlike a compiler that lets you put together functions with incompatible expectations, the proof system always ensures that the contract used at the call point is the same as the contract that the called function was proved to satisfy.

And this concludes what I have to say on the subject of software verification. The first two parts were rather specific to C, and would only apply to embedded code in medical devices. This last part is more generic — in fact, it is easier to adapt the idea of functional specifications for static verification to high-level languages such as C# or Java than to C. Microsoft is pushing for the adoption of its own proposal in this respect, Code Contracts. Tools are provided for the static verification of these contracts in the premium variants of Visual Studio 2010. And this is a good time to link to this video. Functional specifications are a high-level and versatile tool, and can help with the informational aspects of medical software as well as with the embedded side of things. I would like to thank again my host Robert Nadler, my colleague Virgile Prevosto and department supervisor Fabrice Derepas for their remarks, and twitter user rgrig for the video link.

Guest Article: Static Analysis in Medical Device Software (Part 2) — Methodology

Thursday, June 4th, 2009

Pascal Cuoq at Frama-C continues his discussion of static analysis for medical device software. This is part 2 of 3. Part 1 is here.

In the second part of this article I write about methodology, where tools and engineering come together to produce software that you can entrust with lives. I do not avoid talking about the work my colleagues and I do, but I do mention the work of others too.

The layman often assumes that it must be impossible to make software that works as intended. It is a natural conclusion to draw from one's experience with personal computers, mobile phones, car on-board computers and vending machines. The layman's opinion is biased because for most people, embedded software is the means, rather than an end, and therefore is never noticed when it works. For instance, my own digital reflex camera contains a fair amount of software. Still, I have never observed it to deviate from the behavior described in the thick manual that came with it -- there are some particularities that I would call functional bugs, but since the manual describes them at length, as the old joke goes, they are features. Software that works is not impossible. It is only that, as the late Douglas Adams would have put it, software that doesn't work is slightly cheaper. Moderately large software systems that work well enough not to be noticed can be produced. It is "only" a matter of having simple rules that enforce readability of the developed code by people who have not written it, and an appropriately sized budget for code reviews and quality assurance (usually testing, but bug-finding software analyzers are used here too, and they would be used more if their strengths were not so widely misunderstood). This statement does not cover very large codebases and concurrent systems, which we still aren't very good at building reliably but keep trying anyway.

The specification for my digital camera is the thick manual, although there are also internal specifications for sub-components of the camera's software that I, as an end user, do not get to see. The internal specifications naturally tend to be more technically detailed as they deal with smaller and smaller sub-components. As components are assembled, it becomes possible to check that the corresponding specification for the sub-assembly is satisfied. This method is called the V-Model of software development, although one wonders why it needs such a high-sounding name: almost every manufactured physical object has been built from sub-components with pre-determined specifications since time immemorial.

This has nothing to do with the production of critical code. Or rather, the two components above, development according to an enforced development standard, and quality assurance (debugging), remain but become a small part of the picture in the development of critical code. Two additional components, at least as large as the first two, are the certification and the authority.

Certification is the additional, reflexive examination of the development, verification (i.e. the software conforms to specification) and validation (i.e. the specification corresponds to the actual need) processes.

One difference between software and hardware is that it is harder to make sure that software satisfies the original requirements. This was made very clear in the article that prompted this series of blog posts. And this is why critical software particularly needs certification. Certification is not so much the testing of the software against the specification (this is called "debugging" and it's not specific to critical software) but a cohesive list of arguments leading to the conclusion that whatever testing has been done was sufficient to find any possible flaw with the expected confidence. A certification file does not state "we used this development tool and we ran these 1000 tests for this component" but "we used this development tool, and here are the reasons why we think it's acceptable. Here are the reasons why we think that these 1000 tests are sufficient to ensure that this component works as expected (and, incidentally, here are the tests and their results)". As you would expect, when a static analyzer is used, the certification file does not read "Here is the tool we used and the results we obtained" but "Here is the tool we used. Here is how we established that this tool could reliably be used to ensure this aspect of the requirements, (and incidentally, here are the results we obtained)".

The authority defines the expectations for the certification, and studies the certification file once submitted. In the end, it all comes down to convincing the competent, financially disinterested humans who check the certification file that all the necessary steps have been taken to ensure the safety of the critical device.

We now arrive at the first statement from the article that I disagree with: that in static analysis of software, "achieving a 100% recall rate is rare, if not impossible, and may only be possible at the cost of a very high number of false positives".

First, a 100% recall rate corresponds to the absence of false negatives, which is a perfectly achievable objective. Static analyzers with this property are called "correct" (or "sound"). These adjectives have meaning only in a context where it is clear what bugs are being looked for and what assumptions are made to this end. Assuming this context is unambiguous, they mean that as long as the tool's assumptions are respected, no bug in the analyzed program is left undetected.

Two examples of commercially available static analyzers that have been designed from the ground up to have no false negatives are PolySpace, now distributed by The MathWorks, and Astrée, soon to be distributed by AbsInt. Allow me, however, to translate the sentence "Astrée is capable of producing exactly zero false alarms" from that web page: "false alarms" mean "false positives". Astrée, by design, does not have any false negatives. If it failed to notice a possible run-time error, it would be a bug which, I am sure, would be promptly fixed. The "no false positives" claim only means that it does not have any false positives on some pre-determined representative pieces of software. It is certainly not a guarantee, since, as stated earlier, it is a mathematical impossibility for a static analyzer to reach a verdict for any analyzed program with neither false positives nor false negatives. The best way to determine the number of false positives you can expect Astrée to produce for your code is, as with any other analyzer, to try it.

Now, except in the magical world of marketing, it is indeed true that the fewer false negatives are allowed in the results, the more false positives can be expected to be found. This dilemma is the same one that occurs every time something can only imperfectly be detected. Considering the target readership for the blog of my kind host, I do not think that I need to harp on this. But, if the medical test analogy does not work for you, consider the example of the shoestring eyelets on my shoes, which cause the metal detectors in international airports to ring almost every time (false positive) because it has become unacceptable in the last few years to have the slightest risk of a weapon going undetected through the controls (false negative).

Every system has its assumptions: in the case of the airport detector, one is that a weapon is assumed to include some metal. This is a good opportunity to introduce in passing another distinction: "safety" works against the physical world (failures, birds flying into reactors, ...). "Security" works against conscious opponents who are actively trying to use your assumptions to their advantage. This distinction can be applied to software analysis but it is more general than that. Still, even if what you are doing is categorized as "safety", if it's critical, you have to be aware of your assumptions. So the two disciplines are not always very different in philosophy, although they often aim at different objectives.

Thanks to a number of recent advances on the theoretical side, as well as the increase in the computational power available in the workstations where the analysis takes place, you can expect the number of false positives given by a correct static analyzer on your embedded code to be contained. It would be prudent, however, to disbelieve claims that there won't be any.

In addition to the above two static analyzers, I can mention Caveat, another static analyzer without false negatives that has been developed in the laboratory where I work. Caveat is commercially available, although we do not advertise it because it is targeted at very high-criticality software, which does not concern many projects (we consider it to be most useful for code with a criticality comparable to level A, the highest in the DO-178B avionics certification standard). Since I am in a mood to take single sentences from web pages and comment on them, please allow me to do it once more: the sentence "[Using Caveat, Airbus France's] goal is to detect errors as soon as possible in the development cycle, and not to prove the software" was written at a time when Airbus France was indeed experimenting with Caveat as an R&D project. This sentence is now completely obsolete. Caveat has been officially used for part of the verification of part of the software of the Airbus A380 — that is, precisely, to establish beyond doubt certain properties of the analysed source code, in substitution for the unit tests whose role would have been to establish these properties in a more traditional process. As the DO-178B standard mandates, Caveat has been qualified by Airbus as a verification tool to be used for the certification of this particular software.

Also from this laboratory, there is Frama-C, which is available too since it's Open Source. Frama-C is a research prototype to which experimentation with new ideas has shifted (while Caveat is still being maintained for Airbus and any industrial user who requires it). Frama-C is more of a framework for static analyzers than a static analyzer per se. The analyzers that have been developed in Frama-C so far rely on various techniques, but they are all without false negatives. Some of these analyzers are now reliable enough to be considered for R&D experimentation. Caveat was a research prototype too at the time Airbus decided to use it in production and to make it part of its certification process. Whether or not the tool you intend to use comes in a cardboard box, you will have to explain the measures you took to ensure that it was the right tool to use for what you were using it for. What it is called matters less than the measures you took.

The second statement from the article I disagree with is that "static analysis is intended to supplement and improve the effectiveness of existing best practices in testing. It should not be thought of as a substitute for device developers' current testing activities". Of course, if you are using a bug-finding static analyzer with false negatives, you will have a hard time justifying why you removed a single test from those you would have run without the analyzer. Such a tool is most useful in the debugging phase, to identify and remove bugs as quickly as possible, not in the verification phase of a process subject to certification. But when Airbus used Caveat for the A380, it was precisely as a substitute for existing unit tests. The fact that Caveat is designed not to have false negatives was one of the arguments in the validation of Caveat as a verification tool to establish, with the required confidence, the properties that were previously guaranteed by these unit tests.

Another way to look at this question is the following: bug-finding static analyzers (those with false negatives) have the potential to be better for debugging than sound analyzers (without false negatives), because by accepting some false negatives they can reduce the number of false positives and save the user time. The debugging phase can be, and often is, lightly covered in the certification because it is followed by verification, the important second check. In a certification-covered verification process, the bugs have already been ironed out and the engineers are not trying to find more bugs but to prove that there aren't any. In this context, any positive is going to be a false positive, even if it comes from the most cautious heuristic tool (a tool that makes a lot of effort to warn only when it is quite certain that a problem exists), and the heuristic tool's contribution to the bottom line becomes harder to quantify.

The statement that there aren't any bugs left when certification starts may look like an exaggeration, but it isn't. If the certification requirements are stringent, changing any part of the code (to fix a bug) means starting the verification from scratch. This is a protection against, among other things, the dangers of C that were alluded to in the first part of this article. If you find bugs at that stage, you are not doing it optimally from the economic point of view (and you are starting afresh a heavy, certification-covered verification process in which, hopefully for you, you will not discover any new bug this time).

I would like to acknowledge the careful editing of my host, the suggestions of my colleague Virgile Prevosto in writing part 1, and the remarks of both my supervisor Benjamin Monate and David Delmas (Airbus France) concerning the present part 2 of this article. The third and last part of this series will be on the topic of formal functional specifications, one of the under-used new tools that have a contribution to make in the verification of critical software. In conclusion, here is a quoted statistic in the style, if not the spirit, of Douglas Coupland's Generation X:

Number of human lives whose loss has been attributed to software failure of a civil airplane: 0

Guest Article: Static Analysis in Medical Device Software (Part 1) — The Traps of C

Friday, May 15th, 2009

Any software-controlled device that is attached to a human presents unique and potentially life-threatening risks. A recent article on the use of static analysis for medical device software prompted Pascal Cuoq at Frama-C to share his thoughts on the subject. This is part 1 of 3.

The article Diagnosing Medical Device Software Defects Using Static Analysis gives an interesting overview of the applicability of static analysis to embedded medical software. I have some experience in the field of formal methods (including static analysis of programs), and absolutely none at all in the medical domain. I can see how it would be desirable to treat software involved at any stage of a medical procedure as critical, and coincidentally, producing tools for managing critical software has been my main occupation for the last five years. This blog post constitutes the first part of what I have to say on the subject, and I hope someone finds it useful.

As the article states, in the development of medical software, as in many other embedded applications, C and C++ are used predominantly, for better or for worse. The "worse" part is an extensive list of subtle and less subtle pitfalls that seem to lie in every corner of these two languages.

The most obvious perils can be avoided by restricting the programmer to a safer subset of the language -- especially if it is possible to recognize syntactically when a program has been written entirely in the desired subset. MISRA C, for instance, defines a set of rules, most of them syntactic, that help avoid the obvious mistakes in C. But only a minority of C's pitfalls can be eliminated so easily. A good sign that coding style standards are no silver bullet is that there exist so many. Any fool can invent theirs, and some have. The returns of mandating more and more coding rules diminish rapidly, to the point that overdone recommendations found in the wild contradict each other, or in the worst case, contradict common sense.

Even when written according to a reasonable development standard, a program may contain bugs liable to result in run-time errors. Worse, such a bug may, in some executions, fail to produce any noticeable change, and in other executions crash the software. This lack of reproducibility means that a test may fail to reveal the problem, even if the problematic input vector is used.

A C program may yet hide other dangerous behaviors. The ISO 9899:1999 C standard, the bible for C compiler implementers and C analyzer implementers alike, distinguishes "undefined", "unspecified", and "implementation-defined" behaviors. Undefined behaviors correspond roughly to the run-time errors mentioned above. The program may do anything if one of these occurs, because the standard does not define what it should do. A single undefined construct may cause the rest of the program to behave erratically in apparently unrelated ways. Proverbially, a standard-compliant compiler may generate a program that causes the device to catch fire when a division by zero happens.

Implementation-defined behaviors represent choices that are not imposed by the standard, but that have to be made by the compiler once and for all. In embedded software, it is not a viable goal to avoid implementation-defined constructions: the software needs to use them to interface with the hardware. Additionally, size and speed constraints for embedded code often force the developer to use implementation-defined constructs even where standard constructs exist to do the same thing.
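
For instance, reading a memory-mapped status register typically relies on converting an integer constant to a pointer, a conversion whose result is implementation-defined. A hedged sketch, with a made-up register address:

    #include <stdint.h>

    /* The address is invented for the example; the integer-to-pointer
       conversion is implementation-defined, but fixed once the compiler
       and target are chosen. */
    #define STATUS_REG (*(volatile uint32_t *)0x40021000u)

    int device_ready(void)
    {
      return (STATUS_REG & 0x1u) != 0;
    }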

However, in the development of critical systems, the underlying architecture and compiler are known before software development starts. Some static analysis techniques lend themselves well to this kind of parameterization, and many available tools that provide advanced static analysis can be configured for the commonly available embedded processors and compilers. Provided that the tests are made with the same compiler and hardware as the final device, the existence of implementation-defined behaviors does not invalidate testing as a quality assurance method, either.

Unspecified behaviors are not treated as seriously as they should be by many static analysis tools. That's because, unlike undefined behaviors, they cannot set the device on fire. Still, they can cause different results from one compilation to the next, from one execution to the next, or even, when they occur inside a loop, from one iteration to the next. Like the trickiest of run-time errors, they lessen the value of tests because they are not guaranteed to be reproducible.

The "uninitialized variable" example in the list of undesirable behaviors in the article is in fact an example of unspecified behavior. In the following program, the local variable L has a value, it is only unknown which one.

Computing (L-L) in this example reliably gives a result of zero.

Note: For the sake of brevity, people who work in static analysis have a tendency to reduce their examples to the shortest piece of code that exhibits the problem. In fact, in writing this blog post I realized I could write an entire other blog post on the deformation of language in practitioners of static analysis. Coming back to the subject at hand, of course, no programmer wants to compute zero by subtracting an uninitialized variable from itself. But a cryptographic random generator might for instance initialize its seed variable by mixing external random data with the uninitialized value, getting at least as much entropy as provided by the external source but perhaps more. The (L-L) example should be considered as representing this example and all other useful uses of uninitialized values.

Knowledge of the compilation process and lower-level considerations may be necessary in order to reliably predict what happens when uninitialized variables are used. If the local variable L had been declared with type float, the actual bit sequence found in it at run-time could happen to represent IEEE 754's NaN or one of the infinities, in which case the result of (L-L) would be NaN.

Uninitialized variables, and more generally unspecified behaviors, are indeed less harmful than undefined behaviors. Some "good" uses for them are encountered from time to time. We argue that critical software should nevertheless not exhibit any unspecified behavior at all. Uses of uninitialized variables can be excluded by a simple syntactic rule, "all local variables should be initialized at declaration", or, if material constraints on the embedded code make this price too high to pay, with one of the numerous static analyzers that reliably detect any use of an uninitialized variable. Note that because of C's predominant use of pointers, it may be harder than it superficially appears to determine whether a variable is actually used before being initialized, even in ordinary programs.

There are other examples of unspecified behaviors not listed in the article, such as the comparison of addresses that are not inside the same aggregate object, or the comparison of an invalid address to NULL. I am in fact still omitting details here; see the carefully worded §6.5.8 in the standard for the actual conditions.

An example of the latter unspecified behavior is (p == NULL) where p contains an invalid address computed as t+12345678 (t being a char array with only 10000000 cells). This comparison may produce 1 when t happens to have been located at a specific address by the compiler, typically UINT_MAX-12345677, and 0 in all other cases. If there is an erroneous behavior that manifests itself only when this condition produces 1, a battery of tests is very unlikely to uncover the bug, which may remain hidden until after the device has been widely deployed.
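
In code, the situation described above might look like the following minimal sketch (the array size and offset are those of the text; the function name is arbitrary):

    char t[10000000];

    int p_is_null(void)
    {
      char *p = t + 12345678;   /* invalid address, far beyond the end of t */
      return p == NULL;         /* result depends on where the compiler placed t */
    }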

An example of comparison of addresses that are not in the same aggregate object is the comparison (p <= q), when p and q are pointers to memory blocks that have both been obtained by separate calls to the allocation function malloc. Again, the result of the comparison depends on uncontrolled factors. Assume such a condition made its way by accident into a critical function. The function may have been unit-tested exhaustively, but the unit tests may not have taken into account the previous sequence of block allocations and deallocations that results in one block being positioned before or after the other in the heap. A typical static analysis tool is smarter, and may consider both possibilities for the result of the condition, but we argue that in critical software, the fact that the result is unspecified should in itself be reported as an error.
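
A minimal sketch of such an accidental condition (the function name is arbitrary, and error handling for malloc is omitted for brevity):

    #include <stdlib.h>

    int blocks_in_order(void)
    {
      char *p = malloc(10);
      char *q = malloc(10);
      int before = (p <= q);    /* depends on where the allocator placed the two blocks */
      free(p);
      free(q);
      return before;
    }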

Another failure mode for programs written in C or any other algorithmic language is the infinite loop. In embedded software, one is usually interested in an even stronger property than the absence of infinite loops: the verification of a predetermined bound on the execution time of a task. Detection of infinite loops is a famous example of an undecidable problem. Undecidable problems are problems for which it is mathematically impossible to provide an algorithm that, for any input (here, a program to analyze), eventually answers "yes" or "no". People moderately familiar with undecidability sometimes assume this means it is impossible to make a static analyzer that provides useful information on the termination of an analyzed program, but the theoretical limitation can be worked around by accepting a little imprecision (false negatives, or false positives, or both, in the diagnostic), or by allowing that the analyzer itself will, in some cases, not terminate.

The same people who recognize termination of the analyzed program as an undecidable property, for which theory states that a perfect analyzer cannot be made, usually fail to recognize that precisely detecting run-time errors or unspecified behaviors is an undecidable problem as well. For these questions, too, it is mathematically impossible to build an analyzer that always terminates and emits neither false positives nor false negatives.

Computing the worst-case execution time is at least as hard as verifying termination, and therefore undecidable too. That's the theory. In practice, there exist useful static analyzers that provide guaranteed worst-case execution times for a piece of software. They achieve this by limiting the scope of the analysis, firstly, to the style of code that is common in embedded software, and secondly, to the one sub-task whose timing is important. This kind of analysis cannot be achieved using the source code alone. The existing analyzers all use the binary code of the task at some point, possibly in addition to the source code, a sample of the processor to be used in the device, or only an abstract description of the processor.

This was part one of the article, where I tried to provide a list of issues to look for in embedded software. In part two, I plan to talk about methodology. In part three, I will introduce formal specifications, and show what they can contribute to the issue of software verification.