While "soundness" and "completeness" are well-defined properties of deductive systems, using such formal methods for analysis of real-world systems requires their formalisation.
However, any formalisation of a real-world system inherently introduces abstractions and assumptions. The real question is which assumptions one should make when formalising software.

For example, when reasoning about software, one often abstracts from hardware details such as battery-backed memory that persists through a crash. There are also many "reasonable" assumptions at the software level, like `malloc` never failing, that many verifiers make in order to give more helpful results than "functional correctness is not guaranteed when using malloc" (a small sketch of this follows below).

The discussed paper is especially interesting for practitioners, as it tries to state all the assumptions made in the design of DR.CHECKER's static analysis. While this means abstracting from certain realisable behaviours, e.g. "it treats each entry point independently" and therefore misses state-dependent bugs (also sketched below), stating these assumptions helps with judging the quality of the results.

The paper's authors present a static analysis whose reasoning is sound w.r.t. the stated assumptions; this is what they call "soundy". The actual post/article raises the important point that it would be helpful to define what a sound static analysis of software is, that is, to agree on the assumptions/abstractions being made.
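To make the `malloc` point concrete, here is a minimal sketch (my own example, not from the paper): code that is correct under the common verifier assumption that `malloc` never fails, but that dereferences NULL if it does.

```c
#include <stdlib.h>
#include <string.h>

/* Correct only under the assumption that malloc never returns NULL.
 * A verifier making that assumption reports no bug here, while under
 * real memory pressure the memcpy may dereference a NULL pointer. */
char *dup_buffer(const char *src, size_t len) {
    char *dst = malloc(len);   /* assumed to always succeed */
    memcpy(dst, src, len);     /* NULL dereference if the assumption fails */
    return dst;
}
```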
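And here is a sketch of what "treating each entry point independently" can miss (again a simplified example of mine, not taken from the paper): a bug that only exists because two entry points share state.

```c
#include <stddef.h>
#include <string.h>

/* Two driver-style entry points sharing state. The overflow in
 * handle_write() only arises after handle_set_len() has stored a length
 * larger than the buffer. An analysis that treats each entry point
 * independently (e.g. assuming globals still hold their initial values
 * at each entry) can miss this state-dependent bug. */
static char   buf[64];
static size_t buf_len;

void handle_set_len(size_t user_len) {
    buf_len = user_len;                 /* no bound check against sizeof(buf) */
}

void handle_write(const char *user_data) {
    memcpy(buf, user_data, buf_len);    /* overflow if buf_len > 64 was set */
}
```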
However, picking a "reasonable" set of assumptions for a "sufficiently sound" static analysis is admittedly a difficult task, and of course such a set is not provided (yet).

It seems to me that finding a common set of assumptions is less of an issue for competitive software model checkers, though, as the benchmarks used in the Competition on Software Verification (SV-COMP) implicitly prescribe which aspects have to be modelled "properly" (see the sketch below). Couldn't a similar competition for static analysis give rise to a common set of assumptions for static analysis, too?
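For readers unfamiliar with SV-COMP, a minimal sketch of a reachability task in its style (simplified, not an actual benchmark, assuming the usual `__VERIFIER_nondet_int`/`reach_error` conventions): the benchmark format itself fixes how inputs and the property are modelled, which is exactly the kind of shared assumption set I mean.

```c
/* SV-COMP-style reachability task (simplified). The conventions, e.g.
 * modelling arbitrary input via __VERIFIER_nondet_int(), implicitly fix
 * which behaviours a tool has to model to be judged correct. */
extern int __VERIFIER_nondet_int(void);
extern void reach_error(void);

int main(void) {
    int x = __VERIFIER_nondet_int();    /* any int value is possible */
    if (x == 42) {
        reach_error();                  /* property: is this call reachable? */
    }
    return 0;
}
```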