[Theory] Static Program Analysis

2020-03-16  何幻

0. Preface

Static program analysis is the art of reasoning about the behavior of computer programs without actually running them. This is useful not only in optimizing compilers for producing efficient code but also for automatic error detection and other tools that can help programmers.

As is known from the work of Turing and Rice, all nontrivial properties of the behavior of programs written in common programming languages are mathematically undecidable.

This means that automated reasoning about software generally must involve approximation.

One of the key challenges when developing such analyses is how to achieve precision and efficiency that are high enough for the analyses to be practically useful.

We take a constraint-based approach to static analysis where suitable constraint systems conceptually divide the analysis task into a front-end that generates constraints from program code and a back-end that solves the constraints to produce the analysis results.
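As a rough illustration of this division (a minimal Python sketch with hypothetical names, not the book's API), the front-end below maps a program to equality constraints between type variables, and the back-end solves them without ever looking at the program:

```python
# Minimal sketch of the constraint-based architecture (hypothetical names).
# Front-end: each assignment "lhs = rhs" yields the constraint [[lhs]] = [[rhs]].
def generate_constraints(program):
    return [(f"[[{lhs}]]", f"[[{rhs}]]") for lhs, rhs in program]

# Back-end: a toy solver that merges variables constrained to be equal;
# it sees only the constraints, never the program code itself.
def solve(constraints):
    groups = {}
    for a, b in constraints:
        merged = groups.get(a, {a}) | groups.get(b, {b})
        for v in merged:
            groups[v] = merged
    return groups

# The analysis is the composition of the two phases.
program = [("y", "x"), ("z", "y")]   # y = x; z = y;
print(solve(generate_constraints(program)))
# [[x]], [[y]], [[z]] all end up in one equivalence class
```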

The analyses that we cover are expressed using different kinds of constraint systems, each with its own constraint solver.


1. Introduction

Dijkstra: “Program testing can be used to show the presence of bugs, but never to show their absence.”

A program analyzer is a program that takes other programs as input and decides whether or not they have a certain property.

Rice’s theorem is a general result from 1953 that informally states that all interesting questions about the behavior of programs (written in Turing-complete programming languages) are undecidable.

At first, this seems like a discouraging result; however, it does not prevent approximative answers. While it is impossible to build an analysis that would correctly decide a property for any analyzed program, it is often possible to build analysis tools that give useful answers for most realistic programs. As the ideal analyzer does not exist, there is always room for building more precise approximations (which is colloquially called the full employment theorem for static program analysis designers).

Program analyses that are used for optimizations typically require soundness. If given false information, the optimization may change the semantics of the program. Conversely, if given trivial information, then the optimization fails to do anything.

Although soundness is a laudable goal in analysis design, modern analyzers for real programming languages often cut corners by sacrificing soundness to obtain better precision and performance, for example when modeling reflection in Java.

It is impossible to build a static program analysis that can decide whether a given program may fail when executed. Moreover, this result holds even if the analysis is only required to work for programs that halt on all inputs. In other words, the halting problem is not the only obstacle; approximation is inevitably necessary.


2. A Tiny Imperative Programming Language

Abstract syntax trees (ASTs) as known from compiler construction provide a representation of programs that is suitable for flow-insensitive analyses, for example, type analysis, control flow analysis, and pointer analysis.

For flow-sensitive analysis, in particular dataflow analysis, where statement order matters, it is more convenient to view the program as a control flow graph, which is a different representation of the program code.

A control flow graph (CFG) is a directed graph, in which nodes correspond to statements and edges represent possible flow of control.
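For instance, a CFG can be represented directly as nodes with successor edges. The following minimal Python sketch (illustrative names, not from the book) builds the CFG of a small branching program:

```python
# Each CFG node holds one statement and its possible successors.
class CFGNode:
    def __init__(self, stmt):
        self.stmt = stmt
        self.succs = []        # edges: statements that may execute next

    def add_edge(self, other):
        self.succs.append(other)

# CFG for: x = input; if (x > 0) { y = 1; } else { y = 2; } z = y;
entry  = CFGNode("x = input")
branch = CFGNode("if (x > 0)")
then_  = CFGNode("y = 1")
else_  = CFGNode("y = 2")
join   = CFGNode("z = y")

entry.add_edge(branch)
branch.add_edge(then_)     # edge taken when the condition is true
branch.add_edge(else_)     # edge taken when the condition is false
then_.add_edge(join)       # control flow merges again at z = y
else_.add_edge(join)
```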


3. Type Analysis

We resort to a conservative approximation: typability. A program is typable if it satisfies a collection of type constraints that is systematically derived, typically from the program AST.

The type constraints are constructed in such a way that the above requirements are guaranteed to hold during execution, but the converse is not true. Thus, our type checker will be conservative and reject some programs that in fact will not violate any requirements during execution.

For a given program we generate a constraint system and define the program to be typable when the constraints are solvable.
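For instance (a hand-worked sketch in the book's notation, where [[E]] denotes the type variable for the expression E), the statements x = input; y = x + 1; give rise to constraints along these lines:

[[input]] = int
[[x]] = [[input]]
[[x]] = [[1]] = [[x + 1]] = int
[[y]] = [[x + 1]]

These constraints are solvable, with [[x]] = [[y]] = int, so the program is typable.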

This class of constraints can be efficiently solved using a unification algorithm.

If solutions exist, then they can be computed in almost linear time using a unification algorithm for regular terms as explained below. Since the constraints may also be extracted in linear time, the whole type analysis is quite efficient.

The unification algorithm we use is based on the familiar union-find data structure (also called a disjoint-set data structure) from 1964 for representing and manipulating equivalence relations.
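The following compact Python sketch shows unification on top of union-find (illustrative, not the book's implementation; type variables are strings, constructed types are tuples such as ("int",)):

```python
# parent links for the union-find (disjoint-set) structure
parent = {}

def find(t):
    """Return the representative of t's equivalence class, compressing paths."""
    parent.setdefault(t, t)
    root = t
    while parent[root] != root:
        root = parent[root]
    while parent[t] != root:          # path compression
        parent[t], t = root, parent[t]
    return root

def unify(a, b):
    ra, rb = find(a), find(b)
    if ra == rb:
        return
    if isinstance(ra, str):           # ra is a type variable: bind it to rb
        parent[ra] = rb
    elif isinstance(rb, str):         # rb is a type variable: bind it to ra
        parent[rb] = ra
    elif ra[0] == rb[0] and len(ra) == len(rb):
        parent[ra] = rb               # same constructor: merge the terms,
        for x, y in zip(ra[1:], rb[1:]):
            unify(x, y)               # then unify the subterms pointwise
    else:
        raise TypeError(f"cannot unify {ra} and {rb}")

# Constraints for "x = input; y = x":  [[x]] = int,  [[y]] = [[x]]
unify("x", ("int",))
unify("y", "x")
print(find("y"))                      # ("int",)
```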

The type analysis is of course only approximate, which means that certain programs will be unfairly rejected.


4. Lattice Theory

We circumvent undecidability by introducing approximation. That is, the analysis must be prepared to handle uncertain information, in this case situations where it does not know the sign of some expression, so we add a special abstract value (⊤) representing “don’t know”.

Due to undecidability, imperfect precision is inevitable.

The least upper bound operation plays an important role in program analysis.

As we shall see, we use least upper bound when combining abstract information from multiple sources, for example when control flow merges after the branches of if statements.
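As a concrete example, here is the join (least upper bound) on the sign lattice, the operation an analysis would apply at the merge point after an if statement (an illustrative Python sketch):

```python
# The sign lattice:        ⊤
#                        / | \
#                       -  0  +
#                        \ | /
#                          ⊥
BOT, TOP = "⊥", "⊤"

def lub(a, b):
    """Least upper bound of two abstract sign values."""
    if a == b:
        return a
    if a == BOT:
        return b
    if b == BOT:
        return a
    return TOP      # two distinct signs: only ⊤ is above both

# after "if (...) { x = 1; } else { x = -1; }" the branch facts merge:
print(lub("+", "-"))    # ⊤  (the sign of x is unknown at the merge point)
print(lub("+", BOT))    # +  (⊥ is the neutral element of the join)
```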

Also notice that it is important for the analysis of this simple program that the order of statements is taken into account, which is called flow-sensitive analysis.

As the lattice order, when used in program analysis, represents precision of information, the intuition behind monotonicity is that “more precise input does not result in less precise output”.
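To make this concrete, here is an illustrative monotone abstract addition on the sign lattice (a sketch, not the book's code): refining an argument, i.e. moving it down the lattice, can only refine the result, never make it less precise.

```python
def abs_add(a, b):
    """Abstract addition on the sign lattice {⊥, -, 0, +, ⊤}; monotone in both arguments."""
    if a == "⊥" or b == "⊥":
        return "⊥"              # unreachable input yields unreachable output
    if a == "0":
        return b                # adding zero preserves the sign
    if b == "0":
        return a
    if a == b and a in ("+", "-"):
        return a                # (+) + (+) = (+),  (-) + (-) = (-)
    return "⊤"                  # mixed or unknown signs: don't know

print(abs_add("+", "⊤"))   # ⊤
print(abs_add("+", "+"))   # +  (refining ⊤ to + refined the output too)
```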

A solution to an equation system provides a value from L for each variable such that all equations are satisfied.

This clearly shows that a solution to an equation system is the same as a fixed-point of its functions. As we aim for the most precise solutions, we want least fixed-points.

The fixed-point theorem
In a lattice L with finite height, every monotone function f : L → L has a unique least fixed-point denoted fix(f) defined as

fix(f) = \bigsqcup_{i \geq 0} f^i(\bot)

The theorem is a powerful result: It tells us not only that equation systems over lattices always have solutions, provided that the lattices have finite height and the constraint functions are monotone, but also that uniquely most precise solutions always exist. Furthermore, the careful reader may have noticed that the theorem provides an algorithm for computing the least fixed-point.
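The algorithm is simply to iterate f from ⊥ until the value stabilizes, computing the ascending chain ⊥, f(⊥), f(f(⊥)), ... A self-contained Python sketch, using the powerset of {-, 0, +} as a finite-height lattice and an illustrative monotone f:

```python
def fix(f, bottom):
    """Least fixed point of a monotone f on a finite-height lattice."""
    x = bottom
    while True:
        y = f(x)
        if y == x:       # f(x) = x: we have reached fix(f)
            return x
        x = y            # otherwise continue the chain ⊥ ⊑ f(⊥) ⊑ f²(⊥) ⊑ ...

# An illustrative monotone function on the powerset lattice of {-, 0, +},
# ordered by set inclusion: it always adds "+", and adds "-" once "+" is in.
def f(s):
    out = s | {"+"}
    if "+" in s:
        out = out | {"-"}
    return out

print(fix(f, frozenset()))   # frozenset({'-', '+'}) = f²(⊥)
```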

The least fixed point is the most precise possible solution to the equation system, but the equation system is (for a sound analysis) merely a conservative approximation of the actual program behavior. This means that the semantically most precise possible (while still correct) answer is generally below the least fixed point in the lattice.


Reference

Anders Møller and Michael I. Schwartzbach, Static Program Analysis, https://cs.au.dk/~amoeller/spa/
