Interprocedural optimization

From Wikipedia, the free encyclopedia

The objective, as ever, is to have the programme run as swiftly as possible; the problem, as ever, is that it is not possible for a computer programme to analyse another computer programme and always correctly determine what it does. By contrast, human programmers start at the other end with a purpose, and attempt to produce a programme that will achieve it, preferably without expending a lot of thought in the process. The hope, then, is that an optimising compiler will rescue us.

For divide-and-conquer reasons, a programme is usually broken into a number of procedures which, through their generality, may waste effort in specific usages. Inter-procedural optimisation represents an attempt at reducing this loss.

Suppose you have a procedure that evaluates F(x), and your code requests F(6) and then later, F(6) again. Surely this second evaluation is unnecessary: the result could have been saved and referred to later instead. This simple optimisation is foiled the moment the implementation of F(x) is impure: that is, its execution involves reference to data other than the explicit argument 6 (data that may have changed between the two requests), or it has side effects, such as printing a message to a log, counting the number of evaluations, and so forth. Losing these side effects through non-evaluation the second time may or may not be acceptable: a design issue beyond the scope of compilers.
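A minimal Python sketch (the body of F and the evaluation counter are illustrative, not from the article) shows both the saving and how a side effect foils it:

```python
call_count = 0  # a side effect: F counts its own evaluations

def F(x):
    """Result depends only on x, but evaluation also bumps the counter."""
    global call_count
    call_count += 1
    return x * x + 1  # illustrative body

# Unoptimised: F(6) is evaluated twice.
a = F(6)
b = F(6)
assert call_count == 2

# Optimised: the first result is saved and reused.
call_count = 0
saved = F(6)
a = saved
b = saved
assert call_count == 1  # the count now differs: a side effect was lost
```

The final counter value differs between the two versions, which is exactly the behavioural change a compiler cannot silently risk unless it can prove F pure.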

More generally, aside from organisational reasons, the second reason to use procedures is to avoid duplication of code that would be the same, or almost the same, each time the actions performed by the procedure are desired. A general approach to inter-procedural optimisation would therefore be to reverse this condensation: at every place the procedure is invoked, place the code of the procedure in-line, with the actual arguments substituted for the formal parameters. Then, wave hands, and hope that the general optimisation features of the compiler will reduce the resulting massive in-line mass into something swift-running and compact. Or perhaps not.

 Programme example;
  Procedure Silly(a,x)
   if a < 0 then x:=a else x:=-6;
  end Silly;
  a:=7;
  Silly(a,x); print a;
  Silly(x,a); print a;
 End example;

Suppose that the invocations of Silly are expanded in place: the code amounts to

 a:=7;
 if a < 0 then x:=a else x:=-6; print a;
 if x < 0 then a:=x else a:=-6; print a;   %Because the parameters are swapped.

In this rather small example, the compiler could then follow the constants along the logic (such as it is) and find that the predicates of the if-statements are constant, and so...

 a:=7;
 x:=-6; print 7;
 a:=-6; print -6;

And since the assignments to a and x deliver nothing to the outside world (they do not appear in output statements, nor in subsequent calculations whose results do), there is no point in that code either, and so

 print 7;
 print -6;

is the result.
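As a cross-check on the trace, the example can be rendered in executable Python; the by-reference parameter x is modelled as a return value, and the predicate is taken as a < 0, which the printed results require. A naive, unoptimised run then produces the same output as the fully optimised form:

```python
def silly(a, x=None):
    """Rendering of procedure Silly(a, x): the new value of the
    by-reference parameter x is returned instead of assigned."""
    if a < 0:
        x = a
    else:
        x = -6
    return x

out = []
a = 7
x = silly(a)     # Silly(a,x): x is assigned from a
out.append(a)    # print a  -> 7
a = silly(x)     # Silly(x,a): parameters swapped, so a is assigned from x
out.append(a)    # print a  -> -6
assert out == [7, -6]  # matches "print 7; print -6;"
```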

This is an extreme example. More likely would be a case of many procedures, having a variety of deducible or declared properties that may enable the compiler to find some advantage. Certain parameters may be input-only to a procedure, others output-only from it. Others still, especially function-like procedures, will have certain behaviours that in specific invocations may enable some work to be avoided.
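The last point can be illustrated with a toy, hedged sketch in Python: given the (assumed) interprocedural fact that a function is pure, a pass over the syntax tree may replace its calls having only constant arguments by their results. The pass and its names are illustrative, not any real compiler's machinery:

```python
import ast  # requires Python 3.9+ for ast.unparse

def fold_pure_calls(source, pure_funcs):
    """Replace calls to known-pure functions, when every argument is a
    constant, by the precomputed constant result."""
    tree = ast.parse(source)

    class Folder(ast.NodeTransformer):
        def visit_Call(self, node):
            self.generic_visit(node)  # fold any inner calls first
            if (isinstance(node.func, ast.Name)
                    and node.func.id in pure_funcs
                    and all(isinstance(arg, ast.Constant) for arg in node.args)
                    and not node.keywords):
                result = pure_funcs[node.func.id](*[arg.value for arg in node.args])
                return ast.Constant(value=result)
            return node

    tree = Folder().visit(tree)
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)

print(fold_pure_calls("y = F(6) + F(6)", {"F": lambda x: x * x}))  # -> y = 36 + 36
```

Calls whose arguments are not constant, or to functions not declared pure, are left alone, just as a real compiler must conservatively leave them.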