Scientific computing and image processing applications access large amounts of data in regular patterns. In order to relieve the resulting memory bottleneck, caching tries to keep recently referenced data in fast storage. This is increasingly important as the gap between processor and memory speed has widened in recent years. There are two main difficulties that cannot be dealt with by caching alone. Firstly, a cache line usually holds several values, yet often only one of them is actually used. Secondly, conflicting accesses to the same cache line cause data to be evicted which is still required. In the extreme case, data is replaced on every access, a phenomenon we denote as cache thrashing. To overcome these problems, the temporal and spatial structure of accesses has to be changed.
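Conflict misses of this kind can be made concrete with a small simulation. The sketch below models a direct-mapped cache with hypothetical parameters (the line size, cache size, and array layout are our own illustrative choices, not taken from the text) and shows how two arrays whose base addresses map to the same cache lines thrash, while shifting one array removes the conflict:

```python
# Minimal direct-mapped cache simulator (hypothetical parameters) to
# illustrate cache thrashing: two arrays whose addresses map to the
# same cache lines evict each other on every access.

LINE_SIZE = 8      # words per cache line (assumed)
NUM_LINES = 64     # lines in a direct-mapped cache (assumed)

def simulate(addresses):
    """Return the number of misses for a sequence of word addresses."""
    tags = [None] * NUM_LINES
    misses = 0
    for addr in addresses:
        line = (addr // LINE_SIZE) % NUM_LINES
        tag = addr // (LINE_SIZE * NUM_LINES)
        if tags[line] != tag:
            misses += 1
            tags[line] = tag
    return misses

N = 256
# b starts exactly one cache size after a: identical cache indices.
a_base, b_base = 0, LINE_SIZE * NUM_LINES
trace = [addr for i in range(N) for addr in (a_base + i, b_base + i)]
print(simulate(trace))    # thrashing: every access misses -> 512

# Shifting b by half the cache separates the index ranges.
b_shifted = b_base + LINE_SIZE * (NUM_LINES // 2)
trace2 = [addr for i in range(N) for addr in (a_base + i, b_shifted + i)]
print(simulate(trace2))   # only compulsory misses -> 64
```

The second trace touches the same data in the same order; only the placement of `b` in memory changed, which is exactly the lever the data placement techniques below exploit.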
Compile-time cache optimizations exploit regular access patterns. Loop transformations such as tiling are well established. For caches with limited associativity, it is often crucial to additionally adjust the placement of data in memory.
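For readers unfamiliar with tiling, a minimal sketch (our own illustration, with an assumed helper name `tiled_sum`): the loop nest is restructured into B x B blocks so that each block is reused from cache before the traversal moves on, without changing the computed result.

```python
# Minimal loop-tiling sketch (illustrative): traverse an n x n matrix
# in B x B tiles so each tile stays cache-resident while it is used.

def tiled_sum(a, B):
    """Sum an n x n matrix tile by tile; numerically identical to a
    plain row-major sum, but memory is touched block by block."""
    n = len(a)
    total = 0
    for ii in range(0, n, B):          # tile row
        for jj in range(0, n, B):      # tile column
            for i in range(ii, min(ii + B, n)):
                for j in range(jj, min(jj + B, n)):
                    total += a[i][j]
    return total

a = [[i * 4 + j for j in range(4)] for i in range(4)]
print(tiled_sum(a, 2))   # 120, same as sum(range(16))
```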
We show that compiler techniques for register allocation, namely graph coloring, support a systematic data placement. For innermost loops, conflicts and temporal reuse can be modeled together in a cyclic interval graph. If reuse stretches over several loop iterations, live ranges may overlap themselves, prohibiting a direct coloring. By applying the meeting graph method, the compiler can determine an unrolling factor and the maximal number of colors, i.e., cache lines, required.
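The coloring step can be sketched as follows. This is a simplified illustration under our own assumptions (the input format and function name are hypothetical): we assume the meeting graph phase has already chosen the unrolling factor, so the live ranges in the unrolled body are ordinary intervals, which a greedy scan colors optimally, as in linear-scan register allocation. Each color then corresponds to one cache line.

```python
import heapq

# Greedy interval-graph coloring sketch (hypothetical input format):
# after unrolling, live ranges are plain (start, end) intervals and
# colors -- i.e., cache lines -- can be assigned by a single sweep.

def color_intervals(intervals):
    """intervals: list of half-open (start, end) ranges.
    Returns (number of colors used, color per interval)."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    colors = [None] * len(intervals)
    active = []              # heap of (end, color) for live intervals
    free, next_color = [], 0
    for i in order:
        start, end = intervals[i]
        while active and active[0][0] <= start:
            free.append(heapq.heappop(active)[1])  # color is reusable
        c = free.pop() if free else next_color
        next_color += 0 if colors[i] is not None else (0 if free else 0)
        if c == next_color:
            next_color += 1
        colors[i] = c
        heapq.heappush(active, (end, c))
    return next_color, colors

# Three values whose reuse distances force two simultaneously live lines:
n, cols = color_intervals([(0, 4), (2, 6), (4, 8)])
print(n)      # chromatic number equals the maximum overlap: 2
print(cols)   # first and third value share a color: [0, 1, 0]
```

For interval graphs this greedy sweep uses exactly as many colors as the maximum number of simultaneously live values, which is why the unrolling chosen by the meeting graph method directly bounds the number of cache lines needed.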
Values of the same color are mapped to memory together. Since these values may stem from different data structures, our technique offers a natural way of dealing with conflicts between different arrays. We implement this scheme by modifying the standard memory mapping. At run time, the new compile-time mapping function is used as an index function.
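One way such a modified mapping could look is sketched below (the interface and helper name `build_index_function` are our own assumptions, not the paper's implementation): the coloring result is turned into a permutation that places all values of one color into one line-sized region, and that permutation is the index function consulted at run time.

```python
# Sketch of a modified memory mapping (assumed interface): values that
# received the same color are laid out in the same cache line; the
# permutation computed at compile time serves as the run-time index.

def build_index_function(colors, line_size):
    """Return new_index so that all values of one color occupy one
    contiguous line_size-sized region of memory."""
    by_color = {}
    for v, c in enumerate(colors):
        by_color.setdefault(c, []).append(v)
    new_index = [0] * len(colors)
    for c in sorted(by_color):
        for slot, v in enumerate(by_color[c]):
            assert slot < line_size, "too many values per color"
            new_index[v] = c * line_size + slot
    return new_index

# Colors from the coloring phase (hypothetical): values 0 and 2 share
# a line, values 1 and 3 share another.
idx = build_index_function([0, 1, 0, 1], line_size=2)
print(idx)   # [0, 2, 1, 3]
# A reference to value v then becomes data[idx[v]] instead of data[v].
```

Because the permutation is fixed at compile time, the only run-time cost is the indirection (or, for affine mappings, a cheap index computation) on each access.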
The tradeoff between the additional cost of a more complex index function and the reduced miss penalty is reflected by a cost function.
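The shape of such a cost function can be illustrated as follows. This is our own hedged formulation, not the paper's model; the cycle counts are placeholder assumptions:

```python
# Illustrative cost comparison (our own formulation, assumed constants):
# apply the remapping only when the saved miss penalty outweighs the
# extra index-computation cost.

def remap_pays_off(accesses, misses_saved, index_cost=2, miss_penalty=100):
    """index_cost: extra cycles per access for the new index function
    (assumed); miss_penalty: cycles per cache miss (assumed)."""
    return misses_saved * miss_penalty > accesses * index_cost

print(remap_pays_off(accesses=10_000, misses_saved=500))   # True
print(remap_pays_off(accesses=10_000, misses_saved=100))   # False
```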
For typical example codes from the above areas, reuse and conflict behavior are considerably improved, yielding moderate run time reductions. In addition, on a coarser level, paging activity is often significantly postponed.