Wider superscalar processors demand that multiple instructions be fetched in a single cycle to achieve higher performance. Instructions to be fetched are not always in contiguous memory locations, because of branch and jump instructions, so processors need additional logic and hardware support to fetch and align instructions from non-contiguous basic blocks. If multiple branches are predicted not-taken, the processor can fetch instructions from several contiguous basic blocks in a single cycle. However, if any of the branches is predicted taken, the processor must fetch instructions from the taken path in that same cycle. This limits the fetch capability of a processor.

(Figure: a simple if-else loop divided into basic blocks A, B, C, and D.)

Consider the four basic blocks shown in the figure, which correspond to a simple if-else loop. These blocks are stored contiguously as ABCD in memory. If the branch at the end of block B is predicted not-taken, the fetch unit can fetch the basic blocks A, B, and C, which are placed contiguously. However, if that branch is predicted taken, the fetch unit has to fetch A, B, and D, which are placed non-contiguously, and fetching them in a single cycle is very difficult. In situations like these, a trace cache comes to the processor's aid. Once instructions are fetched, the trace cache stores them in their dynamic sequence. When these instructions are encountered again, the trace cache allows the instruction fetch unit to fetch several basic blocks from it without having to worry about branches in the execution flow. Instructions are stored in the trace cache either after they have been decoded or as they are retired; however, the instruction sequence is speculative if it is stored just after the decode stage.
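To make the idea concrete, here is a minimal Python sketch of the example above; the block names, the single-branch model, and the fetch function are illustrative assumptions, not a description of real hardware:

```python
# Minimal sketch of the example above; block names and the single-branch
# model are illustrative assumptions, not real hardware.

# Static placement in memory: the four blocks are contiguous.
memory_order = ["A", "B", "C", "D"]

def dynamic_sequence(branch_taken: bool) -> list[str]:
    """Blocks executed on one pass, given the branch at the end of B."""
    if branch_taken:
        return ["A", "B", "D"]   # non-contiguous: C is skipped
    return ["A", "B", "C"]       # contiguous: fetchable in one cycle

# The trace cache stores the dynamic sequence once it has been observed,
# keyed here by (starting block, branch outcome).
trace_cache: dict[tuple[str, bool], list[str]] = {}

def fetch(start: str, branch_taken: bool) -> list[str]:
    key = (start, branch_taken)
    if key in trace_cache:                  # hit: no branch-crossing logic
        return trace_cache[key]
    trace = dynamic_sequence(branch_taken)  # miss: build and install trace
    trace_cache[key] = trace
    return trace

print(fetch("A", True))   # builds and returns ['A', 'B', 'D']
print(fetch("A", True))   # served from the trace cache
```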
Trace structure
A trace, also called a dynamic instruction sequence, is an entry in the trace cache. It can be characterized by a maximum number of instructions and a maximum number of basic blocks. Traces can start at any dynamic instruction, and multiple traces can share the same starting instruction, i.e., the same starting program counter, while containing instructions from different basic blocks depending on the branch outcomes. For the figure above, ABC and ABD are valid traces: both start at the same PC but diverge according to the prediction of the branch at the end of B. Traces usually terminate when one of the following occurs:
The trace is filled with the maximum allowable number of instructions
The trace contains the maximum allowable number of basic blocks
Separate from these termination rules, several design choices shape a trace cache (a sketch of a trace fill unit follows this list):
Path associativity - traces with the same starting PC but different basic blocks can be mapped to different sets
Trace cache fill choices - a trace can be filled either after the decode stage or after the retire stage
The trace cache is not on the critical path of instruction fetch, so traces can be filled without delaying the fetch unit
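A hedged sketch of how a fill unit might apply the termination rules above; MAX_INSTRUCTIONS, MAX_BASIC_BLOCKS, and the Instruction fields are assumed illustrative values, not taken from any real design:

```python
from dataclasses import dataclass

# Assumed illustrative limits; real designs pick their own values.
MAX_INSTRUCTIONS = 16
MAX_BASIC_BLOCKS = 3

@dataclass
class Instruction:
    pc: int
    starts_block: bool  # first instruction of a basic block

def build_trace(dynamic_stream):
    """Accumulate dynamic instructions until a termination rule fires."""
    trace, blocks = [], 0
    for insn in dynamic_stream:
        if insn.starts_block:
            if blocks == MAX_BASIC_BLOCKS:      # basic-block limit reached
                break
            blocks += 1
        trace.append(insn)
        if len(trace) == MAX_INSTRUCTIONS:      # instruction limit reached
            break
    return trace
```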
Hit/miss logic
Trace lines are stored in the trace cache based on the PC of the first instruction in the trace and a set of branch predictions. This allows different trace paths that start at the same address to be stored, each representing a different set of branch outcomes; this method of tagging gives the trace cache path associativity. An alternative method is to tag each trace with the starting PC alone. In the instruction fetch stage of a pipeline, the current PC, together with a set of branch predictions, is checked in the trace cache for a hit. On a hit, a trace line is supplied to the fetch unit, which does not have to go to a regular cache or to memory for these instructions; the trace cache continues to feed the fetch unit until the trace line ends or until there is a misprediction in the pipeline. On a miss, a new trace starts to be built. The Pentium 4's execution trace cache stores micro-operations resulting from decoding x86 instructions, also providing the functionality of a micro-operation cache: the next time an instruction is needed, it does not have to be decoded into micro-ops again.
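As a rough illustration of this tagging scheme, the following Python sketch keys traces by (starting PC, prediction bits); trace_fill, trace_fetch, and the example addresses are hypothetical names, not a real interface:

```python
# Traces are tagged by (starting PC, predicted branch outcomes), so two
# paths sharing a start PC coexist. All names and addresses are made up.
trace_cache: dict[tuple[int, tuple[bool, ...]], list[int]] = {}

def trace_fill(pc: int, predictions: tuple[bool, ...], trace: list[int]):
    """Install a newly built trace under its (PC, predictions) tag."""
    trace_cache[(pc, predictions)] = trace

def trace_fetch(pc: int, predictions: tuple[bool, ...]):
    """Return a trace line on a hit; None means fall back to the
    instruction cache and start building a new trace."""
    return trace_cache.get((pc, predictions))

# Path associativity: two traces like ABC and ABD share a starting PC.
trace_fill(0x400, (False,), [0x400, 0x404, 0x408])  # not-taken path
trace_fill(0x400, (True,),  [0x400, 0x404, 0x40C])  # taken path
assert trace_fetch(0x400, (True,)) == [0x400, 0x404, 0x40C]  # hit
```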
Disadvantages
The disadvantages of a trace cache are:
Redundant instruction storage, both between the trace cache and the instruction cache and within the trace cache itself
Power inefficiency and hardware complexity
Execution trace cache
Intel incorporated its execution trace cache into the L1 cache of its NetBurst CPUs. It stores decoded micro-operations, so that when executing a new instruction, instead of fetching and decoding it again, the CPU directly accesses the decoded micro-ops from the trace cache, saving considerable time. Moreover, the micro-ops are cached along their predicted path of execution, which means that when the CPU fetches instructions from the cache, they are already present in the correct execution order. Intel later introduced a similar but simpler concept, the micro-operation cache, with Sandy Bridge.
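The decode-once idea can be sketched as follows; decode_to_uops and fetch_uops are hypothetical stand-ins for the hardware decoder and fetch path, not Intel's actual design:

```python
# Decode-once sketch: a hit in the micro-op cache skips the decoder.
# decode_to_uops is a stand-in for x86 decoding, not Intel's design.
uop_cache: dict[int, list[str]] = {}

def decode_to_uops(pc: int) -> list[str]:
    """Placeholder for the expensive x86 decode step."""
    return [f"uop_{pc:x}_{i}" for i in range(3)]

def fetch_uops(pc: int) -> list[str]:
    if pc in uop_cache:        # hit: reuse previously decoded micro-ops
        return uop_cache[pc]
    uops = decode_to_uops(pc)  # miss: decode once, then cache
    uop_cache[pc] = uops
    return uops
```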