Consider a simple search problem with multiple candidate paths. The main processing path can be computed in series, as required functionality is identified, and completes whenever an adequate solution is found. Alternatively, potential paths can be searched ahead of time, in batches or all simultaneously**. This shortens the wait for required functionality at the cost of additional processing, some of which may never be needed. We trade computing power for statistically lower execution time.
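The parallel flavor of that search can be sketched in Python with a thread pool: explore every candidate path at once, keep the first adequate answer, and abandon the rest. Everything here (`search_path`, the divisible-by-7 "solution" test) is illustrative, not from any real library.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def search_path(path_id):
    # Stand-in for exploring one candidate path; returns a solution
    # or None. "Adequate solution" here is just divisibility by 7.
    return path_id if path_id % 7 == 0 else None

def speculative_search(paths):
    # Launch all candidate paths simultaneously; accept the first
    # adequate solution and cancel any still-pending work.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(search_path, p) for p in paths]
        for fut in as_completed(futures):
            result = fut.result()
            if result is not None:
                for f in futures:
                    f.cancel()  # discard speculative work we no longer need
                return result
    return None

result = speculative_search(range(1, 20))
```

Note the trade-off in miniature: every path consumes a worker even though only one result is kept.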
Speculative execution is a time-saving technique used in code execution. The method extrapolates what may be required, executes those branches, and generates intermediate results which are fed into the required computation paths as needed. Resources are risked on branches that may never be needed, so judicious use is required to maintain efficiency (power concerns, etc.). Let's hit the ground running with an example of how speculative execution is used in practice.
Modern pipelined microprocessors use speculative execution to reduce the cost of conditional branch instructions. When a conditional branch instruction is encountered, the processor guesses which way the branch is most likely to go (this is called branch prediction) and immediately starts executing instructions from that point. If the guess later proves to be incorrect, all computation past the branch point is discarded and the correct path is executed instead.
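The predict-then-squash cycle can be mimicked in plain Python. This is a toy sketch of the idea, not how hardware actually works; `speculate` and its parameters are hypothetical names for illustration.

```python
def speculate(predict_taken, taken_work, not_taken_work, resolve_condition):
    # Guess the branch direction and run the predicted work immediately,
    # before the branch condition has actually been resolved.
    speculative_result = taken_work() if predict_taken else not_taken_work()
    actually_taken = resolve_condition()  # the (slow) condition resolves later
    if actually_taken == predict_taken:
        return speculative_result         # prediction correct: keep the work
    # Misprediction: discard everything past the branch and redo it.
    return taken_work() if actually_taken else not_taken_work()

# Predict the branch is taken; the condition later confirms it.
hit = speculate(True, lambda: 2 + 2, lambda: 0, lambda: True)
```

On a correct guess the speculative result is free; on a mispredict the work is thrown away, which is exactly the risk the previous paragraph describes.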
Conditions for Compiler Optimization
As an eternal sophomore of software^, I've encountered quite a few technical presentations on compiler optimization which leave me baffled. I've become familiar with many design motivations, but docs regularly reference vocabulary I don't understand, which leads to never-ending web reading sessions. My confusion began with references to functional programming and lazy evaluation. Since first encountering these terms, I've become more comfortable with their meaning, transcending the spooky animal phase. Lazy evaluation is simply delaying computation until a result is required. Pure functional programs eliminate all intermediary variables by limiting the syntax to describing operations. In practice, functions which don't have side effects are daisy-chained together to produce values*.
Functional programming enables compilers to optimize calculations using techniques like deforestation. Having context for these terms is necessary to achieve even an introductory comprehension of compiler design.
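Deforestation means eliminating the intermediate data structures (the "forest" of temporary trees and lists) that composed functional operations would otherwise build. A compiler does this automatically; as a hand-written sketch in Python, the fused version below does in one pass what the unfused version does with two temporary lists.

```python
def unfused(xs):
    # Each step materializes an intermediate list: the "forest".
    doubled = [x * 2 for x in xs]
    kept = [x for x in doubled if x > 4]
    return sum(kept)

def fused(xs):
    # The same pipeline with the intermediates deforested away:
    # a single pass, no temporary lists.
    return sum(x * 2 for x in xs if x * 2 > 4)

assert unfused([1, 2, 3, 4]) == fused([1, 2, 3, 4]) == 14
```

Because pure functions have no side effects, the compiler can prove the fused form behaves identically to the original chain, which is why this optimization belongs to the functional world.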
[image: "no, not this kind of deforestation"]
Another regularly used term is JIT, or just-in-time compilation. This architecture blends interpretation with runtime compilation for maximum efficiency: code is interpreted until it proves worth compiling, then compiled on the fly. I touched briefly on just-in-time compiling when predicting changes to programming in the coming years.
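The core JIT idea — interpret until a function is "hot", then switch to a compiled version — can be caricatured in a few lines. `TinyJIT`, its threshold, and the tagged return values are all invented for this sketch; real JITs compile to machine code, not another Python callable.

```python
class TinyJIT:
    # Toy sketch of JIT dispatch: interpret a function until it becomes
    # "hot", then swap in a faster compiled version.
    HOT_THRESHOLD = 3

    def __init__(self, interpreted, compiled):
        self.interpreted = interpreted  # slow, but always available
        self.compiled = compiled        # stands in for generated machine code
        self.calls = 0

    def __call__(self, *args):
        self.calls += 1
        if self.calls > self.HOT_THRESHOLD:
            return self.compiled(*args)   # hot path: use "compiled" code
        return self.interpreted(*args)    # cold path: interpret

square = TinyJIT(lambda n: ("interp", n * n), lambda n: ("compiled", n * n))
```

The tags in the return value only exist so you can watch the crossover happen: the first three calls go through the interpreter, the fourth through the compiled path.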
One large project I can't help but cross paths with while researching compiler optimization is LLVM, the Low Level Virtual Machine. Here's a great introduction by Chris Lattner, the project's primary author. It's an ambitious project to provide compile-time, link-time, run-time, and idle-time optimization of programs written in arbitrary programming languages.
To see LLVM in action, I've included a presentation by Evan Phoenix on leveraging LLVM to execute Ruby (Rubinius):
I'll leave off with a more recent post by Evan on how Ruby is designed to run fast with Rubinius and take advantage of the LLVM.
** = Are quantum computers a flavor of speculative execution? The search analogy could use some work: rereading it, I'm comparing series versus parallel execution, not quite required versus potentially required processing.
* = The web definition supplement to my layman's understanding:
- A method of writing programs that describes only the operations to be performed on the inputs to the programs, without the use of temporary variables to store intermediate results.
- In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data.
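Those two definitions contrast nicely in code. A speculative side-by-side in Python (both functions are hypothetical names for this sketch): the imperative version stores intermediate results in temporary variables and mutates them, while the functional version only describes operations on the input.

```python
# Imperative: intermediate results live in temporary, mutated variables.
def total_imperative(xs):
    squares = []
    for x in xs:
        squares.append(x * x)
    total = 0
    for s in squares:
        total += s
    return total

# Functional: describe only the operations; no temporaries, no mutation.
def total_functional(xs):
    return sum(map(lambda x: x * x, xs))

assert total_imperative([1, 2, 3]) == total_functional([1, 2, 3]) == 14
```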