SSA in Compilers
This cluster focuses on discussions about Static Single Assignment (SSA) form in compiler intermediate representations (IR), particularly LLVM, including its use for optimizations, transformations, and comparisons to existing compiler techniques.
Sample Comments
Just a guess: it's probably static single assignment for the IR. https://www.cs.cmu.edu/~fp/courses/15411-f08/lectures/09-ssa...
We're all already doing this as the compiler turns everything into SSA form, silly goose.
It's painful for the compiler too: it has to turn the parent's imperative code into SSA (essentially A-normal form) so it can optimise it!
Not really; simplifying simple operations (like multiplying by a constant) is pretty much Compilers 101 / Dragon Book kind of stuff.
Are there production compilers that don't use SSA nowadays?
Doesn't the LLVM backend make tons of similar optimizations under the hood?
You'd need to reimplement LLVM code generation and all. These optimizations only work in that part of the compilation process; do them on your own IR and, after lowering to LLVM, you'll still want to do them again.
Is someone working on similar ideas for compiler back-ends?
You should look at supercompilation.
Sure, these days I'm mostly working on a few compilers. Let's say I want to make a fixed-size SSA IR. Each instruction has an opcode and two operands (which are essentially pointers to other instructions). The IR is populated in one phase, and then lowered in the next. During lowering I run a few peephole and code motion optimizations on the IR, and then do regalloc + asm codegen. During that pass the IR is mutated and indices are invalidated/updated. The important thing is that t