With the current two-compiler model (the Ignition interpreter and the TurboFan optimizing compiler), it is not possible to reach optimized code much sooner. Optimization can be accelerated, but beyond a point the only way to do so is to remove optimization passes, which lowers peak performance.
Sparkplug is designed to compile quickly. According to Google, it is so fast that V8 can compile pretty much whenever it wants, allowing it to tier up to Sparkplug code far more aggressively than it can to TurboFan code.
One reason for this speed is that the functions Sparkplug compiles have already been compiled to bytecode, and the bytecode compiler has already done most of the hard work. Another is that, unlike most compilers, Sparkplug does not generate any intermediate representation (IR). Instead, it compiles directly to machine code in a single linear pass over the bytecode, emitting code that matches the execution of that bytecode.
Because Sparkplug generates no intermediate representation, its opportunities for optimization are limited, but this is not a problem: TurboFan, the optimizing compiler, remains in the pipeline for hot code.
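To make the idea concrete, here is a toy sketch, in Python, of what single-pass, no-IR baseline compilation looks like in principle. This is not V8's actual code; the bytecode names and assembly templates are invented for illustration. Each bytecode maps to a fixed machine-code template, and the compiler simply walks the bytecode once, expanding templates as it goes:

```python
# Toy illustration (NOT V8's real implementation) of single-pass baseline
# compilation: each bytecode maps to a fixed machine-code template, emitted
# in one linear walk with no intermediate representation and no analysis.

# Hypothetical bytecode -> assembly-template table.
TEMPLATES = {
    "LdaSmi": lambda arg: [f"mov rax, {arg}"],    # load small integer into accumulator
    "Star0":  lambda arg: ["mov [rbp-8], rax"],   # store accumulator to register r0
    "Ldar0":  lambda arg: ["mov rax, [rbp-8]"],   # load register r0 into accumulator
    "AddSmi": lambda arg: [f"add rax, {arg}"],    # add small integer to accumulator
    "Return": lambda arg: ["ret"],                # return the accumulator
}

def baseline_compile(bytecode):
    """One linear pass over the bytecode: no IR, no optimization,
    just template expansion per bytecode."""
    machine_code = []
    for op, arg in bytecode:
        machine_code.extend(TEMPLATES[op](arg))
    return machine_code

# A function like `return 5 + 3`, already lowered to (hypothetical) bytecode:
program = [("LdaSmi", 5), ("Star0", None), ("Ldar0", None),
           ("AddSmi", 3), ("Return", None)]
print("\n".join(baseline_compile(program)))
```

The point of the sketch is the shape of the loop: because the output is produced position by position, directly from bytecode the earlier compiler already validated, there is nothing to analyze and nothing to optimize, which is exactly what makes this kind of compiler fast.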
Google describes how Sparkplug works in this very interesting technical note.