Code review comment for lp:~jtv/storm/profile-fetches

Jeroen T. Vermeulen (jtv) wrote :

I'd be more careful in assuming similarity with a JIT VM. Differences in relative overhead aside, JIT compilers don't specialize method calls much. There is, in fact, a JVM that combines JIT with inlining of all code, and from what I hear it yields fantastic results. A lot of the startup overhead would be in the inlining, which doesn't come up per se in fetch profiling.

As an example of specialization: if Launchpad used automatic optimization based on this profiler, a given query method somewhere deep down the call stack would get optimized separately depending on whether it was invoked from the web service API or from the web UI. The two calls have very different needs. We don't have any decent solution for that at the moment.
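To make that concrete, here is a minimal sketch of per-call-site specialization. Everything in it is hypothetical (the method names, the advice table, and the idea of keying on the immediate caller); a real system would use a fuller call-stack signature, but the point is that one query method gets a different eager-fetch set per entry point:

```python
import inspect

# Hypothetical advice table: maps (query method, calling entry point)
# to the set of attributes worth eager-fetching for that combination.
ADVICE = {
    ("query_branches", "api_view"): {"Branch.owner"},
    ("query_branches", "web_view"):
        {"Branch.owner", "Branch.product", "Branch.last_scanned"},
}

def query_branches():
    # Specialize on the immediate caller's function name, so the same
    # method deep in the stack is optimized differently per call site.
    caller = inspect.stack()[1].function
    return ADVICE.get(("query_branches", caller), set())

def api_view():
    # Web service API entry point: needs only a small fetch set.
    return query_branches()

def web_view():
    # Web UI entry point: renders more data, so prefetches more.
    return query_branches()
```

The same `query_branches` call yields different prefetch hints for the two callers, which is exactly the kind of distinction a purely static scheme cannot express.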

If you're willing to be so aggressive in fetching objects as to do it "statically," then you might as well use automatic optimization with a warmup time of 1: generate optimization advice after a first pass through a stretch of code, then repeat periodically to cover any objects that may also be needed but weren't referenced in that first run. To amortize startup cost over more requests, pickle the optimization choices and presto: profile-driven optimizations get reused across restarts.
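A rough sketch of that warmup-of-1 scheme, with the pickling step included. All names here are illustrative, not part of Storm's API; the advice file location is an assumption:

```python
import pickle
from collections import defaultdict
from pathlib import Path

ADVICE_FILE = Path("fetch-advice.pickle")  # hypothetical location

class FetchProfiler:
    """Record which attributes each code location actually touches
    during a first pass, then turn that into prefetch advice."""

    def __init__(self):
        # This run's observations: code location -> attributes loaded.
        self.observed = defaultdict(set)
        # Advice accumulated in previous runs, if any.
        self.advice = self._load()

    def record(self, location, attribute):
        # Called by the ORM's fetch hook whenever an attribute is loaded.
        self.observed[location].add(attribute)

    def advise(self, location):
        # After the first pass, advise prefetching everything seen so far.
        return self.advice.get(location, set())

    def checkpoint(self):
        # Fold this run's observations into the advice, covering objects
        # missed in earlier runs, and persist it across restarts.
        for location, attrs in self.observed.items():
            self.advice.setdefault(location, set()).update(attrs)
        ADVICE_FILE.write_bytes(pickle.dumps(self.advice))

    def _load(self):
        if ADVICE_FILE.exists():
            return pickle.loads(ADVICE_FILE.read_bytes())
        return {}
```

The first request through a stretch of code runs unoptimized while `record` observes it; every later request (and every later process, thanks to the pickle) gets the accumulated advice from `advise`.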

Separately from that, the term "efficient" is treacherous. A static approach is almost guaranteed to be more efficient in terms of computing power, yes, but less efficient when it comes to the human factors: flexibility, legibility, conceptual cleanliness. (Isn't that why we're using Python in the first place?) A dynamic approach, on the other hand, can narrow the gap in computational efficiency without sacrificing any of those other efficiencies. It's also easier to deploy and fine-tune such optimizations across the entire application.

