According to the article, Pyramid pushes a lot of work down into C code. But then they measure relative performance by how many lines of output the Python profiler produces. However, anything done in C is invisible to the Python profiler. That means time is being spent and work is being done that they are not measuring.
The result is probably pretty fast. But I strongly doubt it is faster by the factor that their numbers claim.
Of course that speeds up the code. But it reduces the Python profiler output by more than it speeds up the code. Therefore measuring the speedup by looking at the Python profiler is inappropriate.
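To make the point concrete, here's a small sketch of the effect using only the standard library. The function names (`py_sum`, `add`, `profile_lines`) are mine, for illustration: a pure-Python summation that routes every addition through a helper produces a profiler entry for each function involved, while the C-implemented builtin `sum()` does the same work but collapses into a single built-in entry, so its profile is shorter even though time is still being spent inside C.

```python
import cProfile
import io
import pstats

def add(a, b):
    # Helper so the pure-Python version generates many profiled calls.
    return a + b

def py_sum(seq):
    # Pure-Python summation: py_sum and every call to add()
    # each get their own line in the profiler output.
    total = 0
    for x in seq:
        total = add(total, x)
    return total

def profile_lines(fn, *args):
    """Run fn under cProfile and return the number of lines of stats output."""
    pr = cProfile.Profile()
    pr.enable()
    fn(*args)
    pr.disable()
    buf = io.StringIO()
    pstats.Stats(pr, stream=buf).print_stats()
    return len(buf.getvalue().splitlines())

data = list(range(100_000))

py_lines = profile_lines(py_sum, data)  # entries for py_sum, add, ...
c_lines = profile_lines(sum, data)      # one entry for the C builtin

# Same result, but the C version shows up as fewer profiler lines:
assert py_sum(data) == sum(data)
assert c_lines < py_lines
```

Fewer profiler lines here doesn't mean less work was done; it means the work moved somewhere cProfile can't see into. That's exactly why counting profiler lines across frameworks with different amounts of C code is not a fair comparison.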
Since you believe that using C to reduce function call overhead is cheating, here's Pyramid 1.0a9 profiling output with the "zope.interface" C optimizations disabled (same methodology and code used to test the C-optimized variant):
That's 27 lines of profiler output, which, if you're keeping score, is still fewer lines than any other framework tested. We're not really hiding anything here.
The comparisons to other frameworks are largely a gimmick to drive traffic and comments, granted. In the real world, these numbers shouldn't make the difference about whether you choose one framework or another.
However, there are actual speedups to be had by other Python web frameworks if they'd consider using some of the techniques outlined in the blog post. In the real world, actual benchmarks seem to have Pyramid bottlenecked by the WSGI server (see for example http://blog.curiasolutions.com/2010/11/the-great-web-technol...), so the optimizations we're doing are working, AFAICT.