In general I'm extremely good at ignoring emails and blog posts. Next to head-butting, it's one of the primary skills I've developed while working on Free Software. Today I'll respond to a few recent posts (all at once; I'm a mass-market responder) about accelerating graphics.
Some kernel developers released a statement saying that binary blobs are simply not a good idea. I don't think anyone can argue with that. But this statement prompted a discussion about graphics acceleration, or more specifically about a certain vendor who is, allegedly, doing a terrible job at it.
First of all, the whole discussion is based on a fallacy, rendering even the most elaborate conclusions void. It's assumed that in our graphics stack there's a straightforward path from accelerating an API to fast graphics. That's simply not the case.
I don't think it's a secret that I'm not a fan of XRender. Actually, "not a fan" is an understatement: I flat out don't like it. You'd think that the fact that, 8 years after its introduction, we still don't have a single driver that is actually good at accelerating that "simple API" would be a sign of something... anything. When we were making Qt use more of the XRender API, the only way we could do it was to have Lars and me go and rewrite the parts of XRender we were using. So instead of depending on XRender being reasonably fast, we rewrote the parts we really needed (which is realistically just the SourceOver blending) and did everything else client-side (meaning not using XRender).
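To give an idea of how small that one operation is, here's a minimal sketch of Porter-Duff SourceOver blending on premultiplied-alpha RGBA values in [0, 1]. The function name and scalar representation are mine for illustration; this is not Qt's actual implementation, which works on packed pixel buffers:

```python
def source_over(src, dst):
    """Porter-Duff SourceOver: result = src + dst * (1 - src_alpha).

    src and dst are premultiplied-alpha (r, g, b, a) tuples in [0, 1].
    """
    src_alpha = src[3]
    return tuple(s + d * (1.0 - src_alpha) for s, d in zip(src, dst))

# Half-opaque red over opaque blue:
source_over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0))
# -> (0.5, 0.0, 0.5, 1.0)
```

With premultiplied alpha, the blend is a single multiply-add per channel, which is why a tight client-side loop over this formula can beat a round-trip through an unaccelerated server path.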
Now, back to benchmarking XRender. Some people pointed out an application I wrote a while back to benchmark XRender: please do not use it to test the performance of anything. It does not correspond to any real workload. (Also, if you're taking something I wrote to prove some arbitrary point, it'd likely be a good idea to ping me and ask about it. You know, on account of having written it, I just might have some insight into it.) The thing about XRender is that there's a large number of permutations for every operation. Each graphics framework that uses XRender uses specific, well-defined paths. For example, Qt doesn't use server-side transformations (they were just pathetically slow, and we didn't feel it would be in the best interest of our users to make Qt a lot slower); Cairo does. Accelerating server-side transformations would make Cairo a lot faster, and would have absolutely no effect on Qt. So whether those tests pass in 20 ms or 20 hours has zero effect on Qt performance.
What I wanted to do with the XRender benchmarking application is basically have a list of operations that need to be implemented in a driver to make Qt, Cairo, or anything else using XRender fast. A "to make KDE fast, look at the following results" type of thing. So the bottom line is that if one driver scores, for example, 20 ms for Source and SourceOver and 26 hours for everything else, and a second driver scores 100 ms for all operations, it doesn't mean that on average driver two is a lot better for running KDE; in fact, it likely means that running KDE will be five times faster on driver one.
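The arithmetic can be made concrete with a small sketch using the hypothetical numbers above. The workload mix is an assumption of mine: a KDE-like application spending essentially all of its rendering time in Source and SourceOver, and none in the exotic paths:

```python
# Hypothetical per-operation times in ms, taken from the example in the text.
HOURS_MS = 3600 * 1000
driver_one = {"Source": 20, "SourceOver": 20, "Transform": 26 * HOURS_MS}
driver_two = {"Source": 100, "SourceOver": 100, "Transform": 100}

# Assumed workload mix: a KDE-like app hits only Source/SourceOver.
kde_workload = {"Source": 0.5, "SourceOver": 0.5, "Transform": 0.0}

def weighted_time(times, workload):
    """Average operation time, weighted by how often the workload hits each path."""
    return sum(times[op] * share for op, share in workload.items())

weighted_time(driver_one, kde_workload)  # 20.0 ms
weighted_time(driver_two, kde_workload)  # 100.0 ms -- five times slower for this workload
```

The point is simply that an unweighted average over all operations rewards driver two, while any average weighted by what real applications actually call rewards driver one.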
Closed-source drivers are a terrible thing, and there are a lot of reasons why vendors would profit immensely from having open drivers (which is possibly a topic for another post). Unfortunately, I just don't think that blaming driver writers for not accelerating a graphics stack which we went out of our way to make as difficult to accelerate as possible is a good way of bringing that point forward.