I tend to break a lot of keyboards. Not because I release all the aggression that I hold deep within me on them, but because I drool a lot. It's not my fault, I really do love graphical bling. Since I'm one of those people who flourish not when they're doing well, but when others are doing just as badly, I've thought about getting other people on the "excessive drooling" bandwagon.
I've tried it in the past. First with my "you ain't cool, unless you drool" campaign, which was not as successful as I had seen it be in my head. It made me realize that marketing is likely not one of my superpowers. That was a real blow, especially since it came a day after I'd established that there's like a 95% chance that I can't fly, and if I can, then my neighbor will be seriously pissed if I keep landing on his car. You'd think they'd build them stronger, but I digress. After that I went with my strengths and made two technical efforts. The first one led to Exa, the second to Glucose. Both are acceleration architectures that try to accelerate Xrender - the API which we use for 2D on X. What Xrender is very good at is text. What Xrender is not so good at is everything else.
Furthermore, what one really wants to do nowadays is use the results of 2D rendering in a 3D environment as a texture, or simply implement effects on top of the 2D rendering with shaders. Wouldn't it be nice to have a standard and simple API for all of that? You bet your ass it would. In this particular case "you bet your head" would be a more suitable expression, since by the simple act of reading this blog it's clear you already gave up on your ass and staked your future on your head. I endorse that (both the head-more-important-than-ass theory and the better-API idea). Currently, through the magic of DRM, TTM and GLX_texture_from_pixmap one can partially achieve that (we'd need GLX_pixmap_from_texture to finish it), but if you've seen Japanese horror movies you know they've got nothing on the code one ends up with when doing that.
I already mentioned in my previous post that we can lay any number of APIs on top of Gallium3D. In fact, in the last diagram I already put the two graphics APIs that interest me on top of it: OpenVG and OpenGL. In my spare time I've started implementing OpenVG on top of Gallium3D. I'm implementing 1.1, which hasn't been officially released yet. While OpenVG 1.0 is essentially useless for our drool-causing desktops because it doesn't even touch the subject of text handling, 1.1 does, and that in itself makes it a great low-level 2D vector graphics API.
We already have OpenVG engines for Qt and Cairo, which should make the switch fairly painless. "Hey", you say and I swiftly ignore you, because I have a name, you know. "Sunshine", you correct yourself and I smile and say "Huh?". "I want my KDE 4 fast and smooth! Gallium3D has this 3D thing in the name and I have hardware that only does 2D, what about me?". Nothing. You need to use other great solutions. "But my hardware can accelerate blits and lines!". Awesome, then this will likely rock your world. As long as you don't try to run any new applications, of course. Even embedded GPUs are now programmable, and putting the future of our eye-candy on a technology that predates two-year-old embedded GPUs is an exercise in silly which my chiseled pecs refuse to engage in.
The OpenVG standard says "It is possible to provide OpenVG on a platform without supporting EGL. In this case, the host operating system must provide some alternative means of creating a context and binding it to a drawing surface and a rendering thread.", which is exactly what we want. That's because we already have that layer: it's GLX. GLX will do the context creation for us. This also means that we'll be able to seamlessly combine 2D vector graphics and 3D, and manipulate the results of vector rendering the same way we would normal 3D.
Finally, 2D graphics will be accelerated the same way 3D is, and those hours you've spent playing 3D games thinking "how the hell is it possible that putting a clock on my desktop makes it choppy when this runs at 400fps?" will be just a story you'll get to tell your grandkids (while they stare at you with the "please god, let me be adopted" look). As a bonus we get two extremely well documented APIs (OpenGL and OpenVG) as our foundation, and instead of having two drivers to accelerate 2D and 3D we'll have a single driver.
So what happens to Glucose? Alan and José are still working on it a bit, and in the short term it does provide a pretty enticing solution, but long term the OpenVG/OpenGL combo is the only thing that really makes sense.
With much love,