I've spent the last few weeks in London with Keith Whitwell and José Fonseca, the greatest computer graphics experts since the guy who invented the crystal ball in the Lord of the Rings, the one that allowed you to see your future in full immersive 3D. Which begs the question: do you think that there was an intermediate step to those? I mean a crystal ball that showed you your future but 2D, all pixelated and in monochrome. Like Pong. You got 8x8 pixels and then the wizard/witch really needed to get their shit together to figure out what's going on, e.g.
- "This seems to be the quintessential good news/bad news kind of a scenario: good news - you'll be having sex! Bad news - with a hippo.",
- "There's no hippos in Canada...",
- "Right, right, I see how that could be a problem. How about elephants with no trunks?",
- "None of those",
- "Walls maybe?",
- "Yea, got those",
- "Excellent! Well then you'll probably walk into a wall.".
Just once I'd like to see a movie where they're figuring this stuff out. If you'll shoot it, I'll write it!
Getting back on track: we've got a lot of stuff done. I've worked mainly on the Gallium interface trying to update it a bit for modern hardware and APIs.
First of all, I've finally fixed geometry shaders in our software driver. The interface for geometry shaders is quite simple, but the implementation is a bit involved and I've never done it properly. Until now, that is. In particular, emitting multiple primitives from a single geometry shader never worked properly. Adjacency primitives never worked. Drawing an indexed vertex buffer with a geometry shader never worked properly, and texture sampling in GS was simply never implemented. All of that is fixed and looks very solid.
Second of all, stream out, aka transform feedback, is in, both in terms of interface additions and the software implementation. Being able to process a set of vertices and stream them out into a buffer is a very useful feature. I still enjoy vector graphics, and there processing a set of data points is a very common operation.
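To give a feel for it: from the shader's point of view nothing changes; a stream-out setup is an ordinary vertex shader plus new interface state that says which outputs get captured into which buffer. Roughly like this plain pass-through (the capture state itself lives on the Gallium side, not in the TGSI):

```
VERT
DCL IN[0]
DCL IN[1]
DCL OUT[0], POSITION
DCL OUT[1], GENERIC[0]
MOV OUT[0], IN[0]
MOV OUT[1], IN[1]
END
```

With stream out bound, OUT[0] and OUT[1] end up written to the target buffer instead of (or in addition to) continuing down the rest of the pipeline.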
I've also added the concept of resources to our shader representation, TGSI. We needed a way of arbitrarily binding resources to samplers and defining load/sample formats for resources in shaders. The feature comes with a number of new instructions. The most significant in terms of functionality are the load and gather4 instructions, because they represent the first GPGPU-centric features we've added to Gallium (obviously both are very useful in graphics algos as well).
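A sketch of what a shader using resource declarations plus an unfiltered load might look like (take the exact mnemonics, declaration fields, and operand order here as illustrative rather than gospel):

```
FRAG
DCL RES[0], 2D, FLOAT
DCL IN[0], GENERIC[0], PERSPECTIVE
DCL OUT[0], COLOR
DCL TEMP[0]
LOAD TEMP[0], RES[0], IN[0]
MOV OUT[0], TEMP[0]
END
```

The point is that the shader itself now declares what it loads from and in what format, rather than that being implied entirely by whatever sampler state happens to be bound.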
Staying in TGSI land, we have two new register files: immediate constant arrays and temporary arrays. Up till now we really had no way of declaring a set of temporaries or immediates as an array. We would imitate that functionality by declaring ranges, e.g. "DCL TEMP[0..20]", which becomes very nasty when a shader is using multiple indexable arrays; it also makes bounds checking almost impossible.
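With the new register files, each array is declared on its own, so an out-of-range index can't silently walk from one array into the next and the bounds are right there in the declaration. Something along these lines (the syntax is approximate, just to show the shape of it):

```
DCL TEMPARR[0][0..9]
DCL TEMPARR[1][0..4]
```

versus the old trick of carving both arrays out of a single "DCL TEMP[0..20]" range by convention.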
A while back Keith wrote Graw, which is a convenience state tracker that exposes the raw Gallium interface. It's a wonderful framework for testing Gallium. We have added a few tests for all the above-mentioned features.
All of those things are more developer features than anything else, though. They all need to be implemented in state trackers (e.g. GL) and drivers before they really go public, and that's the next step.