Friday, November 02, 2007

Gallium3D LLVM

I've seen the future. The blurry outlines sketched by such brilliant audio-visual feasts as Terminator have come to fruition: in the future, the world is ruled by self-aware software.
That's the bad news. The good news is that we haven't noticed.
We're too busy because we're all playing the visually stunning "The Good Guy Kills the Bad Guys 2" on Playstation 22. It really captured the essence of the first. The theaters are ruled by the "Animals - Did they exist?" documentary, with some stunning CG of a horse, although I could have sworn that horses had 4 legs, not 3, but then again I'm no nature expert.
What everyone was wrong about, though, is which program became self-aware and exerted its iron-fisted dominance upon the unsuspecting humans and the last few, very suspicious cockroaches. It wasn't a military mistake. It was a 3D framework that evolved.

But let's start from the beginning. First there was Mesa, the Open Source implementation of the OpenGL specification. Then there came Gallium3D, a new architecture for building 3D graphics drivers. Gallium3D modeled what modern graphics hardware was doing, which meant that the framework was fully programmable and was actually code-generating its own pipeline. Every operation in Gallium3D was a combination of a vertex and a fragment shader. Internally Gallium3D was using a language called TGSI - a graphics-dedicated intermediate representation.
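
To make that idea a little more concrete, here is a minimal sketch in C. None of these type or field names come from the real Gallium3D headers - they are hypothetical, and only meant to illustrate the notion that one pipeline operation boils down to a pair of TGSI programs:

    /* Hypothetical types, not the real Gallium3D interfaces. */
    struct tgsi_program {
        const unsigned *tokens;      /* opaque TGSI instruction stream */
        unsigned        num_tokens;
    };

    /* One logical operation (a clear, a blit, a textured triangle, ...)
     * is just "run this vertex shader, then this fragment shader". */
    struct pipeline_operation {
        struct tgsi_program vertex_shader;    /* runs once per vertex   */
        struct tgsi_program fragment_shader;  /* runs once per fragment */
    };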

Gallium3D was generating vertex and fragment shaders at run-time to describe what it was about to do. After that system was working, some engineers decided that it would make sense to teach Gallium3D to self-optimize the vertex/fragment programs that it, itself, was creating. LLVM was used for that purpose, because it was an incredible compiler framework with a wonderful community. The decision proved to be the right one, as Gallium3D and LLVM turned out to be a match made in heaven. It was pure love. I'm not talking about the "roll over onto your stomach, take a deep breath, relax and let's experiment" kind of love, just pure and beautiful love.
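
The compile path itself can be sketched roughly like this. The function names below are made up for illustration (they are not the real Gallium3D or LLVM APIs), but the shape of the flow is the one described above: TGSI in, optimized native code out.

    struct tgsi_program;                      /* from the sketch above     */
    typedef struct llvm_module llvm_module;   /* stands in for LLVM IR     */
    typedef void (*jit_shader)(void);         /* stands in for native code */

    /* All three of these are hypothetical helpers, not real API calls. */
    llvm_module *tgsi_to_llvm_ir(const struct tgsi_program *prog);
    void         run_llvm_optimizations(llvm_module *m);
    jit_shader   llvm_codegen(llvm_module *m);

    static jit_shader compile_shader(const struct tgsi_program *prog)
    {
        llvm_module *m = tgsi_to_llvm_ir(prog);  /* translate TGSI to LLVM IR  */
        run_llvm_optimizations(m);               /* let LLVM's passes clean up */
        return llvm_codegen(m);                  /* emit executable code       */
    }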

So let's take a simple example to see what was happening. Let's deal with triangles, because they're magical.
Now to produce this example Gallium3D was creating two small programs. One was run for every vertex in the triangle and calculated its position - it was really just multiplying the vertex by the current modelview matrix - that was the vertex shader. The other program was run on every fragment of this figure to produce the resulting pixels - that was the fragment shader. To execute these two programs, they were compiled into LLVM IR, LLVM optimization passes were run on them, and LLVM code generators were used to produce executable code. People working on Gallium3D quickly noticed that, even though their code wasn't optimized at all and was doing terribly expensive conversions all the time, it was up to 10x faster with LLVM on some demos. They knew it was good.
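
If you squint, those two programs amount to something like the plain C below. This is only illustrative pseudo-shader code (not actual TGSI or GLSL): the vertex shader multiplies each vertex by the current modelview matrix, and the fragment shader just writes a color for every covered pixel.

    struct vec4 { float x, y, z, w; };

    /* Vertex shader: transform the vertex by the modelview matrix
     * (column-major 4x4, the way OpenGL stores it). */
    static struct vec4 vertex_shader(const float mv[16], struct vec4 v)
    {
        struct vec4 o;
        o.x = mv[0]*v.x + mv[4]*v.y + mv[8]*v.z  + mv[12]*v.w;
        o.y = mv[1]*v.x + mv[5]*v.y + mv[9]*v.z  + mv[13]*v.w;
        o.z = mv[2]*v.x + mv[6]*v.y + mv[10]*v.z + mv[14]*v.w;
        o.w = mv[3]*v.x + mv[7]*v.y + mv[11]*v.z + mv[15]*v.w;
        return o;
    }

    /* Fragment shader: every fragment of the triangle gets its
     * interpolated color written out as-is. */
    static struct vec4 fragment_shader(struct vec4 interpolated_color)
    {
        return interpolated_color;
    }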

So Gallium3D was, in essence, creating and optimizing itself at run-time. Which led many Free Software enthusiasts to create, wear and rarely even wash shirts with the slogan "We might not have a billion-dollar budget but our graphics framework is smarter than all the people in your company together".

Then in the year 2113 Gallium3D got bored with just creating graphics and took control of the entire world, which, realistically speaking, wasn't hard to do because we were willingly immersing ourselves in the worlds it was creating for us anyway.

But that's still many, many years away from our boring present. So for now, while you wait for sex robots, dinners in a tube and a world without insects (or, for that matter, any animals at all), you can just go get Gallium3D and play with the parts where LLVM is used. At the moment that's only the software paths, but the fifth of November is going to mark the first day of work on code-generating directly for GPUs using LLVM.

Remember, remember the fifth of November... (oh, come on that's one heck of an ending)

13 comments:

Anonymous said...

Luckily enough, cheese in London is not brown, and you can get along quite well as a vegetarian. (Animals, why?)

So yeah, no imported cheese for you, Mr Rusin, but keep rocking anyway.

-- sebas

Anonymous said...

I'm waiting for the parentheses to reappear. Then perhaps someone will put a wall of blinking lights on the side of their machine, one per processing element. Then someone will write Tetris for those lights, and it will be optimized but still playable.

We're on the second major re-invention. It won't be until the third that these ideas take off. Sorry.

Leo S said...

Wow, that made no sense to me. Can we go back to the relax and experiment kind of love? However, I did like the "First Tri" window. Very punny.

Could you explain what Gallium3D means to users? Preferably in Haiku form, that's how I learn best.

Anonymous said...

Zack, you rock my world.

johndrinkwater said...

Oi, stop corrupting British historical references with immensely awesome 3D graphical optimisation work!

As ever, I am taken aback. Please sir, I want some more.

Alex Fuller said...

Now what about them geometry shaders? You cannot take over the world Skynet-style without them geometry shaders!

In layman's terms of what is happening: 3D dots and 2D texture dots programming goes in, uber-fast rendering on the GPU comes out, with uber-optimizations! These are the raw stream processes your modern GPU overlords are doing to make sexy visuals, and if yours can't hack it? Your trusty CPU will save you seamlessly so you don't miss out!

This is much faster than the current way of doing things, which is fairly fixed-function (oldskool graphics) with shaders stuck on top. That is OK for older video cards, but newer ones aren't used to their full potential (plus, if your video card is strictly fixed/old anyway, Gallium3D's vertex/fragment pipes will do it in software).

From what I gather, any cool graphics API, whether it be OpenGL 2.0, 3.0, OpenVG, that one grandma invented, _wined3d_, etc., can sit on top and reap the benefits of the turbo vertex & fragment pipes exposed by Gallium3D :) At the moment, with the current stack, you only have Mesa (OpenGL 2.1) to play with.

Please correct me if I'm wrong in my ramblings, Zack; I'm not a GPU programmer, but I'm a huge enthusiast who has done a little research over the years :)

Anonymous said...

QOTD? week? month?

"I'm not talking about the "roll over onto your stomach, take a deep breath, relax and lets experiment" kind of love, just pure and beautiful love." - Zack Rusin

Anonymous said...

Since Glucose is an acceleration architecture for X, I assume that if the new Gallium3D drivers want to, they can take advantage of it. Or what?

Anonymous said...

Other way around, Matt. Gallium is a driver architecture. It defines how the graphics APIs (like OpenGL, which glucose uses to accelerate the desktop) communicate with the GPU. One of Gallium's huge advantages is that it isn't tied to a particular API, so the same driver can be used to accelerate OpenGL 2, 3, Glucose, or what-have-you.

The thing I've thought of is: if you could accelerate WineD3D directly with the Gallium driver, it wouldn't be necessary to go through the convoluted process of trying to translate DirectX commands into OpenGL. Just think! A native DirectX implementation for Linux! You might then be able to run Windows games with NO performance hit, or potentially with even better performance.

Am I right about this?

Anonymous said...

I can't wait until this thing is stable enough to try throwing BrookGPU's newest OpenGL backend at it.
Optimizing shaders could be very beneficial for GPGPU too!

Also, speaking of shaders and GPGPU: in addition to the geometry shaders mentioned by other commentators, will it be possible to expose the random-access abilities of modern GPUs?

Most current GPUs can do random reads and writes. This is very important for GPGPU because it enables "scatters", i.e. a stream process can write its output to a random address instead of in order.

Sadly this ability isn't exposed in graphics APIs (neither OGL nor DX3D).
So currently, one either has to use proprietary toolkits (CUDA for nVidia and CTM/CAL for ATI) or has to resort to contrived ways of achieving the same result by abusing the way vertices, fragments and particles work (basically using vertices to build a list of addresses to write to, and then using a second pass to write to those addresses in the target buffer as particles).
If Gallium3D could manage a way to export this kind of functionality, it would be possible to do fully hardware-accelerated scatter/gather in multiple-target frameworks like Brook or RapidMind (with LLVM-optimized kernels).

Anonymous said...

...in fact, it could be so interesting that if you manage to give a GPGPU-helping hand, we could all start leaving nice sandwiches and good googles for you in your next bush.

Another question:
What is the relationship between Gallium3D and DRI2?
I've read on DRI's website that they are planning to introduce a newer DRI framework.
Will it co-exist with Gallium3D (because both work on separate levels, and Gallium3D would be a new, nicer back-end to Mesa, with the added benefit of being able to be a back-end to other technology like WineD3D), or are they mutually exclusive (because I'm afraid Gallium is attempting to move some steps out of the DRI, and maybe the DRI2 people are thinking of adding more functionality to it)?

Thank you.

berkus said...

Blast from the past, Zack, where are moar updates?

Anon said...

Is Gallium still using LLVM as extensively in GPU paths, or has its usage been scaled back, as suggested in http://news.slashdot.org/comments.pl?sid=1761364&cid=33322340 ?