Thursday, May 24, 2007

Mesa and LLVM

I've been toying for a while with the idea of rewriting the programmable pipeline in Mesa. The most obvious reason is that fragment shaders are absolutely crucial to modern graphics. We use them extensively in Qt, and I'm using them all over the place in my special effects library. As those small (mostly GLSL-based) programs become more and more complicated, the need for an extensive compiler framework with especially good optimization support becomes apparent. We already have such a framework in LLVM.

I managed to convince the person I consider an absolute genius when it comes to compiler technology, Roberto Raggi, to donate some of his time and rewrite the programmable pipeline in Mesa together with me. The day after I told Roberto that we need to lay Mesa on top of LLVM, I got an email from him with a GLSL parser he wrote (and holy crap, it's so good...). After picking up what was left of my chin from the floor, I removed the current GLSL implementation from Mesa, integrated the code Roberto sent me, did some ninja magic (which is part of my job description) and pushed the newly created git repository to

So between layers of pure and utter black magic (of course not "voodoo"; voodoo and graphics just don't mix), what does this mean, you ask (at least for the purpose of this post)? As I pointed out in the email to the Mesa list last week:
  • it means that Mesa gets an insanely comprehensive shading framework,
  • it means that we get insane optimization passes for free (strong emphasis on "insane"; they're so cool I drool about being able to execute shading languages with this framework, and I drool very rarely nowadays... and largely in a fairly controllable fashion),
  • it means we get a well-documented and well-understood IR,
  • it means we get maintenance of parts of the code for free (the parts especially difficult for graphics people),
  • it means that there's less code in Mesa,
  • it means that we can, basically for free, add execution of C/C++ code on GPUs (and soon Python, Java and likely other languages), because frontends for those are already available or in the works for LLVM (and even though I'm not a big fan of Python, the idea of executing it on a GPU gives me goose-bumps the way only some Japanese horror movies can).
I think it has all the potential to be by far the best shading framework in any of the OpenGL implementations out there. Now, having said that, there are a lot of tricky parts that we haven't even begun solving. Most of them are due to the fact that a lot of modern graphics hardware is, well, to put it literally, "freaking wacky" (it's a technical term). We'll need to add a pretty extensive lowering pass, and most likely some kind of transformation pass that does something sensible with branch instructions for hardware that doesn't support them. We'll cross that bridge once we get to it. Plus, we'll need to port drivers... But for now we'll bask in the sheer greatness of this project.
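To make the branch problem concrete, here is a minimal sketch of the classic trick such a transformation pass can use: if-conversion, where both sides of a branch are evaluated and a compare-generated mask selects the result. This is purely illustrative (in Python, with made-up names); the real pass would operate on LLVM IR, not on source code.

```python
# Hypothetical illustration of if-conversion, the usual way a lowering
# pass removes branches on hardware without flow-control instructions.

def shade_with_branch(cond: bool, a: float, b: float) -> float:
    # What the shader author wrote: requires real branch instructions.
    if cond:
        return a
    return b

def shade_if_converted(cond: bool, a: float, b: float) -> float:
    # What the lowering pass emits: a branch-free select.
    mask = 1.0 if cond else 0.0  # in hardware, a compare producing 0.0/1.0
    return mask * a + (1.0 - mask) * b  # both operands always evaluated

# The transformation must preserve the result for every input:
for cond in (True, False):
    assert shade_with_branch(cond, 2.0, 5.0) == shade_if_converted(cond, 2.0, 5.0)
```

The obvious cost is that both sides of the branch are always evaluated, which is exactly the kind of trade-off a lowering pass has to weigh per target.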

Ah, the git repository is at ;a=shortlog;h=llvm , though Roberto and I have tons of unpushed changes. Of course, this is an ongoing research project that both Roberto and I work on in our very limited spare time (in fact, Roberto now seems to have almost what you'd call a "life". Apparently those take time. Personally, I still enjoy sleepless nights and a diet of starvation patched by highly suspicious activities in between, which, by the way, does wonders for my figure; if this doesn't work out, I'll try my luck as a male super-model), so we can only hope that it will all end up as smoothly as we think it should. And in KDE 4, most graphics code will be able to utilize eye-popping effects with virtually no CPU price.


peppelorum said...

Not that I understand everything in the post, but it seems just excellent =)

But "And in KDE 4 most graphics code will be able to utilize eye-popping effects with virtually no CPU price" I understood completely =)

illissius said...

While nifty, wouldn't Python on a GPU be shit-fucking slow? GPUs are good at parallel, not sequential, processing, and an interpreted language sounds like it would exacerbate this manyfold.
This seems more like "because I can" than actually useful, to me. But it *is* totally awesome. (Like, holy shit. It's like you leapfrogged the entire rest of the industry.)

Anonymous said...

You the MAN, Zack san.

Anonymous said...

illissius, I don't think you understand. Python wouldn't be interpreted on the GPU. From what I gather, someone at LLVM is developing a Python frontend which would turn Python into LLVM bytecode and run it through the same code generation path that the LLVM C++ frontend (a retargeted g++) uses. IOW, the GPU won't know the difference.

Scorp1us said...

What is the special effects library to which you refer?

Saem said...

@illissius: Think of Nvidia's C derivative for shading: the program gets compiled down and is then run. The GPU shouldn't have to play interpreter; it merely executes the instructions produced by the compiler (LLVM), and the compiler itself runs on the CPU.

IIRC, Apple is using LLVM for exactly this. Also, from what I remember of LLVM's mailing lists a long time ago, some graphics company is also using LLVM in this way.

Anders Norgaard said...

Is this code the "proposal" that Keith Whitwell talks about here?


Brandybuck said...

I'll be buying a new computer in a couple of months. What video cards support both Open Source drivers and all the new Qt/KDE eyecandy?

Anonymous said...

"I'll be buying a new computer in a couple of months. What video cards support both Open Source drivers and all the new Qt/KDE eyecandy?"

Current video cards? None.

(The Intel IGP isn't a "card", and its performance is abysmal anyway.)

Patrice said...

Wow! Double Wow! No... I mean Wow.

Lukas Fittl said...

Great, so we can also finally replace the nvidia-cg-toolkit with a free alternative without any problems :)

(People who want to be compatible with both DirectX and OpenGL use Cg, and IMHO the non-free NVIDIA toolkit is the only one available atm.)

Anonymous said...

How does this compare to MS Research Accelerator?
and (see Accelerator section)

Zack said...

Hi, thank you for all the comments!

To answer all of them in one go:
1) I'll be talking about the special effects on akademy.
2) Yes, Apple is already using LLVM in their OpenGL implementation but only for the software path.
3) Partially, Keith is working on a lot of other amazing things and he's focusing on different areas right now.
4) Personally, I'll be working on Intel drivers first, and Intel hardware, especially the 965, is what I'd recommend, especially for general desktop usage (they're not there yet, though). If closed source doesn't bother you, then the NVIDIA drivers are still the best.
5) Microsoft Research Accelerator is focusing on the general parallel aspects of programming that map to next-gen hardware. We're focusing more on a general architecture that would permit us to optimize dedicated shading languages incredibly well while providing the means to run programs in any arbitrary language (most of them by definition not parallel at all) on the GPU.

Anonymous said...

I bestow the blue crayon of honor upon thee, turkey