I managed to convince the person I consider an absolute genius when it comes to compiler technology, Roberto Raggi, to donate some of his time and rewrite the programmable pipeline in Mesa together with me. The day after I told Roberto that we needed to lay Mesa on top of LLVM, I got an email from him with a GLSL parser he wrote (and holy crap, it's so good...). After picking up what was left of my chin from the floor, I removed the current GLSL implementation from Mesa, integrated the code Roberto sent me, did some ninja magic (which is part of my job description) and pushed the newly created git repository to freedesktop.org.
So, between layers of pure and utter black magic (of course not "voodoo"; voodoo and graphics just don't mix), what does this mean, you ask (at least for the purposes of this post)? As I pointed out in the email to the Mesa list last week:
- it means that Mesa gets an insanely comprehensive shading framework,
- it means that we get insane optimization passes for free (strong emphasis on "insane": they're so cool I drool about being able to execute shading languages with this framework, and I drool very rarely nowadays... and largely in a fairly controllable fashion; there's a small illustrative sketch after this list),
- it means we get a well documented and well understood IR,
- it means we get maintenance of parts of the code for free (the parts that are especially difficult for graphics people),
- it means that there's less code in Mesa,
- it means that we can, basically for free, add execution of C/C++ code on GPUs, and soon Python, Java and likely other languages, because frontends for those are already available or in the works for LLVM (and even though I'm not a big fan of Python, the idea of executing it on a GPU gives me goose-bumps the way only some Japanese horror movies can).
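To make the "optimization passes for free" point a bit more concrete, here's a minimal, hypothetical sketch of the kind of thing LLVM hands you: build a tiny shader-like function through the C++ IRBuilder API and run one of the stock passes over it. This is not code from our repository; the function name, the trivial "shader" and the use of a recent LLVM C++ API (the new pass manager) are purely illustrative assumptions.

```cpp
// Hypothetical sketch (recent LLVM C++ API, not the 2007-era one):
// build a tiny shader-like function and run one stock pass over it.
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/InstCombine/InstCombine.h"

using namespace llvm;

int main() {
  LLVMContext Ctx;
  Module M("shader_module", Ctx);
  IRBuilder<> B(Ctx);

  // Roughly: float brighten(float c) { return c * 1.0f; }
  // The multiply by 1.0 is deliberately redundant.
  FunctionType *FTy =
      FunctionType::get(B.getFloatTy(), {B.getFloatTy()}, false);
  Function *F =
      Function::Create(FTy, Function::ExternalLinkage, "brighten", &M);
  BasicBlock *Entry = BasicBlock::Create(Ctx, "entry", F);
  B.SetInsertPoint(Entry);
  Value *Scaled = B.CreateFMul(F->getArg(0),
                               ConstantFP::get(B.getFloatTy(), 1.0), "scaled");
  B.CreateRet(Scaled);

  // Standard pass-manager setup, then one of the stock passes;
  // this is where the "free" optimizations come from.
  LoopAnalysisManager LAM;
  FunctionAnalysisManager FAM;
  CGSCCAnalysisManager CGAM;
  ModuleAnalysisManager MAM;
  PassBuilder PB;
  PB.registerModuleAnalyses(MAM);
  PB.registerCGSCCAnalyses(CGAM);
  PB.registerFunctionAnalyses(FAM);
  PB.registerLoopAnalyses(LAM);
  PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);

  FunctionPassManager FPM;
  FPM.addPass(InstCombinePass());
  FPM.run(*F, FAM);

  M.print(outs(), nullptr); // the redundant multiply should be folded away
  return 0;
}
```

The same pass manager can be loaded with dozens of other stock passes (dead code elimination, GVN and so on), which is exactly the "for free" part: we write the GLSL frontend, LLVM does the heavy lifting.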
Ah, the git repository is at http://gitweb.freedesktop.org/?p=users/zack/mesa.git;a=shortlog;h=llvm (Roberto and I have tons of unpushed changes, though). Of course this is an ongoing research project that both Roberto and I work on in our very limited spare time (in fact Roberto now seems to have almost what you'd call a "life". Apparently those take time. Personally I still enjoy sleepless nights and a diet of starvation patched with highly suspicious activities in between, which by the way does wonders for my figure, and if this doesn't work out I'll try my luck as a male super-model), so we can only hope that it will all end up as smoothly as we think it should. And in KDE 4 most graphics code will be able to utilize eye-popping effects with virtually no CPU price.
14 comments:
Not that I understand everything in the post, but it seems just excellent =)
But "And in KDE 4 most graphics code will be able to utilize eye-popping effects with virtually no CPU price" I understood completely =)
While nifty, wouldn't Python on a GPU be shit-fucking slow? GPUs are good at parallel, not sequential, processing, and an interpreted language sounds like it would exacerbate this manyfold.
This seems more like "because I can" than actually useful, to me. But it *is* totally awesome. (Like, holy shit. It's like you leapfrogged the entire rest of the industry.)
You the MAN, Zack san.
illissius, I don't think you understand. Python wouldn't be run on the GPU. From what I gather, someone at LLVM is developing a Python frontend which, together with the backend, would turn Python into LLVM bytecode and run it through the same code generation path as the bytecode the LLVM C++ frontend (a retargeted g++) produces. IOW, the GPU won't know the difference.
What is the special effects library to which you refer?
@illissius: Think NVIDIA's C derivative for shading: the programs get compiled down and are then run. The GPU shouldn't have to play interpreter, merely execute the instructions produced by the compiler (LLVM) -- the compiler will be run by the CPU.
IIRC, Apple is using LLVM for exactly this. Also, from what I remember from a long time ago on LLVM's mailing lists, some graphics company is also using LLVM in a similar way.
Is this code the "proposal" that Keith Whitwell talks about here: http://lists.cs.uiuc.edu/pipermail/llvmdev/2007-May/009110.html ?
I'll be buying a new computer in a couple of months. What video cards support both Open Source drivers and all the new Qt/KDE eyecandy?
"I'll be buying a new computer in a couple of months. What video cards support both Open Source drivers and all the new Qt/KDE eyecandy?"
Current video cards? None.
(the Intel IGP isn't a "card" and its performance is abysmal anyway)
Wow! Double Wow! No... I mean Wow.
Great, so we can also finally replace the nvidia-cg-toolkit with a free alternative without any problems :)
(people who want to be compatible with both DirectX and OpenGL use Cg, and IMHO only the non-free NVIDIA toolkit is available atm.)
How does this compare to MS Research Accelerator?
ftp://ftp.research.microsoft.com/pub/tr/TR-2005-184.pdf
and http://research.microsoft.com/act/ (see Accelerator section)
Hi, thank you for all the comments!
To answer all of them in one go:
1) I'll be talking about the special effects at Akademy.
2) Yes, Apple is already using LLVM in their OpenGL implementation but only for the software path.
3) Partially; Keith is working on a lot of other amazing things and he's focusing on different areas right now.
4) Personally I'll be working on Intel drivers first, and Intel hardware, especially the 965, is what I'd recommend, especially for general desktop usage (they're not there yet though). If closed source doesn't bother you, then the NVIDIA drivers are still the best.
5) Microsoft Research Accelerator focuses on the general parallel aspects of programming that map to next-gen hardware. We're focusing more on a general architecture that lets us optimize dedicated shading languages to an incredible degree while providing the means to run programs in arbitrary languages (most of them by definition not parallel at all) on the GPU.
I bestow the blue crayon of honor upon thee, turkey