I missed you so much. Yes, you. No, not you. You. I couldn't blog for a while and I ask you (no, not you): what's the point of living if one can't blog? Sure, there's the world outside of computers, but it's a scary place filled with people who, god forbid, might try interacting with you. Who needs that? It turns out that I do. I've spent the last week in Portland at the OpenCL working group meeting, which was part of the Khronos F2F.
For those who don't know (oh, you poor souls), OpenCL is what could be described as "the shit". That's the official spelling, but substituting "da" for "the" is considered perfectly legal. A longer description expands the term OpenCL to "Open Computing Language", with an accompanying Wikipedia entry. OpenCL has all the ingredients, including the word "Open" right in the name, to make it one of the most important technologies of the coming years.
OpenCL allows us to tap into the tremendous power of modern GPUs. Not only that, but one can also use OpenCL with accelerators (like physics chips or Cell SPUs) and CPUs. On top of that hardware, OpenCL provides both task-based and data-based parallelism, making it a fascinating option for those who want to accelerate their code. For example, if you have a canvas (Qt Graphics View) and you spend a lot of time doing collision detection, or if you have an image manipulation application (Krita) and you spend a lot of time in effects and general image manipulation, or if you have a scientific chemistry application with an equation solver (Kalzium) and want to make it all faster, or if you have wonky hair and like to dance the polka... OK, the last one is a little fuzzy, but you get the point.
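To give the data-parallel side a concrete face, here is a minimal sketch of what an OpenCL C kernel for a trivial image effect could look like. The kernel name, the brightness effect and the parameters are made up purely for illustration (this is not code from any of the projects above); the point is that each work-item handles a single pixel and the implementation runs as many of them in parallel as the hardware allows.

/* Hypothetical data-parallel kernel: one work-item per pixel. */
__kernel void brighten(__global const float4 *src,
                       __global float4 *dst,
                       const float factor)
{
    size_t i = get_global_id(0);   /* index of the pixel this work-item owns */
    dst[i] = src[i] * factor;      /* per-component multiply */
}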
Make no mistake, OpenCL is a little bit more complicated than just "write your algorithm in C". Albeit well hidden, the graphics pipeline is still at the forefront of the design, so there are some limitations (remember that for a number of really good and a few inconvenient reasons GPUs do their own memory management, so you cannot just move data structures between main and graphics memory). It's one of the reasons that you won't see a GCC-based OpenCL implementation any time soon. OpenCL requires run-time compilation and execution, it allows sharing of buffers with OpenGL (e.g. an OpenCL image data type can be constructed from GL textures or renderbuffers), and it forces code generation for a number of different targets (GPUs, CPUs, accelerators). All those things need to be integrated. For sharing of buffers between OpenGL and OpenCL the two APIs need to go through some kind of a common framework - be it a real library or some utility code that exposes addresses and their meaning to both the OpenGL and OpenCL implementations.
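To illustrate the run-time compilation point: on the host side the kernel source is just a string that the implementation compiles while the program runs, for whatever device it happens to be talking to. Below is a simplified sketch of the standard OpenCL API calls involved (error handling omitted, reusing the hypothetical brighten kernel from above); it's plain client-side API usage, not code from our state tracker.

#include <CL/cl.h>

/* The kernel source travels as a plain string; nothing is compiled
   until the application is actually running on a concrete device. */
static const char *source =
    "__kernel void brighten(__global const float4 *src, "
    "                       __global float4 *dst, const float factor) "
    "{ size_t i = get_global_id(0); dst[i] = src[i] * factor; }";

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);

    /* Run-time compilation: the implementation generates code for the
       selected device (GPU, CPU or accelerator) right here. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &source, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "brighten", NULL);

    /* Sharing with OpenGL would instead create cl_mem objects from GL
       textures or renderbuffers via the cl_gl.h interop entry points. */

    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}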
Fortunately we already have that layer. Gallium3D maps perfectly to the buffer and command management in OpenCL, which shouldn't be surprising given that they both care about the graphics pipeline. So all we need is a new state tracker with a compiler framework integrated to parse and generate code from the OpenCL C language. LLVM is the obvious choice here because, unlike GCC, it has libraries we can use for both parsing and code generation (to be more specific, Clang and LLVM proper). So yes, we started on an OpenCL state tracker, but of course we are far, far away from actual conformance. Being part of a large company means that we have to go through extensive legal reviews before releasing something in the open, so right now we're patiently waiting.
My hope is that once the state tracker is public the work will continue at a much faster pace. I'd love to see our implementation pass conformance with at least one GPU driver by summer (which is /hard/ but definitely not impossible).
27 comments:
It's great to see that you guys are working on an OpenCL implementation!
The internet has indeed been a dark, dreary, and nigh-on-miserable place without your occasional quirky graphics-ninja input added to the mix. Welcome back sir! :)
As one of your long time secret admirers, I am glad to have you back!
OpenCL is one of the most exciting developments in computing right now. I am very glad you are recognizing that.
While kudos and curtsies belong to the OpenCL, Clang and LLVM developers and supporters - people, you can mention Apple, you will not lose your SOUL - don't be a hypocrite
THX for your interest in OpenCL
Darwin OS
OpenCL on a graphics driver?
Why? Isn't it more logical and clean to have a single OpenCL implementation as a state tracker on Gallium3D, beside the usual ones like Mesa?
Of course, directing Mesa traffic through an OpenCL layer could make the OpenCL layer act as a central dispatcher that could have profiles of different CPUs and GPUs... ;-)
@Anonymous:
It will largely be a state tracker, but LLVM has to know what to do with the C-ish code that it gets and is told to compile. The output of that is going to depend on the architecture of each graphics card, because they don't all share a common architecture the way x86 chips do.
You are now working at VMware, aren't you? May I ask how many people there now work on Gallium3D/X.org etc.?
thanks in advance
@Matthew:
Yep, like storing all the different GPU and CPU ISAs in an LLVM IR table that the OpenCL state tracker would then evaluate (instruction cycles of the different IRs, the shortest cycle winning the task) and dispatch.
Of course that might bring a pretty nasty overhead/latency to the dispatcher... too bad there is no Hydra/PLX bridge chip with such dispatch firmware/hardware logic available on the PC architecture yet, so for now such stuff would stay at the CPU software level...
> a scary place filled with people that,
> god forbid, might try interacting with
> you. Who needs that?
> It turns out that I do.
you're really scaring me...
Sounds like great work, I can't wait to get my hands on a working implementation of OpenCL. Thanks for filling us in on the latest developments!
@Darwin OS: switch to decaf and breathe... In case this wasn't obvious by, you know, like reading: I didn't write a blog post explaining the history of OpenCL; if I had, surely I would have mentioned Apple. I did post a link to the Wikipedia entry that talks about Apple. Unless of course you claim that every single time anyone mentions the name "OpenCL" it has to be prefixed with the phrase "Apple started this technology". In which case I'd like to point out that I also didn't add "SGI started this technology" when mentioning OpenGL, and you didn't feel it was necessary to insult me for that. That begs the question: what do you have against SGI? Why do you hate SGI? WHY? Stop all that hate!
@code generation questions: it's not that bad, we generate code in a similar way to how we do it for all the other shaders. In Gallium the PTX-like layer is TGSI and LLVM IR.
@Zack
Ah, TG Shader Infrastructure... Why had I forgotten about that? That's a logical layer to do ISA stuff, now if only it would evaluate the whole CPU+GPU cycle timing stuff. Actually working a bit like a kernel syscall layer... ;-)
Sweeeeeeet.
Sounds interesting :)
I hope you don't expect OpenCL via Gallium to become a 'standard' mainstream thing... sure, it's great for open source drivers where development manpower is low - and to get a quick/dodgy implementation up and running sooner rather than later. But any IHV writing their own drivers will immediately see the pitfalls of forcing them down a primarily GPU-oriented driver layer, especially those who don't actually make GPUs (Texas Instruments, ARM, IBM, etc.) - not to mention the fact that even GPU drivers don't want to be forced down a GPU-centric path. (You hardly want an OpenCL implementation to 'require' OpenGL, hardware accelerated or not.)
In terms of open source OpenCL drivers though, I guess it's a good start if you don't care about the overhead involved...
Hi,
I'm working on a Linux implementation of OpenCL for zip/unzip, tar or other things like that...
If anyone needs any help, I'm ready to help you make an OpenCL project.
Regards,
Thomas.
So will we see an OpenCL kernel to LLVM IR compiler anytime soon?
@Mitch: that's a hard comment to argue with. Mainly because it's utterly wrong and I'd obviously need to start explaining how Gallium works to make a point. I refuse to do that in comments. Then again, maybe instead of writing long and stupid comments you could read up yourself on all the technologies involved...
Do you think OpenCL could be implemented through Gallium3D with a pure-CPU backend, for systems without a GPU? Perhaps this would use TGSI's CPU-based execution?
Do you think this would introduce an unacceptable layering overhead?
@Ben Schwartz: I think it will work perfectly. In fact it already kind of does, as this is how it works with the Gallium3D Cell driver.
For an efficient CPU implementation, though, we'll simply need to make some small changes to the softpipe driver (the CPU-based backend for Gallium3D), something all the other APIs will profit from as well.
So yes, we're aiming at a very efficient GPU/CPU/accelerator implementation.
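To give a feel for what that means from the application side, here's a minimal sketch of how the very same code can end up on a GPU, an accelerator or the CPU, depending on what the implementation exposes. The pick_device() helper is hypothetical and error handling is mostly omitted; it's just the regular clGetDeviceIDs query tried against the different device types.

#include <stdio.h>
#include <CL/cl.h>

/* Prefer a GPU, fall back to an accelerator (e.g. Cell SPUs) or the CPU;
   everything after device selection stays exactly the same. */
static cl_device_id pick_device(cl_platform_id platform)
{
    static const cl_device_type preferred[] = {
        CL_DEVICE_TYPE_GPU,
        CL_DEVICE_TYPE_ACCELERATOR,
        CL_DEVICE_TYPE_CPU,
    };
    cl_device_id device;

    for (unsigned i = 0; i < sizeof(preferred) / sizeof(preferred[0]); ++i)
        if (clGetDeviceIDs(platform, preferred[i], 1, &device, NULL) == CL_SUCCESS)
            return device;
    return NULL;
}

int main(void)
{
    cl_platform_id platform;
    char name[256];

    clGetPlatformIDs(1, &platform, NULL);
    cl_device_id device = pick_device(platform);
    if (!device) {
        fprintf(stderr, "no OpenCL device found\n");
        return 1;
    }
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("running on: %s\n", name);
    return 0;
}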
great work!
so, should we expect a Clang version for OpenCL -> LLVM anytime soon?
Zack, since you started speaking about getting an LLVM compiler into Gallium3D, several of us went raving about GPGPU, thinking that Gallium3D would make a perfect back-end for this...
Now OpenCL has come as the perfect API for GPGPU to stack above Gallium3D.
And today I read that you're indeed developing OpenCL for Gallium3D.
THIS REALLY FILLS ME WITH A WARMTH OF HAPPINESS !
May your inner Ninja/Prince/Earl/King-ship be blessed for lots of generations (which you, as having exactly 0 babies, might claim for your very own use instead of the use of non-existent generations).
I am looking forward to seeing the OpenCL implementation as open source. I hope that Apple will release its implementation back to the LLVM community.
What are going to be the hardware requirements for OpenCL to work in Gallium?
ps. You're our hero Zack-san! Banzai! banzai!
Is there any news or advancement on OpenCL support in Gallium?
We are eagerly waiting to access GPGPU resources using a full open source stack...thanks for any reply
Hi Zack, I am currently working on an OpenCL project and need your help with the coding... could you please reply to the following mail id:
nikhil.niki2006@gmail.com
or
Hello,
Sorry to post on such an old blog entry, but it's the only means I found to contact you.
I'm the Google Summer of Code student working on Clover, and even after reading its source code carefully, I don't know its license.
So, what is the license under which Clover is released?
Thanks,
Denis Steckelmacher.