There are only two tasks harder than writing Free Software graphics drivers. One is running a successful crocodile petting zoo; the other is wireless bungee jumping.
In general, writing graphics drivers is hard. The number of people who can actually do it is very small, and the ones who can do it well are usually doing it full-time already. Unless the company those folks work for supports open drivers, the earliest anyone can start working on open drivers is the day the hardware is officially available. That's already about two years too late, maybe one if the hardware is just an incremental update. Obviously not a lot of programmers have the motivation to do that: it's a small subset of the already very small subset of programmers who can write drivers. Each of them is worth their weight in gold (double, since they're usually pretty skinny).
Vendors who ship hardware with GNU/Linux don't see a lot of value in having open graphics drivers on those platforms; all the Android phones are a good example of that. Then again, Android decided to use Skia, which right now is simply the worst of the vector graphics frameworks out there (Qt and Cairo being the two other notables). Plus having to wrap every new game port (all that code is always C/C++) with the NDK is a pain. So the lack of decent drivers is probably not their biggest problem.
MeeGo has a much better graphics framework, but we're a ways off from seeing anyone ship devices running it, Nokia in the past did "awesome" things like releasing GNU/Linux devices with a GPU but without any drivers for it (N800, N810), and Intel's Poulsbo/Moorestown graphics driver woes are well known. On the desktop side it seems that Intel folks are afraid that porting their drivers to Gallium will destabilize them. Which is certainly true, but the benefits of doing so (multiple state trackers, cleaner driver code, being able to use Gallium debugging tools like trace/replay/rbug, and nicely abstracting the API code from the drivers) would be well worth it and hugely beneficial to everyone.
As it stands we have an excellent framework in Gallium3D but not a lot of open drivers for it. Ironically it's our new software driver, llvmpipe, or more precisely a mutation of it, which has the potential to fix some of our GPU issues in the future. With the continuing generalization of GPUs my hope is that all we'll need is DRM code (memory management, kernel modesetting, command submission) and an LLVM->GPU code generator. That's not a trivial amount of code by any stretch of the imagination, but it's smaller than what we'd need right now, and once it had been done for one or two GPUs it would certainly become a lot simpler. Plus GPGPU will eventually make the latter part mandatory anyway. Having that would get us a working driver right away, and after that we could play with texture sampling and vertex paths (which will likely stay as dedicated units for a while) as optimizations.
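To make that a bit more concrete, here is a minimal sketch of the kind of translation such a code generator would perform, written against the plain LLVM-C API rather than Mesa's actual gallivm helpers. The function name and the standalone "shader_madd" are purely illustrative, and the hypothetical LLVM->GPU backend would consume the resulting IR instead of the host JIT that llvmpipe uses today.

    /* Illustrative only: build LLVM IR for a single shader-style
     * multiply-add (a * b + c).  This is the per-instruction job of a
     * shader-IR -> LLVM translator; an LLVM->GPU code generator would
     * then lower the module to native GPU instructions.
     * Builds against the LLVM-C headers; link with the LLVM core library. */
    #include <llvm-c/Core.h>

    static LLVMValueRef build_madd(LLVMModuleRef module)
    {
        LLVMTypeRef f32 = LLVMFloatType();
        LLVMTypeRef params[3] = { f32, f32, f32 };
        LLVMTypeRef fn_type = LLVMFunctionType(f32, params, 3, 0);
        LLVMValueRef fn = LLVMAddFunction(module, "shader_madd", fn_type);

        LLVMBasicBlockRef entry = LLVMAppendBasicBlock(fn, "entry");
        LLVMBuilderRef builder = LLVMCreateBuilder();
        LLVMPositionBuilderAtEnd(builder, entry);

        /* mul = a * b;  madd = mul + c;  return madd */
        LLVMValueRef mul = LLVMBuildFMul(builder, LLVMGetParam(fn, 0),
                                         LLVMGetParam(fn, 1), "mul");
        LLVMValueRef madd = LLVMBuildFAdd(builder, mul,
                                          LLVMGetParam(fn, 2), "madd");
        LLVMBuildRet(builder, madd);

        LLVMDisposeBuilder(builder);
        return fn;
    }

    int main(void)
    {
        LLVMModuleRef module = LLVMModuleCreateWithName("shader");
        build_madd(module);
        LLVMDumpModule(module); /* dump the IR; a GPU backend would codegen it instead */
        LLVMDisposeModule(module);
        return 0;
    }

The translation side is the easy, reusable half; the per-GPU LLVM backend sitting behind it is where the real work, and most of the code, would go.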
Of course it would be even better if a company shipping millions of devices with GNU/Linux wanted a working graphics stack from the bottom up (memory management, kernel modesetting, Gallium drivers with multiple state trackers, optimized for whatever 2D framework they use), but that would make sense. "Is that bad?" you ask. Oh, it's terrible, because the GNU/Linux graphics stack on all those shipped devices is apparently meant to defy logic.
21 comments:
"On the desktop side it seems that Intel folks are afraid that porting their drivers to Gallium will destabilize them."
Haha, good one!
Writing GPU drivers is not harder than writing (or, in the Nvidia case, reverse engineering) network protocols. And there are millions of examples of doing that successfully.
GPUs aren't hard, you guys just like to pretend they are so the rest of the world thinks you're even more awesome than you already are.
Well, millions of examples seems like too much to me. On network protocols, in the case of Samba, it took years to get where they are now. Only recently can they do full Active Directory, and only in the experimental branch.
@Benjamin Otte: haha, right, like the reverse-engineered Microsoft Exchange protocol, which IIRC was released in 1996 and is really rocking the Free Software world in 2010, or Samba, which IIRC was started in 1991. So yeah, you're right, it's trivial, it just takes 20 or 30 years to do. On that schedule the R300 driver will be awesome in 2022 or so; you might want to get one on eBay now, because by then museums will have a hard time parting with them.
It is frustrating and not acceptable; the future will be made without closed source. What can normal users do right now?
> What can normal users do right now?
Stop purchasing Nvidia hardware.
Haha, that is not possible. Nvidia provides the most capable drivers at the moment, let's face it. I understand that Intel provides open source drivers for its integrated chips and that ATI helps work on an open source driver... but for now those are the less capable alternatives.
osfight.de:
If you are not willing to let go of "the most capable drivers at the moment" to support free and open source graphics drivers, that's an entirely valid position to take.
But do not mistake this to mean that it is "not possible" to avoid buying and using Nvidia hardware on GNU/Linux. On the order of several hundred thousand people prove you wrong on that point.
Zack:
Thank you for your hard work. I hope more will be able to join you, and drive progress forward.
I'd rather know how to support an evolutionary open source process without abandoning most of my hardware privileges.
Wireless bungee jumping is actually not that hard. The hard part is becoming really good at it.
That makes it sound as if processor makers like to design and build the hardware of the future today, to be run by the driver code of the future, in the future.
Shouldn't manufacturing the circuit logic be harder to accomplish than writing the code it runs?
I'd be curious to hear some more detailed thoughts from you about Skia.
Take the wires you're not using for bungee jumping and tie the crocs' jaws shut. Problem solved.
Wait, someone is actually working on R300? Last I looked into it, ATI was releasing specs for everything newer than it and the drivers were completely terrible. Oh, I don't know how to support multisampling, let's do it in software!
To be fair, the author of Samba is pretty much a genius, not only at programming and software design, but he also has degrees in mathematics and theoretical physics, and has a singular drive and determination to solve problems. If graphics drivers had taken his interest rather than network protocols, you can bet open source graphics drivers would be a lot further along than they are now.
Skia worse than Cairo? I don't agree. With Skia you can choose between integer or float math, there is code optimized for ARM, and the C++ API is better than cairomm's.
@last Anonymous: I'm sorry if what I said about Skia sounded like an opinion - it wasn't. I don't really care which API you prefer. The argument about integer and ARM optimizations is absolutely ridiculous, because if in 2010 what you expect from a graphics API is "well optimized for the CPU, not even touching the GPU", then you've already lost, as those numerous HTML5 benchmarks comparing IE9 (D2D), Firefox (Cairo) and Google Chrome (Skia) show.
Hi, what OpenVG implementation does MeeGo support?
ShivaVG, or something else?
Small hint in case you want to try something more relaxing in your spare time:
The jaws of crocodiles are optimized for shutting fast and hard; they are very weak at opening. A small rope suffices to DoS a crocodile. Brain over matter. :-)
What does the crocodile story mean?
When will Zack post again?
Someone - a Google employee seemingly - has recently added a GPU backend to Skia:
code.google.com/p/skia/source/browse/trunk/gpu/src
Perhaps this is the way to go for Android? Adding separate back-ends for the CPU and the GPU, so that optimizations for each rendering path can be used without affecting the other, thus keeping support for the cheaper CPU-based rendering alongside the better-looking but more (dollar) expensive (because of GPU driver licenses) GPU-based rendering?
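For what it's worth, a rough sketch of that split in plain C (these names are hypothetical, not Skia's actual classes): keep both rendering paths behind one small interface and pick one at runtime.

    /* Hypothetical sketch, not Skia's real API: a CPU raster path and a
     * GPU path behind the same interface, so each can be optimized
     * without touching the other. */
    #include <stdio.h>

    struct render_backend {
        const char *name;
        void (*fill_rect)(int x, int y, int w, int h);
        void (*flush)(void);
    };

    static void cpu_fill_rect(int x, int y, int w, int h)
    {
        /* a software rasterizer would write pixels into a memory buffer here */
        printf("CPU raster: %dx%d rect at (%d,%d)\n", w, h, x, y);
    }

    static void cpu_flush(void)
    {
        /* nothing to submit on the CPU path */
    }

    static void gpu_fill_rect(int x, int y, int w, int h)
    {
        /* a GPU path would batch this into a vertex buffer / command stream */
        printf("GPU batch: %dx%d rect at (%d,%d)\n", w, h, x, y);
    }

    static void gpu_flush(void)
    {
        /* submit the batched commands to the driver */
        printf("GPU flush\n");
    }

    static const struct render_backend cpu_backend = { "cpu", cpu_fill_rect, cpu_flush };
    static const struct render_backend gpu_backend = { "gpu", gpu_fill_rect, gpu_flush };

    int main(void)
    {
        /* choose a path at runtime, e.g. depending on whether a usable GPU driver exists */
        int have_gpu = 0;
        const struct render_backend *rb = have_gpu ? &gpu_backend : &cpu_backend;

        rb->fill_rect(10, 10, 64, 32);
        rb->flush();
        return 0;
    }

Gallium's pipe driver interface works roughly the same way in C: state trackers call into a struct of function pointers and never care which implementation sits underneath.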