If you look at the frameworks we have and the way 2D graphics is heading, it's obvious everything we render is really a polygon. Rendering any other primitive is more an optimization than anything else. There are a few reasons for that. One of the more important ones is the complexity that is introduced with stroking (I'd crack a joke about the difficulties of stroking here but let's skip that). For example, let's look at a rectangle drawn with a linear gradient stroke.
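In Qt terms that's just a pen carrying a gradient brush; a minimal sketch (the helper name, colors and pen width are made up for illustration):

#include <QPainter>
#include <QPen>
#include <QLinearGradient>

// Hypothetical helper: a rectangle whose outline (the stroke)
// carries a linear gradient.
void drawGradientStrokedRect(QPainter &p, const QRectF &rect)
{
    QLinearGradient gradient(rect.topLeft(), rect.bottomRight());
    gradient.setColorAt(0.0, Qt::blue);
    gradient.setColorAt(1.0, Qt::green);

    p.setBrush(Qt::yellow);                 // the fill
    p.setPen(QPen(QBrush(gradient), 10.0)); // the gradient stroke, 10 units wide
    p.drawRect(rect);
}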
What should be obvious is that it's actually two shapes: we have the fill, but we also have the stroke. Here be dragons, because the stroke is not a rectangle anymore! It's a polygon with a hole! So rendering this seemingly simple shape suddenly becomes rather complex.
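Qt in fact exposes this step directly: QPainterPathStroker turns a path into the outline of its stroke. A sketch (the helper name is mine):

#include <QPainterPath>
#include <QPainterPathStroker>

// The stroke of a rectangle is itself a closed shape: an outer
// rectangle with the inner rectangle cut out of it.
QPainterPath strokeOutline(const QRectF &rect, qreal penWidth)
{
    QPainterPath rectPath;
    rectPath.addRect(rect);

    QPainterPathStroker stroker;
    stroker.setWidth(penWidth);

    // createStroke() returns the stroke's outline as a path of its
    // own; for a rectangle that is exactly the polygon with a hole.
    return stroker.createStroke(rectPath);
}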
Now a lot of people will notice that a simple hack could be to draw two rectangles instead: first a solid outer rectangle filled with the stroke's gradient, and then the fill rectangle on top of it, composing to the desired result. Then of course you need to special-case, because if the fill has an alpha the two rectangles blend together and the fill appears to pick up a gradient of its own, which is clearly incorrect. Furthermore, if we're overlaying shapes that optimization is right out. So on this extremely simple example we can see that special cases accumulate very quickly and the code gets really convoluted just as fast. The question becomes whether making the code so ugly and error prone (especially as the number of conditions which affect what kind of rendering path the code takes keeps growing) is a good trade-off for small optimizations.
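For illustration, here is roughly what that hack looks like and where the alpha case breaks it (a sketch; the function name is made up, and the geometry assumes a pen centered on the rectangle's edge):

#include <QPainter>

// Hypothetical sketch of the two-rectangle hack.
void drawStrokedRectHack(QPainter &p, const QRectF &rect, qreal penWidth,
                         const QBrush &strokeBrush, const QBrush &fillBrush)
{
    const qreal half = penWidth / 2.0;
    p.setPen(Qt::NoPen);

    // 1. Outer rectangle, covering both the stroke and the fill area.
    p.setBrush(strokeBrush);
    p.drawRect(rect.adjusted(-half, -half, half, half));

    // 2. Fill rectangle drawn on top.
    p.setBrush(fillBrush);
    p.drawRect(rect.adjusted(half, half, -half, -half));

    // If fillBrush is semi-transparent, step 2 blends with the
    // gradient from step 1 instead of with the background, so the
    // fill appears to have a gradient of its own.
}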
Without support for stroking and alpha-blending, things tend to be easy. Unfortunately the reality is that both are hard requirements for any decent rendering framework. So for all of you who've been wondering why the new rendering frameworks are slower than the code you were using so far to draw rectangles and lines, now you know. In Qt we try to make people happy by handling all those special cases, by falling back to Xlib whenever possible and, of course, by improving the frameworks, but it's just a lot of work. I'll try to go over some of the things I'm doing with graphics in Qt, KDE and X.
2 comments:
Looking at my screen I would say the most common things we render are quadratic and cubic curves. Every glyph on the screen is composed of multiple curves. Check out “Beyond the pixel: towards infinite resolution textures”
Think in terms of infinite resolution vectors that are rasterized in the pixel shaders. You can always write a shader emulator for hardware that is missing them. Shaders can really change the way you think about graphics.
There is an architectural flaw in the current Cairo/X model with compositing. Antialiasing is done in the app, but the app does not have information about the final window placement. If the compositor does any transformation to the window other than a straight copy, the antialiasing gets messed up. Transformed antialiasing looks really bad.
That is why Avalon is using the two-tree model. Everything in the trees is vectors or untransformed bitmaps. Antialiasing is applied only when the final transformation is known. The Avalon trees use the MVC model: you can put a video stream into a tree and then play it in three different places using viewers, and each of the views is antialiased individually. There are other ways to achieve the same effect without using two trees.
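Roughly, in Qt-flavored C++ the deferred-antialiasing idea looks like this (a sketch of the concept, not Avalon's actual API; all names are made up):

#include <QPainter>
#include <QPainterPath>
#include <QTransform>

// The model keeps untransformed vectors; each view applies its own
// transform *before* rasterizing, so antialiasing happens at the
// final resolution, per view.
struct VectorModel {
    QPainterPath path; // resolution-independent content
};

void renderView(QPainter &p, const VectorModel &model,
                const QTransform &viewTransform)
{
    p.setRenderHint(QPainter::Antialiasing, true);
    // Transform the geometry first, rasterize (and antialias) last.
    p.fillPath(viewTransform.map(model.path), Qt::black);
}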
Many desktops using 3D compositing (MS, Sun LG) turn the desktop into a 3D box that you look into. This means that four windows are turned into parallelograms on the sides of the screen. In our model the antialiasing on those windows gets really messed up.
Actually, virtually all the curves that you see are decomposed into polygons. Only very, very primitive ones are left as they are. Curves are point-sampled and decomposed into polygons. Paths, which are what we're basically dealing with when talking about vector graphics, are always polygons. All rendering frameworks operate this way because, for example, adding a curve, a rectangle and a triangle to a path doesn't equal rendering a curve, then a rectangle, then a triangle; it equals rendering one custom primitive, which of course is sampled into a polygon.
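A sketch of that point-sampling step for a single quadratic Bézier (fixed-step sampling for brevity; real code subdivides adaptively against an error tolerance):

#include <QPolygonF>
#include <QPointF>

QPolygonF flattenQuadratic(const QPointF &p0, const QPointF &p1,
                           const QPointF &p2, int steps = 16)
{
    QPolygonF poly;
    for (int i = 0; i <= steps; ++i) {
        const qreal t = qreal(i) / steps;
        const qreal u = 1.0 - t;
        // B(t) = (1-t)^2*p0 + 2t(1-t)*p1 + t^2*p2
        poly << u * u * p0 + 2.0 * u * t * p1 + t * t * p2;
    }
    return poly;
}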
Now, knowing that, there are certain optimizations you can do if you know for certain what kinds of elements are in a path. But those techniques also require a lot of special-casing. Note that the papers on VTMs and hardware-independent curve rendering never deal explicitly with stroking the rendered elements. Fill is easy :) I've done some work on shader-based stroking algorithms but they are still lacking.
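For reference, the fill-side test those papers build on is tiny, which is exactly why fill is the easy half. A sketch of the Loop/Blinn-style inside test for a quadratic segment, written as plain C++ standing in for the pixel shader:

// Each curve triangle carries texture coordinates (u, v) set up so
// that a pixel lies inside the quadratic segment exactly when
// u^2 - v < 0; the shader just evaluates this and keeps or discards
// the fragment. No comparably simple predicate exists for a stroke.
inline bool insideQuadratic(float u, float v)
{
    return u * u - v < 0.0f;
}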
As to shaders, yes, I agree. I'll start writing a lot more about shaders soon, since for a few months my plan has been to lay a complete rendering framework on top of the GPU (note here that all the current papers on curve/polygon rendering on the GPU require an intermediate step on the client that involves some kind of decomposition of the rendered primitive, so they're not fully GPU-bound). But again, once you start working extensively on those things, you notice a lot more problems than initially predicted.
And as to proper antialiasing, it's not that hard to fix: the composition manager can synthesize paint events on dirty (transformed) windows, and all we need to do is get the toolkits to render onto windows with predefined viewports. And yes, I have been slacking, doing some research lately, and haven't attended to a lot of those framework problems, but I'm hoping to solve many of them over the summer.