Comments on "Zack Rusin: 2D musings"

Anonymous (2011-05-19):
Quite honestly, I'm surprised this isn't a lot more common. I guess old APIs are hard to break XD. Just imagine the optimizations we could see over the next decade: if we really tackle this, Qt can become an extraordinary platform for even the most dainty mobile hardware. I'm excited to see how things will change.

Sometimes I feel like open source toolkits just kick way too much ass for how little they're noticed. Perhaps when most of the modern world has been using overpowered computers for the past five years, it's hard to remember the small, important things. Doing well because you can, not because you have to.

I really love your blog, by the way. You've helped me, someone who loves graphics but is horrible with the terminology and mechanics of it all, to understand more about how we can improve one of the most essential and dynamic parts of our software.

Anonymous (2011-03-12):
What do you think about Wayland? Does it solve this? Maybe it should be fixed before it's (again) too late.

Asmageddon (2010-12-08):
Hey, Zack, how can I be as awesome as you?
The most I can do is write bash/python scripts and program a little in C#/C++. Well, I'm a Linux user, which makes me slightly awesome, but it's not enough. You should write a guide :p Great work, man. I wish I had enough knowledge and skill to be able to write drivers, especially ones as complicated as graphics drivers.

Warbo (2010-11-15):
Very interesting discussions going on, and I may even remember some of the things I've learned reading them!

Since this is a post on 2D rendering and toolkits in general, I was wondering what your (and readers') thoughts are on Morphic. Morphic is an object-oriented 2D graphics system which was originally built as part of Self, and has since been ported to Smalltalk (e.g. Squeak), SVG+JavaScript (the Lively Kernel) and even Qt (the experimental "Lively for Qt" project).

As far as I understand it, Morphic is essentially a very elaborate way to organise the primitive shapes you want to draw, and is thus very much the old school mentioned in your post; however, applications live at such a high level, and communicate only via late-bound message sending, that they are very much the "object model" way of doing things (especially since Smalltalk first formally defined objects ;).

Also of relevance is Juan Vuletich's "Morphic 3" project, which is trying to disconnect Morphic's 2D graphics from the pixels, screen, resolution and even from the coordinate system, and then render it at whichever zoom level is desired using sampling theory. Sounds rather ninja-like if you ask me...
:)

Anonymous (2010-11-15):
Dude, I need someone that doesn't give me support but tells me "read that"; first time I see a graphics ninja. Can you help me?

I'm at iampowerslave, which is a hotmail e-mail, if you understand me.

Zack (2010-11-11):
@Anonymous: No, you don't. QGraphicsView uses the old model.

Anonymous (2010-11-09):
We already accelerate in WebKit the same things we accelerate for QML, i.e. the scene graph for animations and transforms. See, for example, http://labs.qt.nokia.com/2010/05/17/qtwebkit-now-accelerates-css-animations-3d-transforms/.

So both WebKit and QML make pretty good use of the GPU, and the difference is more about productivity than about hardware acceleration.

Cedric BAIL (2010-11-04):
In fact the current mainline Evas OpenGL backend doesn't do that anymore, but the previous one built display lists to avoid rebuilding everything from one frame to the next. I added that a few years ago. It was an improvement over not using them, but when the new OpenGL backend written by raster came in, it was much faster than the previous one even without this kind of improvement.
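Cedric's display lists are a classic retained-mode trick: record the draw commands once, then replay them each frame instead of regenerating them. A minimal sketch of the idea (all names are illustrative, not Evas code):

```python
# Minimal display-list sketch: record drawing commands once, replay cheaply.
# Illustrative names only; this is not Evas code.

class DisplayList:
    def __init__(self):
        self._commands = []          # recorded (operation, args) pairs

    def record(self, op, *args):
        self._commands.append((op, args))

    def replay(self, backend):
        # Re-issue every recorded command against any backend
        # (software rasterizer, OpenGL, ...): the list itself is
        # backend-agnostic, which is part of its appeal.
        for op, args in self._commands:
            getattr(backend, op)(*args)

class CountingBackend:
    """Stand-in renderer that just records the calls it receives."""
    def __init__(self):
        self.calls = []
    def draw_rect(self, x, y, w, h):
        self.calls.append(("draw_rect", x, y, w, h))
    def draw_text(self, x, y, s):
        self.calls.append(("draw_text", x, y, s))

# Build the list once (the expensive part)...
dl = DisplayList()
dl.record("draw_rect", 0, 0, 100, 30)
dl.record("draw_text", 4, 20, "OK")

# ...then replay it for as many frames as needed without rebuilding it.
backend = CountingBackend()
for _frame in range(3):
    dl.replay(backend)

print(len(backend.calls))  # 2 commands x 3 frames = 6
```

The win Cedric describes is exactly this: the per-frame cost drops to a replay loop, and nothing is re-derived from application state.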
I don't know what raster thinks about adding back this kind of trick, but once you have a stateful canvas it's really easy to add the needed logic.

So the biggest issue is getting people to use the object model instead of direct rendering. Hopefully adoption will get faster now that all modern frameworks are going in that direction.

Zack (2010-11-04):
@Daniel Albuschat: we're already doing that. On two levels, in fact. Composition introduces one backing pixmap which is simply blitted on window moves and such, and another is used by the toolkits (in Qt it's in QWindowSurface) to blit locally. As to the other comment being valid, no, it's not; we've been doing it for years.

@raster: when I'm talking about minimizing state changes, I'm not talking about per-frame; as I mentioned, that can be done in the old model (and we've been doing that too). I'm talking about per-lifetime of the application. For example, once you initialize a button, its four coordinates should never be uploaded again; there should be a permanent buffer object which is simply bound whenever it's rendered. Furthermore, things like moves should be reduced to just sending a translation in x and y floats along with a newly bound vertex shader (or a full matrix with a generic mapping vertex shader, which would still come to a lot less data than translating on the client side and resending coordinates for every primitive). No one is doing that at the moment, and that's really what I'm talking about.

I completely agree that getting people off the old model is difficult.
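Zack's per-lifetime point can be made concrete with a toy byte count: upload the quad's geometry once into a permanent buffer object, then send only an (x, y) translation per move instead of re-sending all four vertices. The numbers below (4 vertices of 2 floats, 4 bytes per float) are illustrative assumptions, not measurements from any toolkit:

```python
# Toy comparison of client->GPU data traffic per widget move.
# Assumed sizes: a quad of 4 vertices, 2 floats each, 4 bytes per float.
VERTEX_BYTES = 4 * 2 * 4        # full quad geometry: 32 bytes
TRANSLATION_BYTES = 2 * 4       # one (x, y) float pair: 8 bytes

def immediate_mode_traffic(moves):
    # Old model: the client re-transforms and re-sends every vertex
    # on every move.
    return moves * VERTEX_BYTES

def retained_mode_traffic(moves):
    # New model: geometry is uploaded once into a permanent buffer
    # object; afterwards each move sends only a translation uniform
    # consumed by the vertex shader.
    return VERTEX_BYTES + moves * TRANSLATION_BYTES

print(immediate_mode_traffic(100))   # 3200 bytes
print(retained_mode_traffic(100))    # 32 + 800 = 832 bytes
```

For real widgets (dozens of primitives, not one quad) the gap widens further, which is the substance of the "per-lifetime" argument.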
With HTML5 apps becoming more and more common, at least we're getting away from the old model and towards a declarative "think in terms of objects" approach, which is good.

(By the way, your response to Daniel isn't correct: Qt and, IIRC, GTK+ are doing that. Effectively we do triple buffering when a composition manager is running; Kristian and I talked about it a long time ago and we didn't see a way around it. Topic for another blog, maybe.)

@Will: interesting, thanks.

raster (2010-11-04):
@Daniel Albuschat:
Yes, the problem is how you expose painting TO the app, or even to the widgets themselves. But requiring EVERYTHING (every button, every list item, ...) to be a buffer will mean you'll have no memory left very quickly. You can't do that. You need to re-paint. You just have to move the painting out of the view of apps and put it well below the toolkit/widget set: deal in objects (a rectangle, an image, a text string, etc.) and just manipulate them, stack them and change their properties. Let state management figure out how to redraw such changes.

The fact that in "Windows 7 you have previews" is simply a by-product of forcing all windows to render not to the framebuffer but to backing pixmaps. This happens in X11 when you use a compositor too. It just happens to consume quite prodigious amounts of memory; you'll go through dozens of MB before you know it, and hundreds of MB are easily used up.

In the end, whether there is a buffer or not should be transparent to the app, or to a toolkit. It should live at a lower layer, which should manage that: how to redraw, what to redraw, and how to minimize it, if that is needed.

raster (2010-11-04):
@JonathanWilson: look at EFL (Evas and friends); it does just what you ask. It introduces a new model (scene graph), BUT does it with multiple render targets. The default is an optimised software engine perfectly capable of realtime display even with all the fancy bits on; you don't NEED OpenGL acceleration for it to work well. In addition, there is an OpenGL rendering engine (just select which you want at runtime) that can do all the same rendering of 2D scene graph elements, but using your GPU and its drivers.
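raster's "deal in objects and let state management figure out the redraw" can be sketched as a canvas that tracks per-object damage regions; the app only sets properties and never paints. This is a hypothetical miniature, not EFL's actual API:

```python
# Hypothetical retained-mode canvas sketch (not EFL code): apps set
# object properties; the canvas accumulates damaged regions and
# decides what to redraw.

class Canvas:
    def __init__(self):
        self._damage = []

    def damage(self, region):
        self._damage.append(region)

    def render(self):
        # Redraw only damaged regions, then reset the damage list.
        regions, self._damage = self._damage, []
        return regions

class Rect:
    """A retained object: the app manipulates it, never paints it."""
    def __init__(self, canvas, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
        self._canvas = canvas
        canvas.damage((x, y, w, h))      # newly shown area needs a draw

    def move(self, x, y):
        # Only the old and new positions need repainting; nothing else.
        self._canvas.damage((self.x, self.y, self.w, self.h))
        self.x, self.y = x, y
        self._canvas.damage((x, y, self.w, self.h))

canvas = Canvas()
r = Rect(canvas, 0, 0, 10, 10)
canvas.render()                  # initial paint of the new rect
r.move(5, 5)
print(canvas.render())           # [(0, 0, 10, 10), (5, 5, 10, 10)]
```

Note how the app never sees a paint event or a buffer: whether the backend keeps backing pixmaps, merges overlapping damage, or redraws everything is entirely the lower layer's business, which is raster's point.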
So it provides a forward-moving path, allowing GL to be used when/if the drivers are solid and you have the hardware, and software to be used otherwise (with many other rendering targets supported too).

Will (2010-11-04):
This is a tangent, but I just wanted to describe the bit of history I was involved in.

Symbian OS had a reactive-drawing window server: when a part of the screen needed to be drawn, the client that owned that bit of screen was woken up to draw it. Very Windows-like.

It worked OK with low RAM; there was a screen buffer, usually not even double-buffered, so it worked.

The key thing was that the draw commands were serialised and sent over IPC to the window server to do the actual drawing, where it could enforce clipping.

This worked horribly when semi-transparent windows were added.

So a "redraw store" was added to the server: it would store all the draw commands for windows in a buffer and "replay" them server-side when it needed to redraw part of a window.

The initial implementation was horrid and my team put a lot of effort into speeding it up, but the concept was sound and a big step forward.

We had dreams of translating the primitives we got from clients to OpenVG or display lists; OpenVG seemed promising, then it faded away, then it seemed to become viable again, but we never jumped before UIQ evaporated.

The implementation was always complicated by the old APIs for "direct screen access" and "getpixel()", which needed to be supported even when UIQ was very careful not to use them.
raster (2010-11-04):
Actually, the Evas GL engine does batch and minimize state changes, and it does so quite aggressively. It has a number of parallel geometry pipes it keeps going, and as long as you won't get incorrect ordering when rendering, it will batch newer draws with older ones that match the same state. By default it maintains 32 pipes (unless you are on specific GPUs where it actually doesn't help; on those it keeps only 1 around). You can have up to 128 (a recompile can increase this limit, as it's a #define, but at runtime you can set EVAS_GL_PIPES_MAX to the maximum number of parallel pipes to maintain at a time). If you disable the pipes, or rather bring it down to a single pipe, then one scene (drawing icons with labels) results in 92 flushes (92 glDrawArrays calls) per frame. With it on, even at the default, this goes down to 4 per frame instead of 92. It certainly does work.

But YMMV depending on driver/GPU/platform. I've tested across quite a few: fglrx on a Radeon HD 4650 saw a massive speedup, something like a 200-300% framerate increase from memory; Cedric tells me on his Eee he sees a 30% speedup; on an SGX540 I've seen a good 30% speedup WHEN it actually finds an optimal path (and my tests show it's pretty good at doing so); an Nvidia Tegra 2 shows no speedup at all; and on recent Nvidia desktop GPUs (a GT220, for example) it's no win, while on older ones (8600 GTS) it's a big win, a 75% speedup.

So yes, Evas does do this state-change minimization quite well :)
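The pipe batching raster describes, accumulating draws that share GPU state so they flush together in one draw call, can be sketched as grouping by a state key. This is an illustrative simplification (real Evas also has to respect draw ordering), not Evas code:

```python
# Illustrative sketch of state-based draw batching (not Evas code):
# draws sharing the same GPU state are merged into one pipe and
# flushed with a single draw call, instead of one flush per draw.

def flush_count_unbatched(draws):
    # Single-pipe behaviour: one flush (think one glDrawArrays) per draw.
    return len(draws)

def flush_count_batched(draws):
    # Group draws by their state key; here we simplify and assume any
    # draws with equal state may be merged regardless of ordering.
    return len({state for state, _geometry in draws})

# A toy scene: 92 draws that only ever use 4 distinct states
# (e.g. solid fill, image blit, text glyphs, lines).
scene = [("state%d" % (i % 4), "geom%d" % i) for i in range(92)]

print(flush_count_unbatched(scene))  # 92
print(flush_count_batched(scene))    # 4
```

The 92-to-4 reduction mirrors the per-frame flush counts quoted in the comment; the hard engineering is deciding which draws can legally be reordered into the same pipe.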
As for who came first: I'm getting at "use OpenGL for regular 2D rendering, but to do so effectively you need to change the rendering model from immediate-mode drawBox, drawLine, etc. to more of a scene graph, as WELL as abstract the rendering to the point where you can just slide in OpenGL, etc." And that's something Qt has only started doing in very recent times. :)

The hard bit is getting people to use a new model and break away from the immediate-mode mindset and codebases.

Daniel Albuschat (2010-11-04):
Sorry, I did not read all the comments, so maybe this has been covered. The CORE problem is not any hardware architecture or anything like that at all. The core problem is that current APIs (like Windows GDI, Qt's QWidget::paintEvent() and such) are doing it the way they are doing it: repaint every time the window is invalidated. This is plain wrong; a (partial) repaint should be made when the state of the window changes, not when it needs to be (re-)displayed on the screen. It should always be the case that you are not painting to the screen but to a buffer, which will then be blitted to the screen whenever the window (or parts of it) needs to be repainted. And this buffer does not need re-invalidation, e.g. when the application is minimized and then restored. This is really the fault of the APIs, not of GPUs or other hardware architectures. Actually, Windows did take a few steps in this direction: Windows 7's drawing mechanics underwent a complete re-write to make the small previews in the taskbar's tooltip and the Alt-Tab dialog possible. Windows don't draw to the screen but to a buffer, which is then re-used (e.g. for scaling into the Alt-Tab dialog) by the Windows desktop.

And by the way, I totally don't get your response to Anonymous (talking about using OpenGL for 2D). His point is perfectly valid. I think you are blaming the wrong folks.

CAPTCHA: snesup. Makes me want to play Mario Kart.
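Daniel's model, repaint into a buffer only when state changes and treat expose events as cheap blits, can be sketched like this (hypothetical names, not any real windowing API):

```python
# Hypothetical backing-store sketch: the window repaints only on state
# changes; uncover/restore/preview events just blit the stored buffer.

class BufferedWindow:
    def __init__(self):
        self._buffer = None
        self.paints = 0     # expensive renders into the backing buffer
        self.blits = 0      # cheap copies of the buffer to the screen

    def set_state(self, state):
        # State changed: render into the backing buffer exactly once.
        self._buffer = f"pixels({state})"
        self.paints += 1

    def expose(self):
        # Window uncovered, restored, or previewed (Alt-Tab thumbnail):
        # no repaint needed, just hand back the buffer for blitting.
        self.blits += 1
        return self._buffer

w = BufferedWindow()
w.set_state("label=OK")
for _ in range(5):          # minimize/restore, Alt-Tab previews, ...
    w.expose()
print(w.paints, w.blits)    # 1 5
```

The trade-off raised elsewhere in the thread is visible here too: every window now permanently holds a pixel buffer, which is where the "hundreds of MB" memory cost comes from.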
Zack (2010-11-04):
@Flying Frog Consultancy Ltd.: me: "this is what we're doing on GNU/Linux, in particular in KDE, this is why it's broken, this is how we're fixing it"; you: "Pff, we did it, but different and not in KDE, and it's a secret". Good stuff!

Jon Harrop (2010-11-03):
@Zack: The code is only available under commercial license, of course, but as you're being so friendly I'm happy to tell you more about the history of our product line. The original version was 50,000 lines of C++ code that could zoom and pan around the PostScript Tiger full screen at 100fps on an nVidia RIVA 128 with a 300MHz Intel Pentium. A colleague integrated OCaml support, which became our declarative language of choice and, ultimately, led to the entire code base being rewritten in OCaml. That system was used to build various custom demos for companies. For example, High Energy Magic's SpotCode product (http://www.wired.com/science/discoveries/multimedia/2004/06/63744?slide=4&slideView=4) had an interactive shop-front demo that users could control using camera phones. That demo was an Acme travel agent where the user could fly around a map of the world to book flights. We also advised Wolfram Research before they released their own solution (http://bp0.blogger.com/_5Z1NzpeMhcs/RmYsvLbcJBI/AAAAAAAAAL0/XfrKvTW0GNY/s1600-h/mathematica_v6_screenshot.png) in Mathematica 6 (2007).
Today, we still use the same code (albeit translated from OCaml to F#) to power our current products, such as F# for Visualization (http://www.ffconsultancy.com/products/fsharp_for_visualization/), and we have used it to build custom presentations for customers such as Microsoft.

Zack (2010-11-03):
@Flying Frog Consultancy Ltd.: Good stuff. Of course I'd point out that since this is computers and not magic we're talking about, instead of posting bitter comments you could simply show people the code, the benchmarks and the amazing interfaces you've created to prove how much better you were in 1997, but hey, this is more fun, ain't it?

Jon Harrop (2010-11-03):
@Zack: Yes, we were using a different declarative language (not QML) when we did this in 1997, of course.

Zack (2010-11-03):
@vtorri: Comment above explains your comment.
@ra...@vtorri: Comment above explains your comment.<br /><br />@raster: Yes, and qgraphicsview is technically a scene-graph as well, as was qcanvas as well, as was many, many other projects. Declarative languages were there before as well, it's not like HTML with javascript was never used. So if we're using "came before" as "based on my code" then surely Evas is based on QCanvas and Edje is based on Qt UI (you know the stuff Qt Designer was generating for close to 10 years now) :) Also not to critique Evas but afaik you don't minimize state changes, you just batch them and flush them all together in shader_array_flush.<br /><br />@Flying Frog Consultancy Ltd: this comment would be irrelevant even in 1997. It's not scene-graph by itself, and it's not QML by itself, it's the combination of both that makes it what it is. I thought I made that clear. There's lots of scene-graphs and lots of declarative languages - they both matter. It's not that people don't like you, it's that you didn't give them a compelling reason to switch to your product, accelerated vector graphics libs are everywhere.Zackhttps://www.blogger.com/profile/16222054590923441165noreply@blogger.comtag:blogger.com,1999:blog-27901662.post-56302369508399843502010-11-03T14:31:18.516+00:002010-11-03T14:31:18.516+00:00Is it possible that this is to do with backward co...Is it possible that this is to do with backward compatibility so that low end graphics cards/chips still work; much in the same way VGA is still the default when all else fails and/or until the OS has loaded more advanced drivers, X and so on.<br /><br />To be honest I don't think I've felt any system to be "slugish" except when loading from disk or a high (CPU) demand job is running in the background such as a blender render or a process has gone rogue due to a bug.<br /><br />Then again my first computer was based on a Motorola 6502 processor and 32k of ram and could run defender as well as any purpose built arcade 
Anonymous (2010-11-03):
The fact that people seem to chuckle and move on, or have some form of a "wtf" reaction, whenever I mention NeWS in a positive manner bothers me. It was far superior to X, but lost because it was non-free and incredibly expensive. Look to NeWS for ideas, yo.

"The NeWS Book" is available cheap on Amazon, and is full of good ideas and insight on 2D rendering, particularly of vector graphics. The specifics of the rendering, implementation language, etc. can be kind of glossed over, as can the sections on programming in PostScript, though the concepts behind the NeWS API in the programming sections are worth reading.

As a side note, I frankly don't see why we need SVG et al. when extending PostScript to support layers would have been enough. Perfect. Oh, and there are actual 100% feature-complete implementations. I'm not saying SVG is bad! It's very good; I just don't see the need for yet another format that people must spend time and effort implementing, and that has to be installed on every system that wants to view or otherwise process said format(s). Duplication of effort frustrates me when it's unnecessary.

That said, I hope other 3D APIs come to X with Gallium3D drivers. I hear GL's not the easiest to implement properly, and not the easiest 3D API to use.
The fact that it assumes a C-family language, or at least a comfort with that programming style, doesn't appeal to me, is all I know personally. I don't have a lot of expectation of that, but I have hope.

Jon Harrop (2010-11-03):
This post would have been awesome in 1997 but, as others have already explained, these problems have long since been solved. Even Microsoft's Windows Presentation Foundation has provided this kind of functionality as standard for over 4 years. Our own libraries (http://www.ffconsultancy.com/products/smoke_vector_graphics/) have been providing much more advanced functionality for over 13 years.

raster (2010-11-03):
Actually, Evas is a scene graph. Not saying that KDE should change from Qt to EFL, but QML and the Qt scene graph are much newer than Evas or Edje, for example. Evas first started doing its thing back in 2001, with OpenGL and software engines to boot: a scene graph with multiple abstracted rendering pipelines ever since. Edje is the "UI in a data file, loaded/interpreted at runtime" model like QML, built on top of... a scene graph... which is Evas :) Unless I totally misunderstand Qt...
:)

Unknown (2010-11-03):
"For some weird reason ("neglect" being one of them) 2D rendering model hasn't evolved at all in the last few years."

You seem to forget that Edje (from the Enlightenment project) has already had these ideas implemented for years, and was used in QEdje, which was actually the start of QML.

Zack (2010-11-03):
@Sami Kyöstilä: yea, batching is certainly an issue, but as I mentioned, you can do a lot of batching in the old model (that was subpoint #2 in the list I posted). Of course, knowing ahead of time what exactly will be rendered makes batching a lot simpler. I agree with your points about the creation pipeline, but it looks like the Nokia guys are looking at multiple projects there (Qt Creator and exporters for Photoshop and Gimp), so hopefully that will be fixed soonish.

@Anonymous: Yea, things like VNC would have to have something like llvmpipe running. Either that, or we could always fall back to the old model in those cases.

@raster: Not really. You'd probably be right saying that JavaFX had a lot of impact on it, but not EFL. In fact, QML as it stands uses the old model (QGraphicsView); it's the new scene graph (link in the blog) that makes it interesting from the graphics perspective. For the scene graph, game engines and proper usage of GL are bigger muses than anything else. Of course, if you wanted, you could implement QML on top of EFL, or more specifically Evas, but for us (us as in KDE/Qt) writing a scene graph is the better option.

@Anonymous: Yea, definitely.
For now, if you have working GL (and if not, llvmpipe is always an option), I'd suggest cloning the Qt scene graph (link in the blog) and giving it a shot. There are some examples in the examples directory. See how far it is from your ideal, and if you have any suggestions, the Qt bug tracker (even though it's not as user-friendly as it could be) is waiting :)