One of the most difficult problems in computer graphics today is producing truly robust algorithms. Working in a floating point coordinate system introduces a lot of subtle problems. Computational geometry approaches the problem from two directions:
1) Implementing algorithms on top of rational numbers.
2) Introducing rounding phases into the algorithm, usually as a post-processing stage.
For example, let's look at a pretty popular illustration, based on the one from Hobby's paper. We have three line segments with a single intersection between them. Now if we round that intersection to integer coordinates, two new intersections are introduced - ones that are completely incorrect. In practice we don't round that coarsely, but the example scales arbitrarily: uniformly divide all the coordinates by the same number and the same thing happens at any precision. A small sketch of the effect is below.
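Here is a minimal sketch of the phenomenon. This is not Hobby's exact figure - the coordinates are ones I made up so the numbers work out - but it shows the mechanism: A and B cross once, C crosses nothing, and after snapping the A-B intersection to the integer grid and splitting A and B through the snapped vertex, C suddenly crosses two of the pieces.

```cpp
#include <cmath>
#include <cstdio>

struct Pt  { double x, y; };
struct Seg { Pt a, b; };

// Cross product of (p - o) and (q - o); its sign tells which side q is on.
static double cross(Pt o, Pt p, Pt q) {
    return (p.x - o.x) * (q.y - o.y) - (p.y - o.y) * (q.x - o.x);
}

// Proper-crossing test for two segments (ignores collinear overlap for brevity).
static bool crosses(Seg s, Seg t) {
    double d1 = cross(s.a, s.b, t.a), d2 = cross(s.a, s.b, t.b);
    double d3 = cross(t.a, t.b, s.a), d4 = cross(t.a, t.b, s.b);
    return (d1 * d2 < 0) && (d3 * d4 < 0);
}

int main() {
    Seg A{{0, 0},      {10, 3}};
    Seg B{{0, 3},      {10, 0}};
    Seg C{{4.3, 1.75}, {5.7, 1.75}};  // passes just above A's and B's intersection

    // Before rounding: A and B cross once, C crosses nothing.
    std::printf("A-B: %d  A-C: %d  B-C: %d\n",
                crosses(A, B), crosses(A, C), crosses(B, C));   // prints 1 0 0

    // The A-B intersection is (5, 1.5); snap it to the integer grid -> (5, 2).
    Pt v{std::round(5.0), std::round(1.5)};

    // Split A and B at the snapped vertex and re-test against C.
    Seg A1{A.a, v}, A2{v, A.b}, B1{B.a, v}, B2{v, B.b};
    std::printf("A1-C: %d  A2-C: %d  B1-C: %d  B2-C: %d\n",
                crosses(A1, C), crosses(A2, C), crosses(B1, C), crosses(B2, C));
    // Two brand-new intersections appear (A1-C and B2-C).
    return 0;
}
```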
There are two main reasons why this behavior is so undesirable for graphics:
1) It produces obviously incorrect results.
2) Optimized algorithms often break down due to those inconsistencies.
The first one is actually less important - at this scale the degradation of our results is likely to produce errors that won't have a significant effect on what you see on the screen. The second is a lot more severe. A lot of the more popular geometric algorithms include a step of finding the segments that contain a given point. When both the segment and the point have floating point coordinates, that containment test breaks down very quickly once limited precision hits the computations. There's no way to reliably perform the test, and so all of those algorithms simply fail. As a result, an incredibly small rounding error will likely cause a complete failure of the algorithm, and because of that tiny error you won't see any results on the screen at all. That's a serious problem.
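A toy illustration of that containment test going wrong (the coordinates and function names here are mine, not from any particular library): compute an intersection point in doubles, then ask whether it actually lies on one of the segments that produced it.

```cpp
#include <algorithm>
#include <cstdio>

struct Pt  { double x, y; };
struct Seg { Pt a, b; };

// Signed area of the triangle (a, b, p); mathematically 0 iff p is on the line a-b.
static double orient(Pt a, Pt b, Pt p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// "Is p on segment s?" done the naive way: exactly collinear and inside the box.
static bool onSegmentExact(Seg s, Pt p) {
    if (orient(s.a, s.b, p) != 0.0) return false;
    return p.x >= std::min(s.a.x, s.b.x) && p.x <= std::max(s.a.x, s.b.x) &&
           p.y >= std::min(s.a.y, s.b.y) && p.y <= std::max(s.a.y, s.b.y);
}

int main() {
    Seg A{{0.1, 0.1}, {9.3, 7.7}};
    Seg B{{0.2, 8.9}, {8.8, 0.3}};

    // Intersection of the supporting lines of A and B, computed in doubles.
    double d = (A.b.x - A.a.x) * (B.b.y - B.a.y) - (A.b.y - A.a.y) * (B.b.x - B.a.x);
    double t = ((B.a.x - A.a.x) * (B.b.y - B.a.y)
              - (B.a.y - A.a.y) * (B.b.x - B.a.x)) / d;
    Pt p{A.a.x + t * (A.b.x - A.a.x), A.a.y + t * (A.b.y - A.a.y)};

    // Mathematically p lies on both segments; with double rounding it typically
    // fails the exact test for at least one of them.
    std::printf("on A: %d  on B: %d\n", onSegmentExact(A, p), onSegmentExact(B, p));
    return 0;
}
```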
Going back to the solutions to this problem. John Hobby wrote a paper entitled "Practical Segment Intersection with Finite Precision Output". The algorithm he introduced in that paper has since been named "snap rounding". Numerous papers have been written about it since; some of the better ones include "Iterated Snap Rounding" by Halperin and Packer and "Snap Rounding Line Segments Efficiently in Two and Three Dimensions" by Goodrich and Guibas. The basic principle of the algorithm is very simple: every event point is computed within some tolerance square. Each tolerance square has its center computed, and every event we encounter whose coordinates fall within an already existing tolerance square is "bent" so that its coordinates become the center of that square. This technique works surprisingly well.
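Just to make the bending step concrete, here is a sketch of the coordinate snapping on a uniform grid of tolerance squares. It only shows the snapping of event points to square centers, not the full snap rounding pipeline (which also routes every segment passing through a "hot" square via that square's center); the function name and grid choice are mine.

```cpp
#include <cmath>
#include <cstdio>

struct Pt { double x, y; };

// Snap a point to the center of its tolerance square of side `tol`.
// Every event that lands in the same square gets the exact same output,
// so events that coincide within tolerance collapse to one representable vertex.
static Pt snapToSquareCenter(Pt p, double tol) {
    return { (std::floor(p.x / tol) + 0.5) * tol,
             (std::floor(p.y / tol) + 0.5) * tol };
}

int main() {
    double tol = 1.0;  // unit grid, i.e. snapping to "pixel" centers
    Pt events[] = { {5.0, 1.5}, {5.2, 1.9}, {4.99, 1.01} };
    for (Pt e : events) {
        Pt s = snapToSquareCenter(e, tol);
        std::printf("(%.2f, %.2f) -> (%.1f, %.1f)\n", e.x, e.y, s.x, s.y);
    }
    return 0;  // first two events collapse to (5.5, 1.5), the third to (4.5, 1.5)
}
```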
The second method, often employed by professional grade geometry software, involves replacing float/double with a custom "rational" number class. Rationals produce mathematically correct results, which is ideal. The reason we can't really even consider them here is that they're about 30+ times slower than doubles.
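To show the idea, here is a toy rational type used for an exact orientation test. This is only a sketch: a fixed-width numerator/denominator like this overflows quickly, so real implementations sit on top of arbitrary-precision integers (GMP and friends), or use something ready-made like boost::rational or CGAL's exact kernels - which is also where the heavy cost comes from.

```cpp
#include <cstdint>
#include <cstdio>
#include <numeric>   // std::gcd

// Toy exact rational: value = n / d with d > 0, always kept in lowest terms.
struct Rat {
    std::int64_t n, d;
    Rat(std::int64_t n_ = 0, std::int64_t d_ = 1) : n(n_), d(d_) {
        if (d < 0) { n = -n; d = -d; }
        std::int64_t g = std::gcd(n < 0 ? -n : n, d);
        if (g) { n /= g; d /= g; }
    }
};
static Rat operator+(Rat a, Rat b) { return {a.n * b.d + b.n * a.d, a.d * b.d}; }
static Rat operator-(Rat a, Rat b) { return {a.n * b.d - b.n * a.d, a.d * b.d}; }
static Rat operator*(Rat a, Rat b) { return {a.n * b.n, a.d * b.d}; }
static int  sign(Rat a) { return (a.n > 0) - (a.n < 0); }

// Exact orientation test: sign of the cross product, no rounding anywhere.
static int orient(Rat ax, Rat ay, Rat bx, Rat by, Rat px, Rat py) {
    return sign((bx - ax) * (py - ay) - (by - ay) * (px - ax));
}

int main() {
    // (1/3, 1/3) lies exactly on the segment (0,0)-(1,1), so the result is 0 -
    // something a double-based test can only approximate.
    std::printf("%d\n", orient(0, 0, 1, 1, Rat(1, 3), Rat(1, 3)));
    return 0;
}
```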
Now, the reason I went on this rant is that I've been testing the results of my new algorithm for rasterizing polygons, which involves decomposition, and even though on paper the algorithm was always correct and the unit tests were all passing, it was falling apart in some cases. I spent a while going through my intersection detection code and my snap rounding implementation and couldn't find anything. Then I looked at some of the data files I had: they were stored with only .001 precision, which isn't anywhere near precise enough for floating point intersections and coordinates. Feel my pain =)