
[gef3d-dev] Changes in GEF3D API

Hi everyone,

I just merged an experimental branch we were working on and committed it to the trunk. In this branch, Jens and I reimplemented the picking mechanism in GEF3D and cleaned up everything that had to do with coordinate systems and coordinate conversion. Since I am running a little low on time due to my thesis, I will just sum up the most significant changes:

1. Picking

Up until now, we used a technique called color picking (similar to OpenGL selection mode) to determine which figure was at a given set of mouse coordinates. This technique worked well, but it has a major flaw: it can only find the frontmost figure at any given location. If you want to know what's behind that figure, you have to re-render the color and depth buffers, which comes with a big performance penalty. This becomes a problem because GEF allows you to limit the search for a figure at a given location using an instance of TreeSearch. For example, if you want to know whether there's a handle at a certain location, you'd ignore everything but the handle layer, and with color picking that means re-rendering whenever the frontmost figure is not a handle.
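(For readers less familiar with the GEF side: a TreeSearch is essentially just a figure filter. A minimal sketch of one that accepts nothing but handles might look like the following; the class name HandleSearch is made up for illustration.)

import org.eclipse.draw2d.IFigure;
import org.eclipse.draw2d.TreeSearch;
import org.eclipse.gef.Handle;

// Illustrative only: restrict a figure search to handles.
public class HandleSearch implements TreeSearch {

    // Accept a figure only if it is a handle.
    public boolean accept(IFigure figure) {
        return figure instanceof Handle;
    }

    // Never prune a subtree, we still want to descend into children.
    public boolean prune(IFigure figure) {
        return false;
    }
}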

To support this in GEF3D, we had to rewrite the picking mechanism. We did away with the color picking and now use the geometric data of the figures and shapes to find intersections with a picking ray. This works very well, has the desired flexibility when it comes to limiting a search and is very fast even in larger diagrams. There is only one downside, and that is that every figure must now implement the interface org.eclipse.draw3d.picking.Pickable and its method

public float getDistance(Query i_query);

This method calculates the distance from the ray's origin to the point where the ray (stored in the given query) intersects the figure itself. This puts a burden on framework users: if they want to use their own figures, they have to implement this method, which requires a good understanding of 3D math. To alleviate this, we will expand our library of basic shapes, which are all pickable, and we will make it easier for figures to simply be composed of a number of basic shapes by introducing a ShapeFigure which delegates all rendering to one or more shapes. This figure will also delegate the intersection detection to its shapes, which will all implement it efficiently.

But even if someone cannot use Shapes and needs to write their own rendering code, they will find that Math3D contains a number of very useful functions for intersection detection, most notably rayIntersectsPolygon, which does most of the heavy lifting.
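To give a rough idea of the math involved, here is a small standalone sketch of a distance calculation for a figure that is simply a rectangle in the z = 0 plane. It deliberately avoids the real Query and Math3D types (I'm not repeating their exact signatures here) and uses a minimal made-up vector class instead:

// Standalone sketch of the kind of math a getDistance(Query) implementation
// performs: intersect a picking ray with a rectangle lying in the z = 0 plane
// and return the distance along the ray, or NaN if there is no hit.
public class RayRectangleDistance {

    // Minimal stand-in, not a GEF3D type.
    static class Vector3 {
        final float x, y, z;
        Vector3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    }

    // origin: ray origin, direction: normalized ray direction,
    // width/height: rectangle extents from (0, 0) in the z = 0 plane.
    public static float getDistance(Vector3 origin, Vector3 direction,
            float width, float height) {

        // Ray parallel to the rectangle's plane: no intersection.
        if (Math.abs(direction.z) < 1e-6f)
            return Float.NaN;

        // Solve origin.z + t * direction.z == 0 for t.
        float t = -origin.z / direction.z;
        if (t < 0) // intersection lies behind the ray origin
            return Float.NaN;

        // Check that the intersection point lies within the rectangle.
        float ix = origin.x + t * direction.x;
        float iy = origin.y + t * direction.y;
        if (ix < 0 || ix > width || iy < 0 || iy > height)
            return Float.NaN;

        return t; // distance along the (normalized) ray
    }

    public static void main(String[] args) {
        Vector3 origin = new Vector3(10, 10, 100);
        Vector3 direction = new Vector3(0, 0, -1); // looking down the z axis
        System.out.println(getDistance(origin, direction, 40, 30)); // 100.0
    }
}

A real getDistance(Query) implementation would do essentially this in the figure's own coordinate system, or simply delegate the work to its shapes or to helpers like rayIntersectsPolygon.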

In addition to this, we plan on writing a figure that can load polygon models and display them in GEF3D, which would make it possible for framework users to create their figures in a 3D editor like Maya and then use them directly in GEF3D without any coding at all!

We hope that these tools, together with the improved performance and flexibility of the new picking method, will make up for the added burden of possibly having to write 3D math. But as we said, we will do what we can to keep the math away from the users ;-)

2. Coordinate systems

With the new picking, we have introduced three clearly defined coordinate systems that GEF3D uses, along with simple methods for converting between them. All of this will be explained in a Wiki article I'm writing here:

http://wiki.eclipse.org/GEF3D_Coordinate_Systems

The article is a work in progress and some of the information in it is already out of date, but I will try to update it tomorrow. The most important change we made is that the event dispatcher no longer dispatches mouse coordinates. The mouse coordinates received from the canvas are immediately converted into surface coordinates that are relative to the so-called "current surface". These are 2D coordinates relative to the "surface" of the figure that was last hit by the picking ray (except for a couple of figures whose surfaces we ignore). So whenever any GEF 2D code receives coordinates, these are surface coordinates. We did this to sandbox the 2D code in GEF3D and to simulate a 2D environment for every piece of code that shouldn't know about 3D. As a result of this technique, we can reuse the selection and creation tools from GEF without any change at all! We can even use the 2D creation tool to create 3D figures.
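To illustrate just the geometric idea (not the actual GEF3D API, which hides all of this from the 2D code): once the picking ray hits a surface, the hit point is expressed along that surface's own x and y axes, and those two numbers are what the 2D code sees. A standalone sketch with made-up types:

// Standalone sketch of the idea behind surface coordinates: project the
// point where the picking ray hits a figure's surface onto that surface's
// own x and y axes. The types here are minimal stand-ins, not GEF3D classes.
public class SurfaceCoordinates {

    static class Vector3 {
        final float x, y, z;
        Vector3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }

        Vector3 minus(Vector3 o) { return new Vector3(x - o.x, y - o.y, z - o.z); }

        float dot(Vector3 o) { return x * o.x + y * o.y + z * o.z; }
    }

    // hit: world coordinates of the ray/surface intersection,
    // origin: world coordinates of the surface's origin,
    // xAxis/yAxis: normalized surface axes in world coordinates.
    public static float[] toSurface(Vector3 hit, Vector3 origin,
            Vector3 xAxis, Vector3 yAxis) {

        Vector3 local = hit.minus(origin);
        return new float[] { local.dot(xAxis), local.dot(yAxis) };
    }

    public static void main(String[] args) {
        // A surface at (0, 0, -50) with the standard x and y axes.
        Vector3 origin = new Vector3(0, 0, -50);
        Vector3 xAxis = new Vector3(1, 0, 0);
        Vector3 yAxis = new Vector3(0, 1, 0);
        Vector3 hit = new Vector3(12, 7, -50);

        float[] surface = toSurface(hit, origin, xAxis, yAxis);
        System.out.println(surface[0] + ", " + surface[1]); // prints 12.0, 7.0
    }
}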

Anyway, all of this will be explained more fully in that article, and chances are that you won't need to know much about it.

To conclude, we made all these changes in order to lay the groundwork for the next item on our todo list, which is proper rotation of figures. We want 3D figures to be rotatable, which doesn't work right now. In order to do this nicely and flexibly, we had to introduce coherent metaphors for coordinate systems and to fix the picking mechanism, which is why we put so much work into this. I will be working on the rotation stuff next because I'll need it for my diploma thesis.

Best regards
Kristian

