Re: [e4-dev] Declarative UI

Angelo,

Angelo zerr wrote:

Thank you for your explanation. I agree with you that we should have a model (like EMF) to describe the UI. That way, we are not tied to SWT.

I think we must stay close enough to SWT to map to it easily, and to support SWT-specific extensions. Most of the basic SWT widgets can be modelled in a way that is easy to map to all relevant toolkits. However, other UI objects, like the SWT layouts, are difficult to abstract away from SWT without requiring a lot of custom binding/glue code. Hence, these should be definable in an SWT-specific model.
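To make this concrete, here is a rough sketch of the kind of split I have in mind (the class names are invented for the example and are not an actual e4 or EMF model): basic widgets sit in a toolkit-neutral part of the model, while something like GridLayout is modelled in an SWT-specific extension.

// Hypothetical sketch: toolkit-neutral widget model with an SWT-specific layout part.

// Toolkit-neutral core model
abstract class MWidget {
    String id;
    boolean enabled = true;
}

class MButton extends MWidget {
    String text;
}

class MContainer extends MWidget {
    java.util.List<MWidget> children = new java.util.ArrayList<MWidget>();
    MLayout layout; // may refer to an SWT-specific layout model
}

abstract class MLayout {
}

// SWT-specific extension: mirrors org.eclipse.swt.layout.GridLayout,
// so a renderer can map it 1:1 without custom glue code
class MSwtGridLayout extends MLayout {
    int numColumns = 1;
    boolean makeColumnsEqualWidth = false;
}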

To avoid that, we could have a common renderer API, and have the model bindings work against that common renderer API. The solution would be:

Model <-> Common renderer API <-> SWT
                              <-> Swing
                              <-> GWT
                              <-> Visual editor...

It's important that the common renderer API be bindable. Layout and widget properties must be bindable (all properties must be observable), so that when e.g. a layout changes, an event is fired and the change can be observed.

This is what I meant by "live".
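As a minimal illustration of what I mean by "live" (this uses plain java.beans change support and is not actual UFace or renderer API code): every property setter fires a change event, so a binding layer can observe the change.

import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Minimal sketch of an observable widget property in a common renderer API.
public class ObservableLabel {

    private final PropertyChangeSupport changeSupport = new PropertyChangeSupport(this);

    private String text;

    public String getText() {
        return text;
    }

    public void setText(String text) {
        String oldText = this.text;
        this.text = text;
        // Firing the event is what makes the property bindable/"live".
        changeSupport.firePropertyChange("text", oldText, text);
    }

    public void addPropertyChangeListener(PropertyChangeListener listener) {
        changeSupport.addPropertyChangeListener(listener);
    }

    public void removePropertyChangeListener(PropertyChangeListener listener) {
        changeSupport.removePropertyChangeListener(listener);
    }
}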

The common API is UFace. UFace provides an API where all properties are bindable with JFace Data Binding. In TK-UI, I handle the declarative UI with a DOM (XUL, XHTML, ...) and declarative binding with XAML expressions (or EL, ...), but since UFace is bindable, it should be easy to use an EMF model instead of a DOM model.

I discussed this with Tom at ESE, and I agree that UFace could technically play this role. UFace contains a lot of relevant and important code for implementing the rendering logic, i.e. controlling the life-cycle of toolkit objects and managing binding. However, consider the runtime state if UFace as a whole is used. You would have
EObject <-binding-> UBean <-binding-> SWT widget, instead of the simpler
EObject <-binding-> SWT widget.
Although UFace contains a lot of useful code, there is really no need for the UBean layer when EMF is already in use, as UBean is in effect a duplication of EMF (as used in this context). Hence, placing the whole of UFace between an EMF model and e.g. SWT is wasteful. BTW, correct me if my understanding of UFace is wrong.
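For comparison, here is a minimal sketch of the simpler chain, binding an EObject feature directly to an SWT Text with JFace/EMF Data Binding (exact class and package names may differ between versions; Person and PersonPackage stand in for hypothetical EMF-generated classes, and a default Realm is assumed to be set up):

// Sketch only: assumes org.eclipse.emf.databinding and the JFace/SWT
// databinding bundles are on the classpath.
void bindNameField(DataBindingContext ctx, Person person, Text nameText) {
    IObservableValue modelValue =
        EMFProperties.value(PersonPackage.Literals.PERSON__NAME).observe(person);
    IObservableValue targetValue =
        WidgetProperties.text(SWT.Modify).observe(nameText);
    // EObject <-binding-> SWT widget, with no intermediate UBean layer
    ctx.bindValue(targetValue, modelValue);
}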

At ESE, Tom seemed to agree that (the functionality of) UFace (as implemented now) overlaps with an EMF-based UI model, instead of (just) closing the gap between it and the toolkit. I argued that the UFace code should be merged with whatever EMF model we decide to use, rather than contributing an additional layer. I understand that UFace wants to be independent of EMF, to make it easier to support some platforms (e.g. J2ME or GWT), but if we decide to introduce a UI model, I think we need to consider carefully how to utilize UFace, to avoid extra code and data.

I have nothing against UFace, but I'm worried that the easiest and cleanest way to combine an EMF UI model with UFace, as you outline, will result in something that is not acceptable to those who want a lean platform. Perhaps UFace could be split into UCore and UBean, to support two usages: EMF + UCore when EMF is available, and UBean + UCore when it is not.
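Roughly, I imagine the split could look something like this (the interface and names are invented for illustration and are not actual UFace API):

// Hypothetical sketch of the UCore/UBean split.
// UCore would only depend on a small property-access abstraction...
interface PropertyAccess {
    Object getValue(Object modelObject, String propertyName);
    void setValue(Object modelObject, String propertyName, Object value);
    void addChangeListener(Object modelObject, String propertyName, Runnable listener);
}

// ...so the model layer becomes pluggable:
// - an EMF-based PropertyAccess (eGet/eSet + adapters) when EMF is available
// - a UBean-based PropertyAccess when it is not (e.g. on J2ME or GWT)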

Hallvard

