[xtext-dev] Xtext CA

Hi,

I'd like to discuss content assist (CA for short) a bit.

Currently, CA relies heavily on the semantic model being present and up to date, which turns out to be a major problem because updating the semantic model is a rather time-consuming task. Even though I am confident we'll be able to improve the overall performance of parsing and model updating, I am inclined to think that using the semantic model for CA might not be the right approach. Let me elaborate a bit:

First, as Sebastian already pointed out, the document will be in a semi-invalid state during editing, which makes it difficult to build a valid semantic model. Even if we fall back to a full parse after realizing we cannot perform a partial parse, the semantic model is likely to be highly inaccurate in the region we're most interested in: the current caret position.

Second, doing a full parse on large models is prohibitive for a feature that needs essentially instantaneous feedback. What's more, a full parse is currently so slow that parse requests start piling up in the reconciler job queue (at first I thought that merging the document change events in the reconciler didn't work correctly, but in fact the parser is simply too slow to process one request before the next one arrives).
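To make the piling-up point concrete, here is a minimal sketch (not actual Xtext or reconciler API; all names are illustrative) of how merging change events is supposed to behave: however many document changes arrive while the parser is busy, the reconciler should only ever parse the latest pending snapshot, so a slow parser falls behind but never builds a backlog.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of a coalescing reconciler: newer document change
// events overwrite older pending ones, so the (slow) parser only ever
// processes the most recent snapshot instead of a growing queue.
public class CoalescingReconciler {

    private final AtomicReference<String> pending = new AtomicReference<>();
    private int parseCount = 0;

    // Called on every document change; replaces any not-yet-parsed snapshot.
    public void documentChanged(String newText) {
        pending.set(newText);
    }

    // Called by the reconciler job; parses only the latest pending snapshot.
    public void reconcile() {
        String text = pending.getAndSet(null);
        if (text != null) {
            parse(text);
        }
    }

    private void parse(String text) {
        parseCount++; // stand-in for the expensive full parse
    }

    public int getParseCount() {
        return parseCount;
    }
}
```

With this scheme, three rapid edits followed by one reconcile run result in a single parse, not three.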

So, how about this idea:

When receiving a document change event, we try to determine the model region that is invalidated by the change and either remove that part from the semantic model or mark it as invalid. This might be the easy part.

Next, we'd need to provide a "best guess" for the semantic structure surrounding the caret. To achieve this, we'd scan the text prefix of the caret backwards to the last position known to be a valid location in the model. If we were dealing with only one, well-known language, I'd assume we could perform this scan quite efficiently (the JDT probably works like this). However, we need to be able to deal with arbitrary languages and thus cannot hand-code the respective logic.

As a first approximation, we could hand the text prefix to the CA algorithm and let the DSL developer create a suitable scanning routine. This is quite similar to what you do when you manually implement CA in JFace Text editors. However, as we're also able to hand in a partially valid semantic model, the DSL developer can exploit this information to enrich the proposals he creates.
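The backward scan could be sketched roughly like this. Everything here is an assumption for illustration: the class and method names are made up, and the hard-coded ';' and '}' stand in for whatever tokens a particular DSL treats as boundaries of a complete, known-valid unit. A real scanning routine supplied by the DSL developer would work on the language's tokens instead.

```java
// Hypothetical sketch of "scan back from the caret to the last known-valid
// location": walk left from the caret until we hit a character that is
// assumed to close a complete statement, and hand the remaining prefix to
// the content-assist routine.
public class CaretPrefixScanner {

    // Returns the text between the last assumed statement boundary and the
    // caret, i.e. the fragment the CA algorithm would have to interpret.
    public static String prefixForContentAssist(String document, int caretOffset) {
        int start = 0;
        for (int i = caretOffset - 1; i >= 0; i--) {
            char c = document.charAt(i);
            if (c == ';' || c == '}') { // assumed boundaries of valid units
                start = i + 1;
                break;
            }
        }
        return document.substring(start, caretOffset).trim();
    }
}
```

For a document like "a = 1; foo.ba" with the caret at the end, this would hand "foo.ba" to the CA algorithm, while everything up to the ';' stays covered by the (still valid) semantic model.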

Let me hear what you think about this.

Peter
