
Re: [wtp-dev] JSDT JS parser, internal representation, type inference and all that jazz

inlined

On 7 Apr 2016, at 3:00, Eugene Melekhov wrote:

Hi all,

I've been working on adapting the JSDT core to ES6, the new esprima parser, etc. Right now I'm working on restoring
the so-called "bindings" used for code assistance and the like.


Thank you for working on it; it is important for our Neon delivery.

I will finish the job that I've started and will implement an initial bindings resolver for a single source, but I must admit
that, in my opinion, what we're doing at the moment is wrong.


I do not see it as a long-term solution either; I think we have reached the limits of the nashorn-based solution. We either need a way to keep all this in the JS world and integrate at the UI level, which is what Angelo does with tern.java/ts.java,
or we need to go with a full-on-Java solution.

Since you have brought it up: I have also been looking at the Closure compiler, and I agree it is a strong candidate for a full-on-Java solution. So far it looks OK, but I have not done any testing with large JS projects, which was the real
pain with the old JSDT compiler.

Another option that I am investigating is Node4J [1]. It is the sister implementation of J2V8; it provides JNI-based bindings into node.js from Java. It is too early to tell if it is a good option; TBH I am not even sure where
the project is at the moment.

[1] http://www.infoq.com/presentations/node4j-nodejs-java

We should also consider our due diligence as an Eclipse-hosted project. During the Neon cycle we lost a lot of time between organizing our teams and waiting for the Eclipse IP process, and we effectively got started around Neon M5. In order to avoid that, my goal is to start the IP process for the Closure compiler so that it is available as an option if we choose to go that way. Same for Node4J, as soon as I can figure out what to submit. :)

* We're trying to reuse the old DOM model inherited from JDT, which is not fully compliant with JS reality even after
 the addition of ES6-specific constructs.

* This model is not very convenient for internal and especially external use.


Correct, we have already retired the internal AST model, and I am hoping to completely remove it after Neon. Unfortunately, abandoning the DOM AST is not as easy as the internal AST, even with a "can break APIs" policy, because jsdt.ui uses the DOM AST model. Unless we have enough contributors to change all that functionality, we need to keep the DOM AST working no matter
how the JavaScript is parsed.

We can probably start pruning and shaping the DOM AST into a more efficient JavaScript model, which is something I have not
started doing yet.


* We have to convert this model to/from JSON and other AST formats.

* Integration with JS modules like esprima or Tern or anything alike is resource-consuming, complex, opaque, error-prone,
 etc.


Actually, several tools out there have already shown this approach can work. I am concerned about the efficiency of the
mechanics of the integration, but the general principle can work.


So, what is the right way, in my opinion? The answer is to completely abandon our internal representation (DOM AST) model, the esprima parser, and any other JS-based tools like Tern, and switch to using the Google Closure compiler infrastructure instead.

This approach has the following advantages:

* Closure compiler is fast

* It's written in Java, so there are no problems with integration.

* It's a mature project with a vast and VERY active community.

* It has its own excellent internal representation (IR), and it's possible to use just the AST (Rhino trees) if necessary: https://github.com/google/closure-compiler/tree/master/src/com/google/javascript/rhino

* The IR has great traversal infrastructure:
https://github.com/google/closure-compiler/blob/master/src/com/google/javascript/jscomp/NodeTraversal.java

* The IR and a pluggable compiler allow us to perform all the tasks that we need. The IR can be used for the outline view, symbol tables for code assist, search, navigation, auto-completion, refactoring, and so on.

* It has call graphs, type-flow graphs, and type inference out of the box:
https://github.com/google/closure-compiler/blob/master/src/com/google/javascript/jscomp/TypeInference.java

* It has code validation, lint, etc.:
https://github.com/google/closure-compiler/tree/master/src/com/google/javascript/jscomp/lint

* It has a code printer:
https://github.com/google/closure-compiler/blob/master/test/com/google/javascript/jscomp/CodePrinterTest.java

* It's possible to use its IR to perform any other necessary tasks, like the more elaborate type analysis that TAJS does.
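To make this concrete, here is a minimal sketch of how the pieces above could back, say, an outline view: parse a source with the Closure compiler and walk the Rhino tree with NodeTraversal to collect named functions. This is a sketch only; it assumes the closure-compiler jar is on the classpath, and the class and method names are taken from the linked sources, so details may differ between versions.

```java
import java.util.Collections;

import com.google.javascript.jscomp.Compiler;
import com.google.javascript.jscomp.CompilerOptions;
import com.google.javascript.jscomp.CompilerOptions.LanguageMode;
import com.google.javascript.jscomp.NodeTraversal;
import com.google.javascript.jscomp.SourceFile;
import com.google.javascript.rhino.Node;

public class OutlineSketch {
    public static void main(String[] args) {
        Compiler compiler = new Compiler();
        CompilerOptions options = new CompilerOptions();
        options.setLanguageIn(LanguageMode.ECMASCRIPT6); // accept ES6 input

        // Parse only: no externs, no optimization passes configured.
        compiler.compile(
            Collections.<SourceFile>emptyList(),
            Collections.singletonList(
                SourceFile.fromCode("sample.js",
                    "function foo() {}\nvar bar = function() {};")),
            options);

        // Walk the Rhino AST and report named function declarations,
        // the kind of information an outline view or symbol table needs.
        NodeTraversal.traverse(compiler, compiler.getRoot(),
            new NodeTraversal.AbstractPostOrderCallback() {
                @Override
                public void visit(NodeTraversal t, Node n, Node parent) {
                    if (n.isFunction()
                            && n.getFirstChild().isName()
                            && !n.getFirstChild().getString().isEmpty()) {
                        System.out.println("function: "
                            + n.getFirstChild().getString());
                    }
                }
            });
    }
}
```

The same traversal callback mechanism would serve search, navigation, and refactoring; only the visit logic changes.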



In conclusion, I believe JSDT should switch to the Google Closure compiler back-end as soon as possible, before it's too late. It would allow us to forget about almost all low-level problems and concentrate on implementing functionality that is
visible and really necessary for JSDT users:


I agree, I think this is the main principle. Unfortunately the progress has been slower than I was expecting, but we should get better now that the committers/contributors are settling down and getting more efficient.

* Fast modern editor

* Elaborate code assistance

* Smart navigation

* Smart refactoring

* Integration of v8, node, npm, gulp, babel etc

* Debugging

* JSX support

* and so on, you name it


Thank you.


--
Eugene Melekhov

_______________________________________________
wtp-dev mailing list
wtp-dev@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/wtp-dev

