build I20061030-1704

When the user invokes code assist in a Java editor, the JDT/Core completion engine is called twice by the JDT/Text Java completion computers: once by JavaTypeCompletionProposalComputer, which requests CompletionProposal.TYPE_REF, and once by JavaNoTypeCompletionProposalComputer, which requests all the other kinds (CompletionProposal.METHOD_REF, CompletionProposal.FIELD_REF, ...). As a result, part of the time is spent computing the same information twice. The problem could be worse if other plug-ins define computers that also use the completion engine. We should improve the code assist behavior to avoid this duplicate computation.
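To make the duplication concrete, here is a minimal, hypothetical sketch (the class and method names are simplified stand-ins, not the real JDT/Core API): each computer triggers a full run of the engine, so the shared analysis is repeated once per computer.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the JDT/Core completion engine. Every call to
// complete(...) redoes the shared analysis before filtering by the
// requested proposal kinds, which is the duplicate effort described above.
class CompletionEngineSketch {
    int runs = 0; // counts full engine executions

    List<String> complete(List<String> requestedKinds) {
        runs++; // the expensive shared analysis would happen here each time
        List<String> proposals = new ArrayList<>();
        for (String kind : requestedKinds) {
            proposals.add("proposal:" + kind);
        }
        return proposals;
    }

    public static void main(String[] args) {
        CompletionEngineSketch engine = new CompletionEngineSketch();
        // JavaTypeCompletionProposalComputer asks only for TYPE_REF ...
        engine.complete(List.of("TYPE_REF"));
        // ... JavaNoTypeCompletionProposalComputer asks for everything else.
        engine.complete(List.of("METHOD_REF", "FIELD_REF"));
        // One content-assist request, two full engine runs.
        System.out.println("engine runs: " + engine.runs);
    }
}
```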
This problem could be fixed in JDT/Text, but that would require a lot of rework of the JDT/Text completion computers API, and all computers would have to be rewritten to benefit from the improvement. It could also be fixed in JDT/Core by caching the information computed on the first call of the completion engine and reusing it on subsequent calls. This approach has the advantage of requiring few API changes, and the performance benefits would be available to all existing completion computers that call the completion engine. However, it would require a lot of changes inside the completion engine. It currently seems more interesting to fix the problem in JDT/Core.
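The JDT/Core-side approach could look roughly like the following sketch: the engine caches the result of the shared analysis keyed by compilation unit and offset, so a second engine call for the same content-assist request hits the cache. All names here are illustrative assumptions, not the real JDT/Core implementation, and a real fix would also need to invalidate the cache when the buffer changes.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch of caching the shared analysis inside the engine so that
// successive calls for the same (unit, offset) reuse the computed context.
class CachingEngineSketch {
    int analyses = 0; // counts how often the expensive analysis really runs
    private final Map<String, String> contextCache = new HashMap<>();

    // Key the cache on unit + offset; on a miss, run the expensive shared
    // analysis once and remember its result.
    String analyze(String unit, int offset) {
        String key = unit + "@" + offset;
        return contextCache.computeIfAbsent(key, k -> {
            analyses++; // expensive shared work happens only on a cache miss
            return "context(" + k + ")";
        });
    }

    List<String> complete(String unit, int offset, List<String> kinds) {
        String context = analyze(unit, offset); // cached after the first call
        return kinds.stream().map(k -> context + "->" + k).toList();
    }
}
```

With this shape, the two computers described above would still call the engine twice, but only the first call would pay for the shared analysis.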
I did some CPU profiling to estimate the potential benefit of this kind of improvement in a pure Eclipse SDK install. This profiling is not very accurate, but it allows us to gauge the problem. If we complete at a location where types and methods/fields can be proposed, about 8% of the completion time is duplicate effort. If we complete at a location where only methods/fields can be proposed, about 49% of the completion time is duplicate effort, but the full execution time is very short in this case. These results are explained by the fact that type inference is expensive and is done only for JavaTypeCompletionProposalComputer. The most interesting case to improve is when types are proposed, but in that case the gain would be less than 8%. If there are completion computers other than the JDT/Text ones, the gain could be bigger, especially if those computers request TYPE_REF.
>This problem could be fixed in JDT/Text but it would require lot of rework of
>the JDT/Text completion computers API and would require to rewrite all
>computers to benefit from the improvement.

It is important to mention here that the contributed completion computers are not under the control of the Eclipse SDK, and hence there won't be much benefit until we ping the clients and force them to rewrite their code.
Please also check the improvements when running the JDT Text content assist performance test: org.eclipse.jdt.text.tests.performance.OpenJavaContentAssistTest
Did you get a chance to test the numbers with the test I provided?
*** Bug 164449 has been marked as a duplicate of this bug. ***
I profiled the OpenJavaContentAssistTest. The gain would be less than 40% of the time spent in the completion engine. But the test case in OpenJavaContentAssistTest computes only method and field proposals. If you move the cursor by 2 (" l|ineText" instead of "| lineText") in the test case, then type proposals become possible and the potential gain would be less than 20%. In this test case the workspace contains only the SWT sources. If the workspace contains more types, the gain would be smaller. All these gains are maximum theoretical gains; I have not yet written a fix or prototype.
>If the workspace contains more types then the gain would be smaller.

I guess it should read "would be bigger", right?
No, 'smaller' is correct. With a pure Eclipse SDK install, type proposals are computed only once, in JavaTypeCompletionProposalComputer, so keeping the context would not optimize the type inference case. It would be different if JavaNoTypeCompletionProposalComputer also needed to infer types. Inferring types in JavaNoTypeCompletionProposalComputer could become necessary for the fix of bug 6930, but that is not the case currently.
Ah, OK. I thought you had code for bug 6930 in place when testing.
Discussed with the Text team. Though a nice suggestion, this is low-priority work.
David, just for your info: I'm going to fix the known double invocation (see bug 164449) when using content assist out of the box.
As bug 164449 is fixed, we do not plan to fix this bug.
Verified for 3.5M3