Summary: Overloaded method requires clients to add plug-in dependency

Product: [Eclipse Project] JDT
Component: Core
Status: NEW
Severity: major
Priority: P3
Version: 3.1.2
Target Milestone: ---
Hardware: All
OS: All
Reporter: Dani Megert <daniel_megert>
Assignee: JDT-Core-Inbox <jdt-core-inbox>
QA Contact: ---
CC: bokowski, darin.eclipse, gerd, gunnar, igor, jarthana, jbraeuchi, jcompagner, john.arthorne, kazm, manoj.palat, markus.kell.r, mkappeller, mypurchase, Olivier_Thomann, sasikanth.bharadwaj, srikanth_sankaran, stephan.herrmann, wassim.melhem
Keywords: helpwanted
See Also: https://bugs.eclipse.org/bugs/show_bug.cgi?id=533586
Whiteboard: stalebug
Bug Depends on: 110176, 73957, 502829
Bug Blocks: 13221, 148057
Attachments:
Description

Dani Megert 2006-06-27 12:31:07 EDT

Created attachment 45392 [details]
Java only test case
NOTE: not even swapping the parameters helps: as soon as the compiler sees the same method name with the new type, it wants that type on the build path.

This is at least a major bug because it makes the PDE tooling useless: PDE's Organize Manifest... tool allows cleaning up a manifest by checking the dependencies using Java search and then removing plug-ins from which no types or methods are used. This would work just fine except that the result is a flood of errors in my workspace caused by this bug. Here's a nice example: load 'org.eclipse.ltk.ui.refactoring' from CVS and clean its manifest: among other changes it will remove 'org.eclipse.core.filesystem' - which is correct, BUT: because RefactoringHistoryManager.readRefactoringDescriptorProxies(...) is overloaded (with a private!!!) method that uses IFileStore, we end up with compile errors.

This appears to be the same issue as in bug 117804. Not sure if we can convince Kent this time ;-)

Dani, this is probably not what you want, but you could avoid the issue by renaming the private method readRefactoringDescriptorProxies.

Correct me if I'm wrong, but I don't think the language spec talks about the case where you try to compile a class and not all of its direct and indirect dependencies are available. Note that dependencies could be either API or implementation dependencies. This means that, potentially, fixing this could lead to cases where plug-ins can only be compiled with the Eclipse compiler but not with javac. Or is this already the case? How about a warning for cases of overloading where not all overloaded methods have the same visibility?

Boris is right. Resilience to an incomplete classpath is unspecified, and therefore we cannot talk about a bug here; at most this is a courtesy of some compilers when they operate in this mode. As a consequence, if we handle this scenario, then clients may be forced into using our compiler only, which could be an issue for some.
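The LTK scenario above can be reduced to a self-contained sketch. All names below are hypothetical stand-ins, not the real LTK or filesystem types. With every type on the classpath this compiles and runs; the point of the bug is that if the jar providing ThirdPartyType is removed, the call to read(File) fails with the "indirectly referenced from required .class files" error, even though the client never touches the private overload.

```java
import java.io.File;

class ThirdPartyType {}  // stand-in for a type from another plug-in (think IFileStore)

class HistoryManager {
    // The overload clients actually call.
    static String read(File file) {
        return "file";
    }

    // Private overload whose signature drags in the third-party type; resolving
    // any call named "read" makes the compiler look at this signature too.
    private static String read(ThirdPartyType store) {
        return "store";
    }
}

public class Client {
    public static void main(String[] args) {
        // Only the File overload is applicable, yet ThirdPartyType must still
        // be resolvable for this call to compile.
        System.out.println(HistoryManager.read(new File("history.xml")));
    }
}
```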
To summarize, we are going to investigate being nicer and more predictable (currently there is some ordering issue), but we must remain reasonable. The compiler has to check various scenarios (access validation, selecting the most specific method incarnation) and these require resolving more than one candidate. I think the core issue remains that you are exposing yourself to problems when overloading some protocols (same selector and arg count); engineering your code differently is a better approach, since it will not tie your code to one specific compiler implementation (i.e. if we change our compiler, you will need releng to use the new version from then on to process your code, and will not be able to compile with a different vendor or an earlier version of our compiler)...

> Dani, this is probably not what you want, but you could avoid the issue by
> renaming the private method readRefactoringDescriptorProxies.

The problem/error normally surfaces on the client side, i.e. the client normally has no power to go and rename methods in other plug-ins.

> How about a warning for cases of overloading where not all overloaded methods
> have the same visibility?

This won't help much as the problem surfaces independent of visibility: as soon as you overload a method, the compiler seems to want to see all signature types.

(In reply to comment #7)
> The problem/error normally surfaces on the client side i.e. the client normally
> has no power to go and rename methods in other plug-ins.

Good point. This reminds me of a discussion I had with Kent about a better error message than "indirectly referenced" - since this error occurs when client code is compiled, it is very hard for clients to track down why the heck that particular class needs to be on the classpath. Yes, I know it could be a long chain of references, but just reporting the (n-1)th reference (e.g. in our example the method readRefactoringDescriptorProxies) would already be very helpful.
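One reason the compiler must resolve every candidate, as noted above, is most-specific-method selection. A minimal sketch (hypothetical names): choosing between the two overloads below requires resolving both parameter types and their subtype relationship, which is why a single unresolvable signature type can block the whole selection.

```java
class Base {}
class Derived extends Base {}

class Api {
    static String f(Base b)    { return "base"; }
    static String f(Derived d) { return "derived"; }
}

public class MostSpecificDemo {
    public static void main(String[] args) {
        // Both overloads are applicable to a Derived argument; the compiler
        // must know that Derived is a subtype of Base to pick f(Derived).
        System.out.println(Api.f(new Derived()));
    }
}
```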
Another idea - a little radical perhaps: since this is not a problem at load time / runtime, it would be possible to provide a JAR with just the API stubs for purposes of compiling client plug-ins, while using the complete JAR only at runtime. Think "definition modules", or (gasp) "header files". :-)

(In reply to comment #3)
> This is at least a major bug because it makes the PDE Tooling useless: PDE's
> Organize Manifest... tool allows to clean up a manifest by checking the
> dependencies using Java search and then removing plug-ins from which no types
> or methods are used. This would work just fine except that the result is a
> flood of errors on my workspace caused by this bug.

I do agree with Dani that this is a real problem, especially given that Jeff recently recommended running the Organize Manifest... tool regularly, and that the resulting errors are very hard to understand due to the unspecific error message.

It's not just overloaded methods: bug 154781 shows the same problem when implementing a new interface which comes from a newly added plug-in: because the class implements that interface, all clients using that class are forced to add this bundle to their MANIFEST.MF, otherwise they get compile errors, i.e. source compatibility gets broken as soon as someone implements or extends a new type coming from a newly added dependency.

(In reply to comment #10)
> It's not just overloaded methods: bug 154781 shows the same problem
> implementing a new interface which comes from a newly added plug-in: now,
> because the class implements that interface all clients using that class are
> forced to add this bundle to their MANIFEST.MF, otherwise they get compile
> errors, i.e. source compatibility gets broken as soon as someone implements or
> extends a new type coming from a newly added dependency.
This might be a stupid question, but shouldn't the PDE classpath container add the new plug-in to the build path but hide all its classes using access rules until the plug-in is referenced explicitly? This should make the classes available to the compiler but not to the user.

Are you suggesting that PDE always automatically adds all required but not listed plug-ins and sets up the access rules for them? If not, how do you expect PDE to detect the special cases/plug-ins?

(In reply to comment #12)
> Are you suggesting that PDE always automatically adds all required but not
> listed plug-ins and sets up the access rules for them?

IMHO the PDE classpath container should mimic the runtime behavior as closely as possible. At runtime this works because we have bundle-specific class loaders. But the compiler only knows about the project build path the type is resolved in, and not about context-specific build paths. That's why this is only a workspace problem and not a problem at runtime. The only option seems to be making the indirectly referenced bundles available to the compiler but "not visible" to the client.

Addition to comment 10: the problem in client code appears not only for references to the changed type but also for those that reference any subtype (see bug 154781 for a scenario). I guess a fix for bug 73957 would resolve this issue too.

Bug 148057 can also not be fixed in a straightforward manner due to this bug, i.e. instead of adding a new variant of connect(...) to file buffers I now have to call it connect2(...) or connectFileStore(...) - not a very object-oriented way.

> I guess a fix for bug 73957 would resolve this issue too.

Not unless we can flag those non-explicit Java build path entries to be used only by the compiler. Wassim, how about adding the required entries to the Java build path with a different severity (like invisible) so that they cannot be seen or reached by the user? Also discussed with Philippe.
He will try to make the compiler more resilient in cases where we already have an exact match, e.g. in bug 148057 we have this case:
- there is an API IFileBufferManager.connect(IPath, LocationKind, IProgressM...).
- a client calls myManager.connect(IPath, LocationKind, null).
- I add IFileBufferManager.connect(IFileStore, IProgressM...)
==> since the new method differs in the arg count and since there is a perfect match, I would like the compiler not to complain.

Re: comment 18. This is only a workaround and a nice-to-have. It doesn't address the general situation, and recently we came across situations where exact matches would still need to be treated as errors by the compiler (i.e. it needs to match them against other possible matches to flag ambiguities, e.g. bug 166355); i.e. if the compiler was resilient, it could still result in a situation where code would seem to compile fine, but as soon as you added the other missing pieces to the classpath, the story would fail again (which would be scary). Adding more forbidden material to the classpath is a better approach, and will still allow all tooling to perform nicely (you can still search for refs even in forbidden code, you just cannot refer to it).

To clarify, what Dani would need is a mode where the PDE container would augment the build path with all the entries which would be on the runtime path but are not on the build path, and treat these extra entries as forbidden (i.e. for system purposes).

I'm also in favor of the solution with the augmented build path, but we have to make sure to use some new kind of 'forbidden', as the current one can be ignored by the user and can also be chosen to appear in code assist.

By default, code assist hides forbidden proposals. I don't think we should introduce yet another option here. These are really forbidden.
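The exact-match case above can be sketched in plain Java with stand-in types (PathLike and FileStoreLike are hypothetical substitutes for IPath and IFileStore). Argument count alone rules out the new overload for the existing call, yet today the compiler still insists on resolving FileStoreLike before accepting it; with all types present, the sketch compiles and selects the path variant.

```java
class PathLike {}       // stand-in for IPath
class FileStoreLike {}  // stand-in for IFileStore (the type clients may not have)

class FileBufferManager {
    // Existing API: three-argument variant taking a path.
    static String connect(PathLike path, int kind, Object monitor) {
        return "path";
    }

    // Newly added API: two-argument variant taking a file store.
    static String connect(FileStoreLike store, Object monitor) {
        return "store";
    }
}

public class ConnectDemo {
    public static void main(String[] args) {
        // Three arguments: only the PathLike overload can apply, so this is
        // the "perfect match" case where resilience is being requested.
        System.out.println(FileBufferManager.connect(new PathLike(), 0, null));
    }
}
```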
But there's a difference between a) forbidden because the package is not in the Export-Package: of a manifest or because it's from rt.jar and starts with "com.", and b) forbidden because it is not on the build path but available at run time. There can be situations where it's OK to set the severity of (a) to less than Error. Violations of (b) should never be allowed, since they can lead to exceptions if a required plug-in changes its (non-exported) required plug-ins. I agree that we don't need another user-configurable option, but we should make sure the user cannot relax (b).

(In reply to comment #23) OK, this distinction might not be that important in practice (if users want to screw up, it's their problem...). We could always add the distinction later if necessary; but this discussion should not delay the feature, IMO.

As I point out in bug 164266, PDE would consider stuffing the classpath to make the type hierarchy complete and the JDT tooling happy, but I am hesitant for two real reasons:
1. The forbidden severity is customizable, which is not good. Classes from indirect dependencies must never appear in content assist.
2. Indirect dependencies will appear as legitimate things on the classpath as far as the UI is concerned (i.e. in the Package Explorer or the Java build path properties page). This is misleading, since the corresponding plug-ins are not on the runtime classpath.

So if there is a new mechanism that is stronger than 'forbidden access', and if jdt/ui can filter out the classpath entries that are used as stuffing, then we can solve many issues regarding 'indirect references to classes', including bug 73957. I am also afraid that stuffing the classpath with these entries may forever mask real bugs in the compiler process (if any).

*** Bug 164266 has been marked as a duplicate of this bug. ***

Re: comment 25. I am not sure an extra access rule level would be useful here (say Forb'hidden). Imagine an end user having two projects.
P1 with a forb'hidden jar on its build path, P2 with no jar on its build path. In both cases the Package Explorer would look exactly the same, but he would still get different compiler error messages when referencing an offending type in each project. This would be misleading. If users decide forbidden material becomes OK, then how different is that from letting them hack their plug-in manifest? Are you going to prevent them from doing that?

To clarify myself: Forb'hidden refers to point 2 in comment 25, where these new kinds of access rules should be made invisible in the UI.

Hope is that bug 110176 will fix this.

Functionality similar to that discussed here seems to have regressed from 3.4 to 3.5 (this is pasted from bug 73957, comment 67). I've got a large OSGi-based system, and some problems popped up in 3.5 that weren't present in 3.4. The real-world situation is as follows:
* Bundle A imports org.osgi.service.component
* Bundle A does NOT import org.osgi.framework
* A class in bundle A calls ComponentContext.locateService(String name)
* The class in bundle A does NOT call ComponentContext.locateService(String name, ServiceReference ref)

This worked fine in Eclipse 3.4. In 3.5, however, the call to ComponentContext.locateService(String name) gives me the error: "The type org.osgi.framework.ServiceReference cannot be resolved. It is indirectly referenced from required .class files"

I dug into this a bit deeper and found some stuff. There seems to be a case when a method is overloaded in the following manner:
* method1(AvailableType param)
* method1(AvailableType param, UnavailableType param2)

Assuming AvailableType is visible to my bundle but UnavailableType isn't, calls to the first method will fail in Eclipse 3.5 but succeed in Eclipse 3.4.
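The regression scenario above can be sketched as a plain-Java stand-in (all names are hypothetical; ComponentContext and ServiceReference are replaced by dummy classes). With both types on the classpath this compiles and runs; the report is that with UnavailableType off the classpath, 3.5 rejects the call while 3.4 accepted it.

```java
class AvailableType {}
class UnavailableType {}  // stand-in for a type the bundle does not import

class Component {
    static String method1(AvailableType p) {
        return "one-arg";
    }
    static String method1(AvailableType p, UnavailableType q) {
        return "two-arg";
    }
}

public class Method1Demo {
    public static void main(String[] args) {
        // Only the one-argument overload is applicable here, yet resolving
        // this call still makes the compiler look at both signatures.
        System.out.println(Component.method1(new AvailableType()));
    }
}
```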
Another scenario:
* method2(int param)
* method2(int param, UnavailableType param2)

A call to "method2(123)" will work in both 3.5 and 3.4. And finally:
* method3(Object param)
* method3(Object param, UnavailableType param2)

A call to "method3(anyObject)" will fail in both 3.4 and 3.5 (with the same "indirectly referenced from required .class files" message). Please see attachment 137730 [details] for a Java-only example.

Jeremy, this bug here is well understood. If you see something that worked for you in 3.4 and now seems broken, then please file a separate bug report.

(In reply to comment #31)
> Jeremy this bug here is well understood. If you see something that worked for
> you in 3.4 and now seems broken the please file a separate bug report.

Thanks Dani, created bug 278745.

Hi, I have a similar problem (simple Java projects, no plug-ins) with 3.4: http://www.eclipse.org/newsportal/article.php?id=24563&group=eclipse.tools.jdt#24563
jakob

Created attachment 139205 [details]
ZIP file containing three projects showing the scenario described in http://www.eclipse.org/newsportal/article.php?id=24563&group=eclipse.tools.jdt#24563

I've encountered this or similar issues in various situations lately, including a debug session where "indirect dependencies" prevented me from evaluating some code snippet. I think it would be best if the compiler operated in different "modes":

[A] Plain Java, everything is on a flat classpath: no need to change the compiler, the existing resilience is sufficient / more than needed. All classes live in the same world.

[B] OSGi: compilation must only depend on explicitly visible classes. Resolving foreign binary classes should never trigger reading non-exported (or non-re-exported) classes. Classes live in separate worlds, and compiling one bundle must depend only on those classes that are visible in the current world.
[C] Debugging: the developer using a debugger is not part of any bundle; the code (s)he enters could be seen as super-user mode: in the debugger we see everything, and we'd also like to write arbitrary snippets of code disregarding any encapsulation.

While [C] is clearly a nice-to-have, something along the lines of [B] may be the only feasible solution to really appreciate the modularization a la OSGi at development time. Wouldn't all the ambiguous situations actually be flagged nowadays as exposing a non-API type in the API? I could not think of any legal OSGi code that would require the compiler to look behind the facade of a required bundle.

Technically speaking, I think methods with MissingTypes in their signature should simply be considered unmentionable in the OSGi mode, because at runtime no client will be able to provide argument values for such a method. Still, the missing type is not an error, because the OSGi runtime will make sure that the bundle owning this method will know about the class. Put differently, a compiler targeted at OSGi must be able to compile with an incomplete classpath, because classpaths are not really a valid concept for OSGi. Am I missing something?

Just noting the dependency on bug 73957, which seems like the same bug reported against PDE. A solution proposed in bug 73957 is to add the transitive closure of all required bundles to the PDE classpath container, with access rules to forbid access to those that are not seen as API (only required for runtime).

(In reply to comment #36)
> Just noting the dependency on bug 73957 which seems like the same bug reported
> against PDE. A solution proposed in bug 73957 is to add the transitive closure
> of all required bundles to the PDE classpath container with access rules to
> forbid access to those that are not seen as API (only required for runtime).
Also note that a flat classpath (even one holding all transitively reachable classes) will not truly reflect the OSGi semantics (see Martin in bug 110176 comment 5). Conversely, nested containers seem to be pretty much a dead end given Jerome's evaluation in bug 110176 comment 10.

I'd really like to hear comments on my comment 35. I think for the compiler this would be simple and correct. I'm not 100% sure if this would adversely affect the ability to browse required plug-ins and navigate through transitively reachable code.

To be investigated during 3.8 and either fixed or closed as WONTFIX.

I had intended to look into this but didn't yet have the time. OTOH, it would be sad to close it as WONTFIX without real investigation.

I'll take a look at this. I have a few similar ones on my plate already.

(In reply to comment #0)
> This seems to be a plug-in (build path) related issue since a simple Java only
> test case (see attached zip file) seems to work just fine.

Didn't check when this changed. With 3.8 M5 the attached plain Java test cases also fail. Evaluating.

(In reply to comment #38)
> To be investigated during 3.8 and either fix or close as WONTFIX.

I am moving this out to after 3.8 to allow me to focus on some prototyping effort for Java 8. Echoing comment #38, I am also loath to close it without investigating fully.

(In reply to comment #42)
> I am moving this out to after 3.8 to allow me to focus on some
> prototyping effort for Java8. Echoing comment# 38, I am also
> loathe to closing it without investigating fully.

I'm glad you don't give up, but then please don't postpone again later ;-).
With the latest JLS 9 EDR plus answers by Alex Buckley on jigsaw-dev, I'm beginning to understand that in Java 9 the compiler is obliged to handle even types that are not *accessible* to the module being compiled (where accessible includes the rules of exports and requires from JPMS). Two reasons to think so:
- same-named types from different modules are not a name clash if at most one of those types is accessible to any given module
- passing a value of an accessible subtype where a parameter is declared using an inaccessible supertype is allowed

Assume:

    M0
      P0
        T0
      exports P0 to M1

    M1
      P1
        T1
          m1(Object)
          m1(T0)        // type not accessible but method still selectable
        TX extends T0
      requires M0       // not transitive!!
      exports P1

    M2
      P2
        T2
          m(T1 t1, TX tx) { t1.m1(tx); }
      requires M1

The compiler must recognize that the invocation selects m1(T0) despite T0 being inaccessible. For this, the compiler needs to load and resolve T0 during compilation of M2, even though no accessibility edge exists. M2 could contain another P1.T0 which must not raise an error.

If we implement this for Java 9, couldn't this once and for all resolve this issue, not just for Java 9 but also in PDE scenarios? Viz., once the compiler is able to load all classes from transitive dependencies with no danger of bogus name clashes, we can advise PDE to automatically populate the build path with the transitive closure of dependencies. => Back to work in bug 502829...

bulk move out of 4.8

Bulk move out of 4.9

This bug hasn't had any activity in quite some time. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. If you have further information on the current state of the bug, please add it. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant.
-- The automated Eclipse Genie.