In the attached test case, the Eclipse compiler (3.3M3) does not report an ambiguity, whereas the Sun javac compiler fails to compile the test with:

> javac -version A.java B.java C.java
javac 1.5.0_10
C.java:3: reference to remove is ambiguous, both method remove(A<?>) in B and method remove(C.D) in C match
        remove(new D());
        ^
1 error
Created attachment 54782: Test case
JDKs 1.5.0_11, 6_01_b03 and 7_b06 all report the same error. We don't.
Added test cases AmbiguousMethodTest#51 to 53. javac complains about 51 and 53; we don't.
For clarity, copying test051 here:

interface I<T> {
}
class Y {
    void bar(I<?> x) {
    }
}
public class X extends Y {
    void foo() {
        bar(new Z()); // a
    }
    void bar(Z x) {
    }
    private static class Z implements I {
    }
}

My reading of JLS3 yields the following:
- by 15.12.2.2, both X#bar and Y#bar are applicable to the call on line a; accordingly, we should pick the most specific method of those, if any;
- by 15.12.2.5, because I#RAW does not <: I<?> and I<?> does not <: Z, X#bar is not more specific than Y#bar, and Y#bar is not more specific than X#bar; accordingly, they are both maximally specific; moreover, they do not have override-equivalent signatures; hence neither is the most specific method, and we should report an ambiguous call.
(For the fact that I#RAW does not <: I<?>: it comes from 4.8, 'The superclasses (respectively, superinterfaces) of a raw type are the erasures of the superclasses (superinterfaces) of any of its parameterized invocations.', according to which the superinterfaces of I#RAW would be limited to |I|. I<?> <: I#RAW, but that does not help here.)
Philippe, what do you think?
There is something about RAW types... take this example:

interface I<T> {}
class X {
    void foo() {
        bar(new Z());
    }
    void bar(Z x) {}
    void bar(I<?> x) {}
    // void bar(I<Object> x) {}
    // <S> void bar(I<S> x) {}
}
class Z implements I {}

It does not matter which implementation of bar(I...) is used: javac reports an ambiguity as long as Z implements I#RAW. But as soon as Z implements I<Object>, then bar(Z) is a match, AND it is also a match if the other method is void bar(I x) {}. It is the same if Z extends A vs. A<Object>.
Definitely, and that's what I suggested in comment 4. If you consider:

interface I<T> {}
class X {
    void bar(Z x) { System.out.println("bar(Z)"); }
    void bar(I<?> x) { System.out.println("bar(I)"); }
    public static void main(String args[]) {
        (new X()).bar(new Z());
    }
}
class Z implements I<Object> {}

then Z <: I<?> and X#bar(Z) is the most specific method. If Z implements I instead of I<Object>, then we no longer have Z <: I<?> and we should get an ambiguity (which javac reports).
Released additional test cases AmbiguousMethodTest#54 (active) and 55 (inactive) to track your examples.
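For reference, here is the non-ambiguous variant from this comment consolidated into one runnable file (same names as above): because Z implements I<Object>, we have Z <: I<Object> <: I<?>, so bar(Z) is strictly more specific than bar(I<?>) and both javac and Eclipse resolve the call the same way.

```java
// Non-ambiguous variant: Z implements the parameterized I<Object>,
// so Z <: I<?> holds and bar(Z) is the most specific applicable method.
interface I<T> {}

class Z implements I<Object> {}

public class X {
    void bar(Z x) { System.out.println("bar(Z)"); }
    void bar(I<?> x) { System.out.println("bar(I)"); }

    public static void main(String[] args) {
        new X().bar(new Z()); // prints "bar(Z)"
    }
}
```

Switching Z's declaration to the raw `class Z implements I {}` is exactly the ambiguous case this bug is about.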
Actually I don't think I agree that this should result in an ambiguous call. Start with:

interface I {}
class Y {
    void bar(I x) {}
}
class X extends Y {
    public static void main(String[] s) {
        new X().bar(new Z());
    }
    void bar(Z x) { System.out.print(1); }
}
class Z implements I {}

All compilers agree that X.bar(Z) is the best match. Now start to convert I into a generic type:

interface I<T> {}
class Y {
    void bar(I<?> x) {}
}

and leave the rest alone (assume these types are in a different project):

class X extends Y {
    public static void main(String[] s) {
        new X().bar(new Z());
    }
    void bar(Z x) { System.out.print(1); }
}
class Z implements I {}

Backwards compatibility is supposed to allow users to update their code starting with superclasses and moving down, but we're prevented from that in this case. And besides, bar(Z) will ALWAYS be the best match, regardless of the definition of I or Z.
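The pre-generics starting point above, as a single self-contained file; the printed 0 in Y#bar is added here only to make the resolution visible and is not part of the original comment. Every compiler picks X.bar(Z), so this prints 1.

```java
// Pre-generics baseline: with a non-generic I, X.bar(Z) is
// unambiguously the best match for bar(new Z()).
interface I {}

class Y {
    void bar(I x) { System.out.print(0); } // never chosen for bar(new Z())
}

class Z implements I {}

public class X extends Y {
    void bar(Z x) { System.out.print(1); }

    public static void main(String[] s) {
        new X().bar(new Z()); // prints 1
    }
}
```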
(In reply to comment #7)
> Backwards compatibility is supposed to allow users to update their code
> starting with superclasses and moving down, but we're prevented in this case.
I am not saying that javac's behavior is desirable... I am just trying to figure out whether this is consistent with the spec. Besides, if we used the following definition for Y, I am not sure of the value of electing X#bar as the most specific method:

class Y {
    void bar(I<? extends WhateverConcreteClass> x) {
    }
}
I think the spec is missing this case. I don't disagree with your description of the spec. And as for this specific method invocation, bar(new Z()), there can never be a better match than X.bar(Z). It's irrelevant how Y.bar is defined. That's why I have a problem with this case being spec'ed as ambiguous.
(In reply to comment #9)
> I think the spec is missing this case. I don't disagree with your description
> of the spec.
OK.
> And as for this specific method invocation "bar(new Z())", there can never be a
> better match than X.bar(Z).
It would be difficult to disagree. It is far from the only case where progressive generification is broken, though. For example, given the client lib fragment:

class X implements I {}
class Z extends X implements J {}

replacing the non-generic lib fragment:

interface I {}
interface J extends I {}

by a generified version:

interface I<T> {}
interface J extends I<String> {}

makes Z erroneous (it cannot implement both the raw I, inherited from X, and I<String>, inherited from J). [I believe that the library migration issue is currently better addressed from a binary standpoint (that is, what happens when a client compiled against the non-generic version N of a lib runs against the generic version N+P of the same lib) than it is at the source level, despite the efforts made to address the latter.] So the question would be: do we follow the spec (and javac, BTW), or do we take an informed decision not to do so in this case?
Yes, that is the question. I agree that users who develop in Eclipse and also compile using javac want us to emulate its 'bugs' as best we can, but in this case, where there can never be a better match than X.bar(Z), I think we should continue as is. Any other opinions?
I am tempted to think that the raw conversion is what causes this issue...
The current situation is that:
- the spec and javac are aligned with one another;
- we behave differently;
- Kent provided strong arguments in support of keeping our current behavior.
The question remains: do we follow the spec (and javac, BTW), or do we take an informed decision not to do so in this case?
I am having a similar problem. Mine relates to calls to super() where the call is ambiguous due to generics. Consider the classes:

public class SuperClass<T> {
    public SuperClass(String _arg1) {
        System.out.println("String");
    }
    public SuperClass(T _arg1) {
        System.out.println("T");
    }
}
class SubClass<T> extends SuperClass<T> {
    public SubClass(String _arg1) {
        super(_arg1);
    }
    public SubClass(T _arg1) {
        super(_arg1);
    }
}

When I invoke "new SubClass<String>("xyz")", which constructor is invoked? javac takes the position that this is ambiguous, and will actually fail with a "reference to SuperClass is ambiguous..." message. It's hard to say, but it looks like Eclipse resolves this with "last match wins": it makes a list of both possible matches and chooses the "last", based on the order in which the constructors appear in the class. This is not intuitive to me! However, I also don't think there exists an intuitive solution to this, and that's why I feel javac is right here in just not allowing this kind of call. A related example:

class StringOnlySubClass extends SuperClass<String> {
    public StringOnlySubClass(String _arg1) {
        super(_arg1);
    }
}

In this case, 3.2 appears to use "first match wins", and 3.1 uses "last match wins". Could this be added as a warning, if not an error?
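A hedged sketch of one workaround for the super() case above, not taken from the bug report: if the type parameter's bound is made disjoint from String (here `T extends Number`, purely as an illustrative assumption), the String and T overloads can no longer overlap, so each super(...) call matches exactly one SuperClass constructor and no compiler reports an ambiguity.

```java
// Workaround sketch: bounding T so it can never be String removes the
// overlap between SuperClass(String) and SuperClass(T).
class SuperClass<T extends Number> {
    public SuperClass(String arg) { System.out.println("String"); }
    public SuperClass(T arg) { System.out.println("T"); }
}

public class SubClass<T extends Number> extends SuperClass<T> {
    public SubClass(String arg) { super(arg); } // only SuperClass(String) applies
    public SubClass(T arg) { super(arg); }      // only SuperClass(T) applies

    public static void main(String[] args) {
        new SubClass<Integer>("xyz"); // prints "String"
        new SubClass<Integer>(42);    // prints "T"
    }
}
```

This obviously changes the API's meaning, so it only applies when the type parameter genuinely excludes String; it is meant to show why the unbounded case is the problematic one.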
Adam - I added bug 188960 to deal with your case.
Philippe was correct in comment #12. Bug 206930 covers more cases than this one, so I'll tag this one as the dup.

*** This bug has been marked as a duplicate of bug 206930 ***
Verified for 3.5M6 using I20090309-0100.