This is a re-open of 154191. It would be great to have new warnings/errors from the compiler based on annotations. The first aim would be to check for null references. Several "things" can be annotated:
- Method parameters
- Method return value
- Class fields
- Local variables
The default could be "don't perform any check, and assume nothing", and there would be @NonNull (cannot be null, please check assignments) and @Nullable (can be null, please check every use). Here are some ideas:
- JSR 305: Annotations for Software Defect Detection http://jcp.org/en/jsr/detail?id=305
- FindBugs: @CheckForNull, @CheckReturnValue, @NonNull, @Nullable http://findbugs.sourceforge.net/manual/annotations.html
- IntelliJ IDEA: @Nullable and @NotNull http://www.jetbrains.com/idea/documentation/howto.html
- Nully: a plugin for IntelliJ IDEA https://nully.dev.java.net/
I think it would be awesome :D Interoperability is a minor issue, I think, since search & replace could do the trick. Even Eclipse-specific annotations would be really great!
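For illustration, a minimal sketch of the kind of checking requested here. The annotation types below are hypothetical stand-ins (no standard package existed at the time); the comments describe the diagnostics a compiler could emit:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical stand-in annotations mirroring the proposal above.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.PARAMETER, ElementType.FIELD, ElementType.LOCAL_VARIABLE})
@interface NonNull {}

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.PARAMETER, ElementType.FIELD, ElementType.LOCAL_VARIABLE})
@interface Nullable {}

class Example {
    // A checker could flag every call site that passes a possibly-null argument.
    static int length(@NonNull String s) {
        return s.length(); // no null check needed: s is declared non-null
    }

    // A checker could flag any unguarded dereference of this result.
    @Nullable
    static String lookup(String key) {
        return "value".equals(key) ? "found" : null;
    }
}
```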
Something really close has already been done, not using annotations but tags in comments. This paper talks about it, and explains why making non-null the default is easier:
- http://users.encs.concordia.ca/~chalin/papers/2006-003.v3s-pub.pdf
The project "JML JDT" became JML4, and then jmleclipse, but the project web site seems dead right now...
We know of Chalin et al. The current thinking is that annotations offer significant leverage on class files, and hence have an advantage over comments. Thanks for the pointer anyway.
(In reply to comment #2)
> We know of Chalin et al.
> The current thinking is that annotations offer significant leverage on
> class files, hence have an advantage over comments.
> Thanks for the pointer anyway.
Ok. And do you think this feature is worth it? If so, and if I wanted to try to do it, what things/tips should I know (in the org.eclipse.jdt.core module, for example)? :D
We would expect the feature to significantly augment the value of the current null analysis. In terms of tricks, in case you'd want to jump into the code:
- we do not consider fields at all so far (the rationale behind this being that we don't want to get in trouble with multi-threading considerations); hence this is probably the last part you'd want to tackle;
- marking parameters and locals as definitely null (or non-null) would be the easier starting point, I believe, focusing first on the effect within their scope (as opposed to the method's call sites, for parameters);
- the return type is probably a bit harder.
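The "effect within their scope" suggested above can be sketched as follows. The annotation type is a hypothetical stand-in; the comments describe what the flow analysis could conclude inside the method body:

```java
// Hypothetical stand-in; the default @Target permits use on parameters.
@interface NonNull {}

class FlowExample {
    static int process(@NonNull Object input) {
        // Within the method body, the analysis may treat 'input' as
        // definitely non-null, so this dereference needs no warning...
        int h = input.hashCode();
        // ...and a comparison against null could be flagged as redundant:
        if (input == null) {
            return -1; // dead code under the declared contract
        }
        return h;
    }
}
```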
I've started to handle some cases. Now I'm stuck: in MessageSend.nullStatus(FlowInfo), I can't access the compiler options, so I don't know whether I can use the annotations. How could I do this?
Compiler options will decide whether or not annotations get parsed/decoded. So if you see annotations, just go ahead: it means the compiler options were positioned correctly (i.e. we don't read/parse them unless the compliance level is suitable).
(In reply to comment #6)
> Compiler options will decide whether or not annotations get parsed/decoded.
> So if you see annotations, just go ahead: it means the compiler options
> were positioned correctly (i.e. we don't read/parse them unless the
> compliance level is suitable).
Well, I'm talking about a new option, "UseNonNullAnnotation", which can have the values IGNORE, WARNING, or ERROR. But you're right, I just need to set a flag during type checking so that I can use it in flow analysis.
Hello, I think I've finished with the @NonNull annotation. So far I have added:
in CompilerOptions:
* OPTION_UseNonNullAnnotation = "org.eclipse.jdt.core.compiler.processNonNullAnnotations"
* UseNonNullAnnotation = ASTNode.Bit53L;
in messages.properties:
* 3 new messages
in IProblem:
* 3 new constants
some code in ast.*
After more thought, I think I should add one option, "Default null-ness", with 3 values:
- Assume nothing, and use @NonNull and @Nullable
- Assume non-null, ignore @NonNull and use @Nullable
- Assume nullable, use @NonNull and ignore @Nullable
But I won't add all this if I'm not sure the feature will be added to Eclipse. So would you prefer what I have now, and wait for the new option, or would you prefer that I submit only one big patch in one or two months with everything in it? By the way, I suppose I must provide the JUnit tests and the UI modifications for the options? Could you tell me what you won't accept, what I must avoid, etc.
I'm a bit confused by the meaning of your option. Is that meant to tell the compiler to ignore the annotations? Or does it control the reporting of errors in specific situations?
(In reply to comment #9)
> I'm a bit confused by the meaning of your option. Is that meant to tell the
> compiler to ignore the annotations? Or does it control the reporting of
> errors in specific situations?
Well, if the default is set to non-null, @NonNull annotations can be ignored. The error reporting is another option, the one I've already added, whose value can be IGNORE, WARNING, or ERROR.
(In reply to comment #10)
> (In reply to comment #9)
> > I'm a bit confused by the meaning of your option. Is that meant to tell
> > the compiler to ignore the annotations? Or does it control the reporting
> > of errors in specific situations?
> Well, if the default is set to non-null, @NonNull annotations can be
> ignored.
I'd consider that the option's value proposition is to allow users to minimize the number of annotations in their code by setting a reasonable default for non-decorated variables. In other words, this would be less about ignoring the annotations than about pretending there are annotations when none are present. All in all, I believe I understand what you mean here.
> The error reporting is another option, the one I've already added, whose
> value can be IGNORE, WARNING, or ERROR.
This is the one I have difficulty understanding. Could you elaborate, please?
As there is still no Java standard for the annotation types to use, and to be compatible with sources written for IDEA or FindBugs, I suggest solving this with a new compiler option where you tell the compiler the qualified name of each annotation. For example:
org.eclipse.jdt.core.compiler.notNullAnnotation=org.eclipse.runtime.NotNull
As users might even mix different libraries, this could even be a list:
org.eclipse.jdt.core.compiler.notNullAnnotation=org.eclipse.runtime.NotNull;edu.umd.cs.findbugs.annotations.NonNull
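The proposal above can be sketched as plain key/value configuration. Note that the option key is the one proposed in this comment, not an existing JDT constant:

```java
import java.util.HashMap;
import java.util.Map;

class OptionSketch {
    // Hypothetical option key, as proposed in this comment.
    static final String NOT_NULL_ANNOTATION =
            "org.eclipse.jdt.core.compiler.notNullAnnotation";

    static Map<String, String> options() {
        Map<String, String> options = new HashMap<String, String>();
        // A semicolon-separated list would let users mix annotation libraries:
        options.put(NOT_NULL_ANNOTATION,
                "org.eclipse.runtime.NotNull;edu.umd.cs.findbugs.annotations.NonNull");
        return options;
    }

    // The compiler would split the value back into individual qualified names:
    static String[] annotationNames(Map<String, String> options) {
        return options.get(NOT_NULL_ANNOTATION).split(";");
    }
}
```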
(In reply to comment #12) > As there is still no Java standard for the annotation type to use, and to be > compatible with sources written for IDEA or FindBugs, I suggest to solve this > with a new compiler option where you tell the compiler what the qualified name > for each annotation is. > For example > org.eclipse.jdt.core.compiler.notNullAnnotation=org.eclipse.runtime.NotNull > > As users might even mix different libraries, this could even be a list: > org.eclipse.jdt.core.compiler.notNullAnnotation=org.eclipse.runtime.NotNull;edu.umd.cs.findbugs.annotations.NonNull > Yes. We discussed that with Philippe last year and still need to ponder the costs and benefits. The trick is that this may well be *needed* to get the feature to deliver any value. We still had some hope that some current JSRs that look at extending the set of 'well known' annotations (aka @Override etc.) would deliver some in the null analysis realm. This may then all boil down to timing issues.
1/ The option "Use @NonNull and @Nullable" can take the values IGNORE/WARNING/ERROR because I thought the user should be able to disable annotation checks.
2/ After even more thought:
* using non-null as the default would mean that all calls to HashMap.get() where the value is checked against null would report a "useless check for null-ness" warning, which is a pain;
* using nullable as the default would mean that all calls to non-null, but not annotated, methods (like JTable.getSelectedColumns()) would report a warning/error ("value could be null"), which is also a pain.
So I don't think the "Default null-ness" option is worth it anymore.
3/ Instead of checking for every possible annotation, I suggest a rather simple Eclipse feature: search & replace. Since no standard is finished, it would be the simplest solution for now. Also, wouldn't mixing 2 libraries mean that methods/locals can be tagged with 2 different annotations for the same purpose? I hope this won't happen.
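The HashMap.get() point in 2/ can be made concrete. Under a global non-null default, the null check below would be reported as useless even though it is essential:

```java
import java.util.HashMap;
import java.util.Map;

class DefaultPain {
    static String describe(Map<String, String> map, String key) {
        // Map.get() really can return null, regardless of any global default.
        String value = map.get(key);
        // Under an assumed non-null default this guard would draw a bogus
        // "useless check for null-ness" warning, even though it is required:
        if (value == null) {
            return "absent";
        }
        return value;
    }

    // Small driver populating a map, so the behavior can be exercised directly.
    static String demo(String key) {
        Map<String, String> m = new HashMap<String, String>();
        m.put("a", "1");
        return describe(m, key);
    }
}
```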
(In reply to comment #14)
> 1/ The option "Use @NonNull and @Nullable" can take the values
> IGNORE/WARNING/ERROR because I thought the user should be able to disable
> annotation checks.
Still don't get it. What do you call 'annotation checks' exactly? Would that be for the following case, or for something else?
@NonNull Object o;
...
o = null; // message if any would depend upon IGNORE/WARNING/ERROR
(In reply to comment #15)
> > 1/ The option "Use @NonNull and @Nullable" can take the values
> > IGNORE/WARNING/ERROR because I thought the user should be able to
> > disable annotation checks.
> Still don't get it. What do you call 'annotation checks' exactly? Would
> that be for the following case, or for something else?
> @NonNull Object o;
> ...
> o = null; // message if any would depend upon IGNORE/WARNING/ERROR
Exactly.
If you work with existing source code, you don't want to or can't do a Search/Replace. Maybe you are working in a team and the source code is shared in a repository, or you are using annotated code in a library.
(In reply to comment #16)
> (In reply to comment #15)
> > > 1/ The option "Use @NonNull and @Nullable" can take the values
> > > IGNORE/WARNING/ERROR because I thought the user should be able to
> > > disable annotation checks.
> > Still don't get it. What do you call 'annotation checks' exactly? Would
> > that be for the following case, or for something else?
> > @NonNull Object o;
> > ...
> > o = null; // message if any would depend upon IGNORE/WARNING/ERROR
> Exactly.
I do not really see the need for such a refinement. Using the severity of the 'Null pointer access' option would seem a reasonable choice to me.
(In reply to comment #18)
> I do not really see the need for such a refinement. Using the severity of
> the 'Null pointer access' option would seem a reasonable choice to me.
Okay.
(In reply to comment #17)
> If you work with existing source code, you don't want to or can't do a
> Search/Replace. Maybe you are working in a team and the source code is
> shared in a repository, or you are using annotated code in a library.
True... okay, so this one:
org.eclipse.jdt.core.compiler.notNullAnnotation=org.eclipse.runtime.NotNull;edu.umd.cs.findbugs.annotations.NonNull
(In reply to comment #19)
...
> org.eclipse.jdt.core.compiler.notNullAnnotation=org.eclipse.runtime.NotNull;edu.umd.cs.findbugs.annotations.NonNull
You'll want to expose the rationale for the choice of notNullAnnotation over nonNullAnnotation (aligning with JSR 305 might be the best option). Also, you may need to consider a plural (aka nonNullAnnotations). The inclusion of a suitable annotation in Eclipse is yet another topic, hence I'd take org.eclipse.runtime.NotNull as an example only.
That was just an example to illustrate. I completely agree with your comment; 'nonNullAnnotations' would be a better preference name, 'org.eclipse.runtime.NotNull' is just an example.
I'm stuck on something! Why are the annotations on an interface's method's arguments not present in the Binding? Is it on purpose? Using the AST View on this example confirms the behavior:
//--
package org;
public @interface NonNull { }
//--
public class X implements I1, I2 {
    // in MethodDeclaration (compiler.ast), if I get the binding
    // of I1.foo(Object, Object), the annotations are not there
    // why ?
    public void foo(Object o1, Object o2) {
    }
}
//--
public interface I1 {
    void foo(@org.NonNull Object o1, Object o2);
}
//--
public interface I2 {
    void foo(Object o1, @org.NonNull Object o2);
}
Missing annotation bindings could be a dup of bug 164660 (see bug 156352 for our test case)
(In reply to comment #22)
> I'm stuck on something! Why are the annotations on an interface's method's
> arguments not present in the Binding? Is it on purpose?
I am not the expert here, but from the existence of MethodBinding#getParameterAnnotations, I would deduce that you are not expected to get parameter annotations for free from method bindings. When analyzing X#foo, you will get a method declaration for it, but you will reach I1#foo (or I2#foo) through a method binding, upon which you would apparently need to ask for the parameter annotations. If you have already tried that, then please let me know.
(In reply to comment #24)
> I am not the expert here, but from the existence of
> MethodBinding#getParameterAnnotations, I would deduce that you are not
> expected to get parameter annotations for free from method bindings. When
> analyzing X#foo, you will get a method declaration for it, but you will
> reach I1#foo (or I2#foo) through a method binding, upon which you would
> apparently need to ask for the parameter annotations.
> If you have already tried that, then please let me know.
I think that's what I'm doing:
- in class org.eclipse.jdt.internal.compiler.ast.MethodDeclaration
- in method analyseCode(...)
- I use binding.declaringClass
- I call declaringClass.superInterfaces()
- on each super interface, I call getExactMethod
- and then getParameterAnnotations on the returned method binding
If I can't get the annotations for free, how can I get them? I think there isn't enough javadoc in the source code! I can't add the feature without the annotations...
I'm on vacation right now, but I'll investigate further when I'm back (three weeks from now). Would you mind sharing a preview version of your code?
Created attachment 74851 [details] Preview version of the NonNull annotation patch A review of this preview patch would be great :D
(In reply to comment #27)
> Created an attachment (id=74851) [details]
> Preview version of the NonNull annotation patch
> A review of this preview patch would be great :D
Just to let you know that this patch has "bitrotted": I tried patching the HEAD from 2008-01-04 and got 6 bunches of failed hunks.
Created attachment 86246 [details] Updated NonNull annotation patch to compile against HEAD 2008-01-04 Added an updated patch. This compiles, but has not been further tested.
Created attachment 86247 [details] Updated NonNull annotation patch to compile against HEAD 2008-01-04 Added an updated version that really does compile.
Created attachment 89354 [details] JUnit test for non null annotation feature These are all the tests I've done. Two of them fail. The reason is that I must be misusing ReferenceBinding.getExactMethod(...), because I don't always find the annotations of interfaces. If somebody could check the 2 failing tests, that would be great.
(In reply to comment #1)
> Something really close has already been done. Not using annotations but
> tags in comments.
> This paper talks about it, and explains why making non-null the default is
> easier:
> - http://users.encs.concordia.ca/~chalin/papers/2006-003.v3s-pub.pdf
> The project "JML JDT" became JML4, and then jmleclipse, but the project web
> site seems dead right now...
The JML4 project is alive and well. Two summaries were presented in 2007:
"The Architecture of JML4, a Proposed Integrated Verification Environment for JML"
http://users.encs.concordia.ca/~chalin/papers/TR-2007-006-v1zr.pdf
"An Integrated Verification Environment for JML: Architecture and Early Results"
http://users.encs.concordia.ca/~chalin/papers/2007-SAVCBS-Chalin-James-Karabotsos-IVE-for-Jml.pdf
A mailing list is available:
https://sourceforge.net/mailarchive/forum.php?forum_name=jmlspecs-reloaded
This bug got forgotten for some time, it seems. Yesterday on the JDT/Core call we talked about this, and afterwards I wrote a little essay:
http://wiki.eclipse.org/JDT_Core/Null_Analysis
Only after writing did I re-discover this bug, so I will have to go through the comments in this bug to check whether my text needs updating. I hope taking a fresh look at this issue, with all its alternatives and implications, might give new momentum to it. Please feel free to either correct/augment the wiki text or drop a quick comment in this bug. If we agree on a strategy, I'd love to help ship a solution.
(In reply to comment #33)
> [..]
We also have a patch here which may have put the initial infrastructure in place. Maybe we should look at it and see where we can go from there, if we decide to take the annotations approach.
Changing the version number to 3.7.
Can anyone help me understand why this bug/patch stalled?
I assume that, by using configurable annotation types, the direction taken in the patch is about the best we can do, given that JSR 305 doesn't look like it will deliver any time soon.
If everybody was basically happy with the patch, I will start playing with it to see what remains to be done.
(In reply to comment #35)
> Can anyone help me understand why this bug/patch stalled?
> I assume that, by using configurable annotation types, the direction taken
> in the patch is about the best we can do, given that JSR 305 doesn't look
> like it will deliver any time soon.
> If everybody was basically happy with the patch, I will start playing with
> it to see what remains to be done.
I have a vague memory that there was a dislike of using custom tool-specific annotations, and this feature was awaiting the introduction of standard annotations as in JSR 305. I don't know if that thinking has changed, though.
I personally think JSR 305 would be the best choice, as FindBugs (and its plugin for Eclipse) already supports them, people from Klocwork seem to have detectors based on that [1], and Coverity representatives also told us (Verigy) that their engine will support JSR 305 soon.
I have no idea why JSR 305 is in such a state, but I believe that this is the only "semi-standard" annotation package around. I personally would not like to have tool-specific annotations, nor "javadoc" annotations.
The question for me is: is JSR 305 alive? If not, why? How can we help to make it a (Java) industry standard? Should we work closely with Bill Pugh (JSR lead & FindBugs father) and drive the JSR forward?
[1] http://www.klocwork.com/blog/tag/jsr-305/
(In reply to comment #37)
> I personally think JSR 305 would be the best choice, as FindBugs (and its
> plugin for Eclipse) already supports them, people from Klocwork seem to
> have detectors based on that [1], and Coverity representatives also told us
> (Verigy) that their engine will support JSR 305 soon.
> I have no idea why JSR 305 is in such a state, but I believe that this is
> the only "semi-standard" annotation package around.
I agree, using JSR 305 would certainly be the primary choice. So, when you say other tools are already based on JSR 305, what document would they be using as the reference? I couldn't even find a draft specification, and the slide deck by Bill Pugh is still quite vague, not even mentioning any fully qualified annotation names. I don't think any community feedback has gone into it yet, has it?
Unless we have a more precise document, I would suggest we actually abstain from using any specific names in the implementation at this point, leaving concrete names as configuration options. Note that claiming to implement JSR 305 will probably still lead to very different interpretations/implementations at this point.
(In reply to comment #38)
> Note that claiming to implement JSR 305 will probably still lead to
> very different interpretations/implementations at this point.
We can certainly never claim that we implement the JSR. That would require both a specification and a TCK that we were permitted to run. However, there is unofficial javadoc available here to find out the proposed annotation names:
http://jsr-305.googlecode.com/svn/trunk/javadoc/javax/annotation/package-tree.html
(In reply to comment #39)
> We can certainly never claim that we implement the JSR. That would require
> both a specification and a TCK that we were permitted to run. However,
> there is unofficial javadoc available here to find out the proposed
> annotation names:
> http://jsr-305.googlecode.com/svn/trunk/javadoc/javax/annotation/package-tree.html
Thanks, that was exactly the missing link! Looking inside the Javadoc, this seems to be the newest information from JSR 305 indeed:
<!-- Generated by javadoc (build 1.5.0_16) on Tue Feb 03 09:28:21 PST 2009 -->
So we have:
javax.annotation.Nonnull
javax.annotation.Nullable // unknown, same as no annotation
javax.annotation.CheckForNull // may be null
We should indeed use these names, but as they are not a standard, I suggest we keep the concept of configurable annotation names and provide the above as defaults, OK?
Additionally there are:
javax.annotation.ParametersAreNullableByDefault
javax.annotation.ParametersAreNonnullByDefault
where I'm surprised to see these limited to parameters (not return values nor fields), and I'd love to hear the reasons for omitting ParametersAreCheckForNullByDefault.
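To make the proposed names concrete, here is a small sketch. The annotation declarations below are local stand-ins mirroring the proposed javax.annotation.* names, since the JSR 305 reference jar is not a platform library:

```java
// Stand-ins mirroring the proposed JSR 305 names and their meanings.
@interface Nonnull {}       // value is never null
@interface Nullable {}      // nullness unknown; same as no annotation
@interface CheckForNull {}  // value may be null; callers must check

class Jsr305Sketch {
    @CheckForNull
    static String find(String key) {
        return "known".equals(key) ? "hit" : null;
    }

    static int use(String key) {
        String result = find(key);
        // @CheckForNull obliges this guard; a checker would flag its absence.
        return result == null ? 0 : result.length();
    }
}
```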
(In reply to comment #40)
> http://jsr-305.googlecode.com/svn/trunk/javadoc/javax/annotation/package-tree.html
This doesn't give any explanation of what these annotations should be used for beyond their names. Only a few actually have any documentation.
> javax.annotation.Nonnull
> javax.annotation.Nullable // unknown, same as no annotation
> javax.annotation.CheckForNull // may be null
> We should indeed use these names, but as they are not a standard,
> I suggest we keep the concept of configurable annotation names
> and provide the above as defaults, OK?
They should be defined as well-known types, and we should see if we have enough bits left to set them. They should be defined in org.eclipse.jdt.internal.compiler.lookup.TagBits.
> javax.annotation.ParametersAreNullableByDefault
> javax.annotation.ParametersAreNonnullByDefault
> where I'm surprised to see these limited to parameters (not return values
> nor fields), and I'd love to hear the reasons for omitting
> ParametersAreCheckForNullByDefault.
Now this project seems to be completely dead. One message about the status of the project was posted in September, with no answer so far. So even if we could improve the compiler analysis using these annotations, I wonder whether this is not a complete waste of time if they never end up inside the standard libraries.
Stephan, you might want to investigate whether we could make the annotation names options to the compiler, so that we could support the annotations from FindBugs and easily adapt the code to support the official ones if they become official in the future, without changing the compiler code.
Created attachment 184335 [details] experiment in configurability
As all seem to agree that JSR 305 offers no perspective at this point, here's a fresh experiment regarding the organizational issues I think we should settle first. I implemented 3 levels of configurability via new keys in CompilerOptions:
(1) Annotation names are configurable via these keys:
OPTION_NullableAnnotationNames: comma-separated list of qualified type names that will be recognized as "nullable" annotations.
OPTION_NonNullAnnotationNames: comma-separated list of qualified type names that will be recognized as "non-null" annotations.
(2) Emulation of the types from (1), even if those types cannot be resolved from the classpath, using key OPTION_EmulateNullAnnotationTypes. The effect is that the LookupEnvironment will create bindings for these types without searching the name environment.
(3) Addition of the types from (1) to the list of default imports for every class, using key OPTION_DefaultImportNullAnnotationTypes. This is done in CompilationUnitScope, where defaultImports are created and stored in the LookupEnvironment to be shared by all types.
When using all three options we have a strong decoupling of source code and configuration: the source code only needs to 'know' the simple names of the annotations. So if you use this configuration:
defaultOptions.put(CompilerOptions.OPTION_NullableAnnotationNames, "org.eclipse.jdt.annotation.Nullable");
defaultOptions.put(CompilerOptions.OPTION_NonNullAnnotationNames, "org.eclipse.jdt.annotation.NonNull");
defaultOptions.put(CompilerOptions.OPTION_EmulateNullAnnotationTypes, CompilerOptions.ENABLED);
defaultOptions.put(CompilerOptions.OPTION_DefaultImportNullAnnotationTypes, CompilerOptions.ENABLED);
the simple names @Nullable and @NonNull can be used without an import and without extra entries on the classpath.
This has the advantage that if a project wants to migrate to null annotations from a different namespace, they need not touch any sources; changing the configuration (per project or even globally) suffices. If a project has actual annotation types in a jar, they will just disable OPTION_EmulateNullAnnotationTypes. If a project needs to be compilable with compilers other than the JDT compiler, they should also disable OPTION_DefaultImportNullAnnotationTypes, because other compilers will require explicit imports.
Is anybody uncomfortable with this level of "magic"?
Next steps will include:
- make the new options available in the batch compiler and the UI;
- decide whether we want to support multiple sets of annotation types. This was suggested in comment 12, but I'd personally feel more comfortable with just one set of names per project;
- work out consistency among the various options, e.g.: what is OPTION_EmulateNullAnnotationTypes without specifying annotation type names? Will defaults (from the eclipse.jdt namespace) be used? Or are the two boolean options sub-options of (1)?
- implement the semantics :)
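With all three options enabled, a compilation unit would only ever mention the simple names. The sketch below fakes that with local stand-in declarations; in the real setup the names would resolve to org.eclipse.jdt.annotation.* through the emulated default imports, with no import statements and nothing extra on the classpath:

```java
// Local stand-ins: with OPTION_EmulateNullAnnotationTypes and
// OPTION_DefaultImportNullAnnotationTypes, these declarations would be
// unnecessary -- the simple names would resolve via the configured defaults.
@interface NonNull {}
@interface Nullable {}

class Decoupled {
    @NonNull
    static String greeting(@Nullable String name) {
        return name == null ? "hello" : "hello " + name;
    }
}
```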
In order to keep this bug focused, I've created three new bugs for advanced issues. This leaves the current bug with:
- annotations only for method parameters and return -> fields are the subject of bug 331649
- only two annotations -> the intermediate state 'unknown' is achieved by no annotation; cancellation of a default from an outer scope will be handled in bug 331647
Additionally, bug 331651 documents the wish to extend the approach to non-annotated libraries. One issue is relevant to the current bug: should we account for building against different libraries using different annotation types? One option would be to leave this to the profile mechanism proposed in bug 331651, which would enable us to use the simpler model (only one set of annotation names) within the compiler.
With this focus, an initial useful implementation can be done within the M5 time frame, assuming we can agree on the strategy.
(In reply to comment #43)
> [..]
> Additionally, bug 331651 documents the wish to extend the approach to
> non-annotated libraries. One issue is relevant to the current bug:
> should we account for building against different libraries using different
> annotation types? One option would be to leave this to the profile
> mechanism proposed in bug 331651, which would enable us to use the simpler
> model (only one set of annotation names) within the compiler.
I think this sounds reasonable. For the scenario of this bug, we should just try to keep things as straightforward as possible, and make sure we don't start giving off false positives with the new annotations in place. I'll take a look at the patch soon.
Created attachment 185507 [details] proposed tests & implementation
Here's a patch that could be considered feature-complete for a minimal core solution. It contains the configurability from the previous patch with one change: I no longer support multiple equivalent annotation types, but only one type for nonnull and one type for nullable (see comment 43). The options can be configured programmatically or from the command line (batch compiler).
On top of those configurable annotations, four flow analyses are implemented:
- parameter null annotations feed their null status into the analysis of the method body;
- method (return) null annotations feed their null status into the analysis of all call sites;
- inside a method annotated as nonnull, all return statements are checked for (potential) contract violations;
- all callers of a method with nonnull-annotated parameters are checked for (potential) contract violations.
In the latter two cases I distinguish three levels / configurable irritants:
- definite contract violation: we know that null is passed to nonnull (default: error);
- potential contract violation where we know that some flows will pass null (default: warning);
- potential contract violation where we have insufficient information: the compiler can neither prove contract violation nor adherence. This will usually mean that, inside a method, values from unannotated locations are used (default: warning).
Obviously, a fully specified program must not have diagnostics against any of these irritants.
Annotations are recognized from source code as well as from .class files. The annotations of an overridden method are inherited, and if a method tries to redefine nullness in an incompatible way, an error is raised. Here, compatibility accepts covariant return (@Nullable -> @NonNull) and contravariant parameters (@NonNull -> @Nullable). These errors are also classified as null contract violations.
Dependencies among options (relevant also for the UI):
- if emulation or default imports of annotation types are used, the configurable names for these annotations must not be null. CompilerOptions.set(Map) currently ensures this by applying built-in defaults if needed;
- emulation and default imports can be configured independently. A conflict between a type name to be emulated and an existing type is reported as a build path problem.
Implementation: I could mostly re-use existing infrastructure. One new field was required in MethodBinding to store which parameters are annotated as nonnull. This array will be null for all methods that have no nullness annotations, to minimize the impact on code not using the new features. I had to move bindArguments() from resolve() to resolveTypesFor() because the information about null annotations, which is established in bindArguments(), is already needed during MethodVerifier.checkAgainstInheritedMethods() to detect incompatible overriding.
All compiler tests pass.
At this point I'd like to encourage anybody to browse the new tests and the javadoc (class JavaCore) and/or try the patch. If no significant problems are discovered, I'd like to release this patch during the first week of January, and proceed to bug 331647 soon after.
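The overriding rule described in comment 45 (covariant return @Nullable -> @NonNull, contravariant parameters @NonNull -> @Nullable) can be sketched with stand-in annotations; the comments mark which redefinitions the verifier would accept or reject:

```java
// Hypothetical stand-ins for the configured null annotations.
@interface NonNull {}
@interface Nullable {}

interface Base {
    @Nullable
    Object find(@NonNull String key);
}

class Compatible implements Base {
    // Legal specialization: the return is tightened (@Nullable -> @NonNull)
    // and the parameter is loosened (@NonNull -> @Nullable).
    @NonNull
    public Object find(@Nullable String key) {
        return key == null ? "default" : key;
    }
}
// Conversely, a subclass tightening a parameter to @NonNull, or loosening
// the return to @Nullable, would be reported as an incompatible redefinition.
```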
(In reply to comment #45)
> Created attachment 185507 [details] [diff]
> proposed tests & implementation
The patch looks good at first glance. I'll do more testing and delve deeper to review it.
(In reply to comment #46)
> The patch looks good at first glance. I'll do more testing and delve deeper
> to review it.
I have a few small comments. The patch overall looks good. To begin with, I have a few questions:
1. I think the intention in CompilationUnitScope.java:313 onwards is to cache only the null annotation imports, but aren't we ending up caching all non-on-demand imports?
2. What are the changes in SourceTypeBinding for, where the method binding is marked as null?
3. Shouldn't there be new options in JavaCore corresponding to the 2 options configuring emulation and default imports?
4. If I don't specify any annotations of my own but use the compiler's default @NonNull and @Nullable, why do I still need to turn on the emulation option?
Besides these, there are a few things that should be done:
1. The new test class NullAnnotationTest will have to be added to the compiler test suite in org.eclipse.jdt.core.tests.compiler.regression.TestAll.
2. In JavaCore:
a. It would be good to combine all contributions under your name rather than repeating them in the header.
b. Full stops have to be put after each list item in the javadoc for the new options.
3. About the warnings:
a. I don't think we really need a separate warning for a potential null contract violation when there is insufficient info to determine it. In a way, both variants of potential null contract violations (one arising from a possible null value in the analysis and the other arising from a lack of null annotations) lead to the same problem. I don't think making the error message wordier to convey why the compiler thinks it's a potential null contract violation is really required here.
b. The new warning for the build path error, when an existing type is requested to be emulated as an annotation, can be made simpler.
It can be changed to something like "Null annotation problem: The type libpack.Lib requested to be emulated exists on the build path". 4. Copyrights in all files have to be changed to 2011. ;)
Markus, can you please take a look at the above patch and check the javadoc and the wording of the new warnings? It'll be good if you can state your opinion on points 3a and 3b I raised in the above comment. Stephan, I also forgot to ask - does this patch allow an already declared annotation (in either a binary or a source file) to be used as a null annotation?
Hi Ayush, thanks for the comments. (What follows is mostly a discussion of implementation details.) (In reply to comment #47) > I have a few small comments. The patch overall looks good. To begin with, I > have a few questions: > > 1. I think the intention in CompilationUnitScope.java:313 onwards is to cache > only the null annotation imports, but aren't we ending up caching all > non-on-demand imports? No, this branch is only reached when this.referenceContext.imports == null. It protects the early exit, which doesn't reach the this.typeOrPackageCache.put(..) statement at the bottom of this method. So in lines 313 ff. we only cache imports that do not exist in the source file. > 2. What are the changes in SourceTypeBinding for, where the method binding is > marked as null? The real change here is that bindArguments() is now executed earlier (see comment 45). In order to call bindArguments() under the same condition as before the patch, I pulled up the nulling of the method binding from the if (foundArgProblem) {...} block close to the bottom. See also the check "if (this.binding == null)" inside bindArguments(). > 3. Shouldn't there be new options in JavaCore corresponding to the 2 options > for configuring emulation and default import? You're right. I'll add these as they will be needed for the UI part. Also the options for configuring the annotation names should be exposed. (I'll file a bug regarding the UI part shortly.) > 4. If I don't specify any annotations of my own but use the compiler's default > @NonNull and @Nullable, why do I still need to turn on the emulation option? I'm not 100% sure I understand which combination you mean, but normally I still want to report unresolved annotation names. I.e., without explicitly enabling the emulation magic the compiler behaves "normally". > Besides these, there are a few things that should be done: > > 1. ... > 2. ... agree > 3. About the warnings: > a. I'll answer in a separate comment. > b. 
The new warning for the buildpath error when an existing type is requested to > be emulated as an annotation can be made simpler. It can be changed to > something like "Null annotation problem: The type libpack.Lib requested to be > emulated exists on the build path". If all agree that this is sufficient as an explanation I won't object. I was just trying to be careful to really explain what's going on. > 4. Copyrights in all files have to be changed to 2011. ;) :) (In reply to comment #48) > Stephan, I also forgot to ask - does this patch allow an already > declared annotation (in either a binary or a source file) to be used as a null > annotation? Yes, this is my intention. That's one of the cases where you want to turn off the emulation support to be really sure that the intended real annotation type will be picked up properly. I forgot to write tests for this, and in fact, while writing a test for this now I just discovered a bug in my initialization logic. The next patch will include tests and a fix.
What follows is the conceptual part of the discussion: (In reply to comment #47) > 3. About the warnings: > a. I think we don't really need a separate warning for a potential null contract > violation when there is insufficient info to determine it. In a way, both > variants of potential null contract violations (one arising out of a possible > null value in the analysis and the other arising out of a lack of null > annotations) lead to the same problem. I don't think making the error message > more wordy to convey why the compiler thinks it's a potential null contract > violation is really required here. My separation into 2 irritants can be seen as a response to Michael's comment in bug 331647 comment 5. If you call a method of an unannotated class, you have no chance of proving null-safety even if you 'know' the given method does not return null. Should we force developers to insert redundant null checks in this situation? To me it seems similar to the issue of unavoidable type safety warnings when mixing 1.4 and 1.5 code: some developers may want to see all issues, and others will want to suppress exactly those warnings they cannot resolve themselves (i.e., not without the cooperation of, e.g., a library provider). So, in the case of potentialNullContractViolation the developer should improve his/her code because the uncertainty is caused within this method (like a missing null check on one particular path within the method body). By contrast, when you see nullContractInsufficientInfo, you may want to add annotations to the method being called, but if you don't own that piece of code you may be stuck and wish to no longer see the warning until the method has been annotated. Admittedly, the implementation also gives nullContractInsufficientInfo in some cases where the issue can indeed be fixed locally. E.g., when a method argument is not annotated, we'll raise insufficientInfo warnings inside this method. 
Technically, insufficientInfo maps to the start state where we know nothing about the nullness of a variable, whereas potentiallyNull maps to the state where we know that on some paths null will occur. Maybe it's even best to raise potentiallyNullContractViolation as an error by default. What do you think?
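The two states described above could be pictured with a small sketch. This is purely illustrative, not the JDT implementation; the annotation types @NonNull and @Nullable are declared locally here just to make the snippet self-contained, and the comments only indicate where the proposed analysis would report which diagnostic:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-ins for the configurable null annotation types; purely illustrative.
@Retention(RetentionPolicy.CLASS) @interface NonNull {}
@Retention(RetentionPolicy.CLASS) @interface Nullable {}

public class Main {
    // potentialNullContractViolation: the analysis knows that on the
    // 'false' path 'result' is definitely null, i.e. null occurs on some
    // flow -- the developer can fix this locally with a null check.
    static @NonNull Object sometimesNull(boolean flag) {
        Object result = flag ? new Object() : null;
        return result; // would be flagged: null on at least one path
    }

    // nullContractInsufficientInfo: 'input' carries no annotation, so the
    // analysis is still in its start state and knows nothing -- fixable
    // only by annotating the parameter (or the code providing the value).
    static @NonNull Object unknownNullness(Object input) {
        return input; // would be flagged: nullness of 'input' is unknown
    }

    public static void main(String[] args) {
        System.out.println(sometimesNull(true) != null); // prints true
        System.out.println(unknownNullness("x"));        // prints x
    }
}
```

Both methods run fine at runtime with non-null inputs; the difference lies only in what the static analysis can or cannot prove.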
(In reply to comment #49) > Hi Ayush, thanks for the comments. I agree with your explanations. > > 4. If I don't specify any annotations of my own but use the compiler's default > > @NonNull and @Nullable, why do I still need to turn on the emulation option? > > I'm not 100% sure I understand which combination you mean, but normally > I still want to report unresolved annotation names. I.e., without explicitly > enabling the emulation magic the compiler behaves "normally". About this, I was referring to the case where I don't explicitly assign any values to the NonNull and Nullable annotations (i.e. leave these blank), expecting to use the default null annotations. I still have to turn on emulation for it to work. But I guess the user shouldn't be bothered about emulation when he's not specifying his own annotations. > > b. The new warning for the buildpath error when an existing type is requested to > > be emulated as an annotation can be made simpler. It can be changed to > > something like "Null annotation problem: The type libpack.Lib requested to be > > emulated exists on the build path". > > If all agree that this is sufficient as an explanation I won't object. > I was just trying to be careful to really explain what's going on. Let's see what Markus has to say about this. > The next patch will include tests and a fix. That'll be great! (In reply to comment #50) > Technically, insufficientInfo maps to the start state where we know nothing > about the nullness of a variable, whereas potentiallyNull maps to the state > where we know that on some paths null will occur. Point taken. > Maybe it's even best to raise potentiallyNullContractViolation as an error > by default. What do you think? I think it's OK to have it as a warning only and let the user decide what to do next. But I'm not strongly inclined either way.
(In reply to comment #51) > About this, I was referring to the case where I don't explicitly assign any > values to the NonNull and Nullable annotations (i.e. leave these blank), expecting > to use the default null annotations. I still have to turn on emulation for it to > work. But I guess the user shouldn't be bothered about emulation when he's not > specifying his own annotations. Assuming we need some kind of master switch to turn all this on/off, I see two alternatives: a) If any of the diagnostics regarding null contract violations is enabled (warning or error) and no custom annotation names are specified, then automatically enable annotation emulation, because we need annotations to analyse. b) Use annotation names / emulation as the master switch: if none of them is specified, no analysis will be performed. The current patch implements b), but if there's a crowd voting for a) I will happily change the implementation ;) > > Maybe it's even best to raise potentiallyNullContractViolation as an error > > by default. What do you think? > > I think it's OK to have it as a warning only and let the user decide what to do > next. But I'm not strongly inclined either way. Here is the test case that makes me lean towards making potentiallyNullContractViolation stronger than it is currently: public class X { @NonNull Object getObject(@Nullable Object o) { return o; } } 1. WARNING in X.java (at line 3) return o; ^^^^^^^^^ Potential null contract violation: return value can be null but method is declared as @NonNull. I would like to remove the "Potential" from the message, and then it feels more like an error than a warning. Anybody else want to comment on these two questions?
Let me add my 2 cents to this discussion. When no annotations are specified as null annotations, it should work as it does without the patch. The warning/error messages should be the same, and no more warnings/errors should be created than in the current case. Adding such annotations should not have an impact on performance. I'd like to get numbers for this if possible: The numbers should include a significant set of files to be compiled in the following modes: - current code (reference time) - with the patch and no null annotations defined - with the patch and null annotations defined but not used The case where the patch is released with null annotations defined and used seems to be trickier to check, as the source would have changed. So those numbers would not be that meaningful.
(In reply to comment #53) > Let me add my 2 cents to this discussion. > When no annotations are specified as null annotations, it should work as it > does without the patch. The warning/error messages should be the same, and no > more warnings/errors should be created than in the current case. OK > Adding such annotations should not have an impact on performance. I'd like to > get numbers for this if possible: ... The implementation introduces no expensive new analysis, but I'll be happy to show this with numbers rather than by hand-waving :) Would you want automated performance tests (which I would need some help with), or just results from a one-time manual experiment?
I think existing performance tests should show potential regressions for the case "with the patch and no null annotations defined". So no need to add more. But I would like to see an extensive code coverage of the new code inside regression tests :-). This being said, I haven't checked the current patch yet.
Created attachment 186796 [details] tests & implementation v3 Here are some improvements and cleanups based on previous comments. * Updated copyright. * Expose more option ids in JavaCore and documented those. * Changed potentialNullContractViolation from warning to error and slightly rephrased (see comment 52). (In reply to comment #55) > But I would like to see an extensive code coverage of the new code inside > regression tests :-). I actually played a bit with JaCoCo and used its reports to achieve a near-100% branch coverage of all my code changes by adding some more tests: - more combinations of (un)annotated parameters - more tests for inheritance and redefinition - more combinations of configuration options (incl. illegal combinations and attempts to use non-existent types without emulation) Those tests actually detected a few bugs which are already fixed in the patch.
For easier reference here are the relevant textual parts: API (JavaCore) and problem messages. Please let me know if anything is unclear or if you have more concise suggestions. /** * Compiler option ID: Name of Annotation Type for Nullable Types. * <p>This option defines a fully qualified Java type name that the compiler may use * to perform special null analysis.</p> * <p>If the annotation specified by this option is applied to a type in a method * signature or variable declaration this will be interpreted as a contract that * <code>null</code> is a legal value in that position. Currently supported * positions are: method parameters and method return type.</p> * <p>If a value whose type * is annotated with this annotation is dereferenced without checking for null * the compiler will trigger a diagnostic as further controlled by * {@link #COMPILER_PB_POTENTIAL_NULL_REFERENCE}.</p> * <p>The compiler may furthermore check adherence to the null contract as further * controlled by {@link #COMPILER_PB_NULL_CONTRACT_VIOLATION}, * {@link #COMPILER_PB_POTENTIAL_NULL_CONTRACT_VIOLATION} and * {@link #COMPILER_PB_NULL_CONTRACT_INSUFFICIENT_INFO}. * </p> * <dt>Option id:</dt><dd><code>"org.eclipse.jdt.core.compiler.annotation.nullable"</code></dd> * <dt>Possible values:</dt><dd>any legal Java type name</dd> * <dt>Default:</dt><dd><code>"org.eclipse.jdt.annotation.Nullable"</code></dd> * @since 3.7 * @category CompilerOptionID */ public static final String COMPILER_NULLABLE_ANNOTATION_NAME = PLUGIN_ID + ".compiler.annotation.nullable"; //$NON-NLS-1$ /** * Compiler option ID: Name of Annotation Type for Non-Null Types. * <p>This option defines a fully qualified Java type name that the compiler may use * to perform special null analysis.</p> * <p>If the annotation specified by this option is applied to a type in a method * signature or variable declaration this will be interpreted as a contract that * <code>null</code> is <b>not</b> a legal value in that position. 
Currently * supported positions are: method parameters and method return type.</p> * <p>For values declared with this annotation the compiler will never trigger a null * reference diagnostic (as controlled by {@link #COMPILER_PB_POTENTIAL_NULL_REFERENCE} * and {@link #COMPILER_PB_NULL_REFERENCE}), because the assumption is made that null * will never occur at runtime in these positions.</p> * <p>The compiler may furthermore check adherence to the null contract as further * controlled by {@link #COMPILER_PB_NULL_CONTRACT_VIOLATION}, * {@link #COMPILER_PB_POTENTIAL_NULL_CONTRACT_VIOLATION} and * {@link #COMPILER_PB_NULL_CONTRACT_INSUFFICIENT_INFO}. * </p> * <dt>Option id:</dt><dd><code>"org.eclipse.jdt.core.compiler.annotation.nonnull"</code></dd> * <dt>Possible values:</dt><dd>any legal Java type name</dd> * <dt>Default:</dt><dd><code>"org.eclipse.jdt.annotation.NonNull"</code></dd> * @since 3.7 * @category CompilerOptionID */ public static final String COMPILER_NONNULL_ANNOTATION_NAME = PLUGIN_ID + ".compiler.annotation.nonnull"; //$NON-NLS-1$ /** * Compiler option ID: Emulate Null Annotation Types. 
* <p>When enabled, the compiler will use the annotation types specified in * {@link #COMPILER_NONNULL_ANNOTATION_NAME} and {@link #COMPILER_NULLABLE_ANNOTATION_NAME} * without searching for a corresponding type definition, i.e., these annotation * types don't have to actually exist.</p> * <p>This option is used to make null contract analysis independent of any additional * classes that would otherwise need to be supplied at compile time.</p> * <dt>Option id:</dt><dd><code>"org.eclipse.jdt.core.compiler.annotation.emulate"</code></dd> * <dt>Possible values:</dt><dd>{ "disabled", "enabled" }</dd> * <dt>Default:</dt><dd><code>"disabled"</code></dd> * @since 3.7 * @category CompilerOptionID */ public static final String COMPILER_EMULATE_NULL_ANNOTATION_TYPES = PLUGIN_ID + ".compiler.annotation.emulate"; //$NON-NLS-1$ /** * Compiler option ID: Default Import of Null Annotation Types. * <p>When enabled, the compiler will be able to resolve the annotation types specified in * {@link #COMPILER_NONNULL_ANNOTATION_NAME} and {@link #COMPILER_NULLABLE_ANNOTATION_NAME} * by their simple names without an explicit import statement.</p> * <p>This option is used to avoid mentioning the fully qualified annotation names * in any source files, so as to facilitate the migration from one set of annotations * to another, e.g., when standard annotations for this purpose are defined * in the future. * </p> * <dt>Option id:</dt><dd><code>"org.eclipse.jdt.core.compiler.annotation.defaultImport"</code></dd> * <dt>Possible values:</dt><dd>{ "disabled", "enabled" }</dd> * <dt>Default:</dt><dd><code>"disabled"</code></dd> * @since 3.7 * @category CompilerOptionID */ public static final String COMPILER_DEFAULT_IMPORT_NULL_ANNOTATION_TYPES = PLUGIN_ID + ".compiler.annotation.defaultImport"; //$NON-NLS-1$ /** * Compiler option ID: Reporting Violations of Null Contracts. 
* <p>When enabled, the compiler will issue an error or a warning whenever one of the * following situations is detected: * <ol> * <li>A method declared with a nonnull annotation returns an expression that is * statically known to evaluate to a null value.</li> * <li>An expression that is statically known to evaluate to a null value is passed * as an argument in a method call where the corresponding parameter of the called * method is declared with a nonnull annotation.</li> * <li>A method that overrides an inherited method declared with a nonnull annotation * tries to relax that contract by specifying a nullable annotation * (prohibition of contravariant return).</li> * <li>A method that overrides an inherited method which has a nullable declaration * for at least one of its parameters, tries to tighten that null contract by * specifying a nonnull annotation for its corresponding parameter * (prohibition of covariant parameters).</li> * </ol> * </p> * <p>The compiler options {@link #COMPILER_NONNULL_ANNOTATION_NAME} and * {@link #COMPILER_NULLABLE_ANNOTATION_NAME} control which annotations the compiler * shall interpret as nonnull or nullable annotations, respectively. * </p> * <dl> * <dt>Option id:</dt><dd><code>"org.eclipse.jdt.core.compiler.problem.nullContractViolation"</code></dd> * <dt>Possible values:</dt><dd><code>{ "error", "warning", "ignore" }</code></dd> * <dt>Default:</dt><dd><code>"error"</code></dd> * </dl> * @since 3.7 * @category CompilerOptionID */ public static final String COMPILER_PB_NULL_CONTRACT_VIOLATION = PLUGIN_ID + ".compiler.problem.nullContractViolation"; //$NON-NLS-1$ /** * Compiler option ID: Reporting Violations of Null Contracts with Potential Null Value. 
* <p>When enabled, the compiler will issue an error or a warning whenever one of the * following situations is detected: * <ol> * <li>A method declared with a nonnull annotation returns an expression that is * statically known to evaluate to a null value on some flow.</li> * <li>An expression that is statically known to evaluate to a null value on some flow * is passed as an argument in a method call where the corresponding parameter of * the called method is declared with a nonnull annotation.</li> * </ol> * </p> * <p>The compiler options {@link #COMPILER_NONNULL_ANNOTATION_NAME} and * {@link #COMPILER_NULLABLE_ANNOTATION_NAME} control which annotations the compiler * shall interpret as nonnull or nullable annotations, respectively. * </p> * <dl> * <dt>Option id:</dt><dd><code>"org.eclipse.jdt.core.compiler.problem.potentialNullContractViolation"</code></dd> * <dt>Possible values:</dt><dd><code>{ "error", "warning", "ignore" }</code></dd> * <dt>Default:</dt><dd><code>"error"</code></dd> * </dl> * @since 3.7 * @category CompilerOptionID */ public static final String COMPILER_PB_POTENTIAL_NULL_CONTRACT_VIOLATION = PLUGIN_ID + ".compiler.problem.potentialNullContractViolation"; //$NON-NLS-1$ /** * Compiler option ID: Reporting Insufficient Information for Analysing Adherence to Null Contracts. 
* <p>When enabled, the compiler will issue an error or a warning whenever one of the * following situations is detected: * <ol> * <li>A method declared with a nonnull annotation returns an expression for which * insufficient nullness information is available for statically proving that no * flow will pass a null value at runtime.</li> * <li>An expression for which insufficient nullness information is available for * statically proving that it will never evaluate to a null value at runtime * is passed as an argument in a method call where the corresponding parameter of * the called method is declared with a nonnull annotation.</li> * </ol> * Insufficient nullness information is usually a consequence of using other unannotated * variables or methods. * </p> * <p>The compiler options {@link #COMPILER_NONNULL_ANNOTATION_NAME} and * {@link #COMPILER_NULLABLE_ANNOTATION_NAME} control which annotations the compiler * shall interpret as nonnull or nullable annotations, respectively. * </p> * <dl> * <dt>Option id:</dt><dd><code>"org.eclipse.jdt.core.compiler.problem.nullContractInsufficientInfo"</code></dd> * <dt>Possible values:</dt><dd><code>{ "error", "warning", "ignore" }</code></dd> * <dt>Default:</dt><dd><code>"warning"</code></dd> * </dl> * @since 3.7 * @category CompilerOptionID */ public static final String COMPILER_PB_NULL_CONTRACT_INSUFFICIENT_INFO = PLUGIN_ID + ".compiler.problem.nullContractInsufficientInfo"; //$NON-NLS-1$ 880 = Null contract violation: returning null from a method declared as @{0}. 881 = Null contract violation: return value can be null but method is declared as @{0}. 882 = Potential null contract violation: insufficient nullness information regarding return value while the method is declared as @{0}. 883 = Null contract violation: passing null to a parameter declared as @{0}. 884 = Null contract violation: potentially passing null to a parameter declared as @{0}. 
885 = Potential null contract violation: insufficient nullness information regarding a value that is passed to a parameter declared as @{0}. 886 = Buildpath problem: emulation of type {0} is requested (for null annotations) but a type of this name exists on the build path. 887 = Buildpath problem: the type {0} which is configured as a null annotation type cannot be resolved. 888 = Cannot relax null contract for method return, inherited method from {0} is declared as @{1}. 889 = Cannot tighten null contract for parameter {0}, inherited method from {1} declares this parameter as @{2}. 890 = Cannot tighten null contract for parameter {0}, inherited method from {1} does not constrain this parameter.
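To connect a few of these message texts with concrete code, here is an illustrative sketch. The annotation types are declared locally as stand-ins (the real configurable types are whatever the options above name); the commented-out lines mark roughly where the respective diagnostics would apply, and are not output of the actual compiler:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-ins for the configurable annotation types; purely illustrative.
@Retention(RetentionPolicy.CLASS) @interface NonNull {}
@Retention(RetentionPolicy.CLASS) @interface Nullable {}

class Lib {
    @NonNull Object provide() { return new Object(); }
    void consume(@NonNull Object o) { o.hashCode(); }
    void maybe(@Nullable Object o) { if (o != null) o.hashCode(); }
}

class Sub extends Lib {
    // Message 888 (relaxing the inherited return contract) would apply to:
    //   @Nullable Object provide() { return null; }
    // Message 889 (tightening an inherited parameter contract) would apply to:
    //   void maybe(@NonNull Object o) { o.hashCode(); }
}

public class Main {
    public static void main(String[] args) {
        Lib lib = new Sub();
        // Message 883 territory: passing a definite null where @NonNull
        // is declared, e.g. lib.consume(null);
        lib.consume(lib.provide()); // fine: @NonNull flows into @NonNull
        lib.maybe(null);            // fine: parameter is declared @Nullable
        System.out.println("ok");   // prints ok
    }
}
```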
Created attachment 186798 [details] API doc SORRY, pasting the API doc into bugzilla was not a good idea; here's an extract from the HTML Javadoc instead.
(In reply to comment #53) > Adding such annotations should not have an impact on performance. I'd like to > get numbers for this if possible: > The numbers should include a significant set of files to be compiled in the > following modes: > - current code (reference time) > - with the patch and no null annotations defined > - with the patch and null annotations defined but not used I ran an experiment in a runtime workbench, running full builds of jdt.core, but I'm not sure about the validity of these measurements. I compared the same code with the new analysis disabled and enabled. Not counting the first few runs for warm-up, I got from 10 runs each: Disabled: average: 6946 ms std dev: 543 Enabled: average: 6510 ms std dev: 309 I have no idea what would make the disabled case (!) more expensive, so I tend to interpret the difference as noise in the experiment, given the pretty high deviation (environment: dual core box with linux and kde, sun jvm 1.6.22). Any hints on how to reduce the deviation, or is this about as good as it gets in manual experiments on a JVM with jit and gc? BTW: the integration in TestAll was wrong (adding to standardTests where I should have added to since_1.5) and the patch also misses one constant in CompilerInvocationTests. Both issues are trivially fixed.
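For reference, the averaging behind those numbers amounts to a plain mean and a sample standard deviation over the measured wall-clock build times (warm-up runs discarded beforehand). A minimal sketch, with made-up sample values, not the actual measurement harness:

```java
public class Main {
    // Mean of a series of build times in milliseconds.
    static double mean(long[] times) {
        double sum = 0;
        for (long t : times) sum += t;
        return sum / times.length;
    }

    // Sample standard deviation (n - 1 in the denominator).
    static double stdDev(long[] times) {
        double m = mean(times);
        double sq = 0;
        for (long t : times) sq += (t - m) * (t - m);
        return Math.sqrt(sq / (times.length - 1));
    }

    public static void main(String[] args) {
        long[] runs = {6946, 6510, 6800, 6600, 7100}; // made-up sample values
        System.out.printf("average: %.0f ms, std dev: %.0f%n",
                mean(runs), stdDev(runs));
    }
}
```

With a standard deviation of several hundred milliseconds against a difference of ~400 ms, the two averages indeed overlap within one sigma, which supports reading the difference as noise.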
(In reply to comment #49) > You're right. I'll add these as they will be needed for the UI part. > Also the options for configuring the annotation names should be exposed. > (I'll file a bug regarding the UI part shortly). That's bug 334455 (I like that number :) )
Created attachment 186872 [details] tests & implementation v4 Patch updated to head and with the minor changes mentioned in comment 59. As I haven't seen any real controversy regarding the latest patches, I plan to release for tomorrow's nightly, so we get initial, real performance data and also give JDT/UI a chance to do bug 334455. (BTW: tests.compiler have been run successfully.)
Hi Stephan, I'm really looking forward to this feature. However, I have a question regarding the propagation of nullability information from super classes to subclasses. Does it work for interfaces, too? E.g. if I define a contract for an interface, will all implementors be checked against this contract and all clients validated accordingly? This particular question came up because I could not find any test case for this scenario in the attached patch, and I'm not familiar enough with the AST / Binding APIs to derive that information from looking at the code.
(In reply to comment #62) > Hi Stephan, > > I'm really looking forward to this feature. However, I have a question regarding > the propagation of nullability information from super classes to subclasses. > Does it work for interfaces, too? E.g. if I define a contract for an interface, > will all implementors be checked against this contract and all clients > validated accordingly? Hi Sebastian, thanks for asking. It indeed works for interfaces, too. My bad for not including tests for this. I'll upload a new patch soon. This will also be reflected in the documentation (replace "overrides" with "overrides or implements"). I have, however, filed a follow-up as bug 334457, which will not be covered by this bug.
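A sketch of the interface case, just to make the intended behavior concrete. As before, the annotation types and all class names here are illustrative stand-ins, not part of the patch; the comments describe what the proposed analysis is meant to report:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-ins for the configurable annotation types; purely illustrative.
@Retention(RetentionPolicy.CLASS) @interface NonNull {}
@Retention(RetentionPolicy.CLASS) @interface Nullable {}

interface Provider {
    @NonNull Object get(); // contract: implementations must never return null
}

class GoodProvider implements Provider {
    public @NonNull Object get() { return "value"; } // honors the contract
}

class OtherProvider implements Provider {
    // Relaxing the inherited contract is what the analysis should flag:
    //   public @Nullable Object get() { return null; }
    public Object get() { return "still an object"; }
}

public class Main {
    public static void main(String[] args) {
        Provider p = new GoodProvider();
        // Clients may rely on the interface contract without an extra null check:
        System.out.println(p.get()); // prints value
    }
}
```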
(In reply to comment #63) > It indeed works for interfaces, too. Awesome!
Created attachment 186876 [details] tests & implementation v5 Some small updates: Null contracts in interfaces: - add/modify tests (test_parameter_contract_inheritance_00*()) - slightly update API doc - no further implementation changes needed Local variables: - Trivial changes to support null annotations for local variables, too. New checks cover local variable initialization and assignment. - new test_nonnull_local_001() - slightly updated API doc regarding the applicability of null annotations New test for illegal application of a null annotation to a class (during development this had raised an NPE).
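What the new local variable checks are about can be sketched as follows (a minimal illustration with a locally declared stand-in annotation; the commented-out line marks where the analysis would report a violation):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for the configurable @NonNull type; purely illustrative.
@Retention(RetentionPolicy.CLASS) @interface NonNull {}

public class Main {
    static Object demo() {
        @NonNull Object o = new Object(); // the initialization is checked
        // o = null;                      // an assignment like this would be flagged
        o = "replacement";                // non-null assignments remain legal
        return o;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints replacement
    }
}
```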
Created attachment 186889 [details] tests & implementation v6 Last (trivial) fix before release: Avoid analyzing static methods regarding null contract overriding. (Incl. new test). I also wrote some user level documentation of the current implementation in http://wiki.eclipse.org/JDT_Core/Null_Analysis#Actual_Stragegy_in_the_JDT
Released for 3.7M5.
I agree that language support for null analysis is missing in Java and would be worthwhile to add. But I don't think the JDT compiler is the right place to specify these annotations. Eclipse.org is not a standards organization. The goal of the JDT project is to provide a compiler for the existing Java language. The official JDT release is not an experimentation field for new language features. Such experiments should be performed externally (e.g. in a branch of the JDT project or in separate plug-ins) and should only be integrated into JDT if there's confidence that they have a complete specification and that they can be considered a standard. Even if you add configuration options in JavaCore to support annotations with arbitrary names, the compiler still implements a certain behavior that must be fully specified somewhere. So, the only flexibility that these options provide is support for multiple versions of exactly the same specification but with differing type names. That's not a realistic scenario, so we could just as well choose one set of annotation names and implement a compiler for those. JavaCore.COMPILER_DEFAULT_IMPORT_NULL_ANNOTATION_TYPES is a no-go, since that would lead to code bases that are not compilable any more by a standards-adhering Java compiler. Please revert the patch until it's clear how we will proceed.
(In reply to comment #67) > Released for 3.7M5. Hey hey! There's no rush, Stephan! Sorry I couldn't look at your latest patch since we have a long weekend here. Markus' +1 was in any case very much required before releasing. Markus, I do agree that JDT's primary goal is to be a standard compiler, but I think we have always been game for features that could make the developer's task easier. Static analysis currently has a lot of limitations in what it can do, and there are only limited (potential) problems that we detect. In this respect, the compilers of other IDEs sometimes score higher than JDT. Null annotations are one such feature that, though not standard, is supported by a few compilers out there. We did spend some time trying to figure out what it would cost to introduce this annotation feature, and only when we could ascertain that it won't take too much time from our side, since we could use existing infrastructure, and also that it won't take too much of the compiler's time to deal with these annotations, did we decide to go ahead and work on them. However, I'm partly in agreement with you, especially on the point that code annotated with null annotations will no longer be compilable even with javac, and I did raise my concern about this on the calls. The demand for this feature, though, is pretty high, and that made me mellow a bit. That said, I would still vote in favour of this option, since it's completely configurable, and if anyone wants to keep his codebase standard enough to be compilable across compilers, he can choose not to use this feature.
Sorry if releasing this patch seems rushed to anybody. I just didn't see any concerns expressed which I hadn't addressed. On the other hand, I didn't want to get into a rush towards the end of next week when we're closing down on M5. Olivier asked about performance data, and the only way for me to obtain such data is by pushing the patch into an official build. May I wait until I get that performance data and revert only after that, or is an immediate revert required? So much for the timing - strategic arguments follow in the next comment.
(In reply to comment #70) > Sorry if releasing this patch seems rushed to anybody. > I just didn't see any concerns expressed which I hadn't addressed. To be more explicit: "concerns regarding the implementation". I'm aware that the wording (API doc and error messages) could use another review.
(In reply to comment #68) > Even if you add configuration options in JavaCore to support annotations with > arbitrary names, the compiler still implements a certain behavior that must be > fully specified somewhere. So, the only flexibility that these options provide > is support for multiple versions of exactly the same specification but with > differing type names. That's not a realistic scenario, so we could just as well > choose one set of annotation names and implement a compiler for those. This is one of the reasons why I split the feature into several bugs. In this bug I only implemented a core behavior that follows two rules: - if an entity is declared as nonnull it is illegal to assign a null value - if a method with a null contract is inherited, the sub-class's version must be compatible with the inherited contract. I don't see how any reasonable tool would implement different behavior in these respects. Those are not standards defined by a standards body but standards defined by the mathematical foundations of object-oriented programming. So my point is actually falsifiable: show me a different behavior that still conforms to standard type theory, such as Liskov's substitution principle etc. If I see such an example I'll stand corrected.
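The substitution argument can be made concrete: if a subclass could tighten an inherited @Nullable parameter to @NonNull, a client working against the supertype's contract would break. A sketch under the same assumptions as before (locally declared stand-in annotations, illustrative class names):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-ins for the configurable annotation types; purely illustrative.
@Retention(RetentionPolicy.CLASS) @interface NonNull {}
@Retention(RetentionPolicy.CLASS) @interface Nullable {}

class Base {
    // The contract explicitly allows null here; clients may rely on that.
    void process(@Nullable Object o) {
        if (o != null) o.hashCode();
    }
}

class Tightened extends Base {
    // If this override were allowed to declare 'o' as @NonNull and drop
    // the null check, the call in main() -- perfectly legal against
    // Base's contract -- would fail at runtime. Hence the prohibition
    // of covariant parameter contracts.
    @Override void process(Object o) {
        if (o != null) o.hashCode();
    }
}

public class Main {
    public static void main(String[] args) {
        Base b = new Tightened();
        b.process(null); // legal per Base's contract; must not break
        System.out.println("substitution holds"); // prints substitution holds
    }
}
```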
The JDT compiler already does lots of things not specified in the JLS. Most of its configurable warnings are a non-standard service to the users. Some of these warnings additionally support specification inside the code to control the warnings even without @SuppressWarnings: think of //$NON-NLS-1$ or //$FALL-THROUGH$ Given that these are acceptable extensions, there's of course a very simple trick to move null contracts to the same space: replace null annotations with proprietary comments, possibly javadoc tags. I believe this is an unbreakable fallback, but for several reasons the majority of people working on this topic seem to strongly prefer annotations, mostly because: - using proprietary code comments makes migration to other tools or to a possible future very cumbersome. - tags in comments cannot be seen when compiling against .class files (unless secretly mapping comments to some bytecode attributes) So while this is a safe fallback, it's a very ugly one, I think.
I have reverted the patch as requested. Can someone inside IBM please run the performance tests against the patch?
Created attachment 186890 [details] tests & implementation v7 I found a better place to call bindArguments() from. The previous approach had a problem in the pathological case of a type importing its own methods.
(In reply to comment #74)
> I have reverted the patch as requested.
>
> Can someone inside IBM please run the performance tests against the patch?

Sure. We'll take care of this today. It takes a day to get the results. Thanks Stephan!

Also, personally I'm more in favour of annotations than javadoc comments, for the same reason that annotations were made part of the Java language in the first place! :)
(In reply to comment #70) > Sorry if releasing this patch seems rushed to anybody. > I just didn't see any concerns expressed which I hadn't addressed. > > On the other hand I didn't want to get into a rush towards the end > of next week when we're closing down on M5. Olivier asked about > performance data and the only way for me to obtain such data is > by pushing the patch into an official build. > > May I wait to get those performance data and revert only after > that or is an immediate revert required? Please revert for now. This is quite some addition/change and it never appeared on the 3.7 plan or was discussed in a JDT call. Please hold off until all involved JDT parties have agreed on this. Thanks.
(In reply to comment #77) > Please revert for now. Done, see comment 74. > This is quite some addition/change and it never appeared > on the 3.7 plan or was discussed in a JDT call. Sorry, I didn't know that there is another regular call besides the one of the JDT/Core team.
(In reply to comment #74)
> I have reverted the patch as requested.
>
> Can someone inside IBM please run the performance tests against the patch?

I am running the performance tests. It will take a while before I can get you the numbers.
(In reply to comment #73) > The JDT compiler already does lots of things not specified in the JLS. > Most of its configurable warnings are a non-standard service to the users. > [..] The fundamental difference is that all the existing additions are fully backwards-compatible, but COMPILER_DEFAULT_IMPORT_NULL_ANNOTATION_TYPES is not. OTOH, I fully agree that proprietary comments are not the way to go today. (In reply to comment #72) > In this bug I only implemented a core behavior that follows two rules: > - if an entity is declared as nonnull it is illegal to assign a null value > - if a method with a null contract is inherited the sub-class's version > must be compatible with the inherited contract. Thanks for the explanation, I wasn't aware of this. Just to spell this out explicitly: I'm not in principle against this feature, and I appreciate your work in this area. But we have to be careful that we don't add features that lock us in (since our policy is that we don't remove features after they have been shipped in a release). I'm not sure if this core behavior is sufficient for larger projects. I guess it would be better to implement this as an external annotation processor and only put it into the JDT compiler once it reaches some adoption and we want to put it into the compiler for performance reasons. I tried the support for emulated annotation types, and that's also problematic, since it breaks a few existing APIs, e.g. 
IPackageBinding#getJavaElement() returns null for the non-existent package, or the ASTView throws NPEs like this:

!ENTRY org.eclipse.jdt.astview 4 4 2011-01-17 12:35:41.302
!MESSAGE Exception thrown in IBinding#getAnnotations() for "Lorg/eclipse/jdt/annotation/NonNull;"
!STACK 0
java.lang.NullPointerException
	at org.eclipse.jdt.internal.compiler.lookup.AnnotationBinding.buildTargetAnnotation(AnnotationBinding.java:98)
	at org.eclipse.jdt.internal.compiler.lookup.AnnotationBinding.addStandardAnnotations(AnnotationBinding.java:57)
	at org.eclipse.jdt.internal.compiler.lookup.BinaryTypeBinding.retrieveAnnotations(BinaryTypeBinding.java:1096)
	at org.eclipse.jdt.internal.compiler.lookup.ReferenceBinding.getAnnotations(ReferenceBinding.java:728)
	at org.eclipse.jdt.core.dom.TypeBinding.getAnnotations(TypeBinding.java:93)
	at org.eclipse.jdt.astview.views.Binding.getChildren(Binding.java:243)
	at org.eclipse.jdt.astview.views.ASTViewContentProvider.getChildren(ASTViewContentProvider.java:95)

A better solution would be to put a small JAR with the necessary annotations on the classpath. That avoids all the emulation problems, allows adding Javadoc for the annotations, and also allows other compilers to build a project that references these annotations.
(In reply to comment #80)
> Just to spell this out explicitly: I'm not in principle against this feature,
> and I appreciate your work in this area.

thanks

> I'm not sure if this core
> behavior is sufficient for larger projects.

Well, actually I'm eager to proceed to bug 331647 and bug 331651 as those are key for application in larger projects. I just wanted to go one step at a time.

> I guess it would be better to
> implement this as an external annotation processor

Wouldn't that imply that we duplicate all the existing static analysis? The big synergy is that with the input from null annotations the existing null analysis can give *much* more valuable diagnostics. If it _must_ be kept separate I would rather ship it as an Object Teams plugin that hooks directly into JDT/Core, meaning I could use the exact same strategy while keeping the JDT/Core clean of non-standard stuff.

> and only put it into the JDT
> compiler once it reaches some adoption and we want to put it into the compiler
> for performance reasons.

And then, if it remains a separate tool there will be little gain over telling people to continue using 3rd-party tools like FindBugs or the Checker Framework. I'd really love to ship this to the developers *without* the need to install additional software, perfectly integrated with the JDT experience.

> I tried the support for emulated annotation types, and that's also problematic,
> since it breaks a few existing APIs,

OK, that wasn't covered by the tests I ran, but I'm sure it can be fixed. And that's the kind of feedback I was seeking before M5 is declared :)

> A better solution would be to put a small JAR with the necessary annotations on
> the classpath.

We discussed that (and it is certainly possible with the current patch), but we concluded that it is problematic to ship a compiler feature that only works with additional libs added to the classpaths of each individual project.
That avoids all the emulation problems, allows to add Javadoc > for the annotations, and it also allows other compilers to build a project that > references these annotations. Notably for the last reason we certainly need to support that scenario, too. But in my view its only one of several equally desirable options.
(In reply to comment #77)
> Please revert for now. This is quite some addition/change and it never appeared
> on the 3.7 plan or was discussed in a JDT call. Please hold off until all
> involved JDT parties have agreed on this. Thanks.

I didn't want to talk about this at the JDT call as long as I wasn't sure it could be done in a flexible way. I think this is a nice new feature in the compiler. It is limited to the compiler, which is not API, so we do have some freedom to make changes to this implementation in the future. We could also mention it as experimental support in the N&N for M5 and collect feedback from users. At least this closes the gap we have right now with the IDEA IDE, which has this kind of support.

We will discuss this tomorrow. For me, as long as we don't break existing APIs (this is of course a prerequisite) and the performance impact is not significant (see comment 59), we should do it.
As an input for the JDT call I'd like to summarize the risks and benefits of the three levels of configurability provided by the current patch (from my personal POV of course :) ) For an explanation of these options please see either the API doc (attachment 186798 [details]) or the wiki (http://wiki.eclipse.org/JDT_Core/Null_Analysis#Configuring_Null_Annotations_for_the_JDT)

Level 1: configure annotation names (options COMPILER_NULLABLE_ANNOTATION_NAME and COMPILER_NONNULL_ANNOTATION_NAME):
- benefit: the feature works with annotations from different tools like FindBugs or IntelliJ etc.
- risk: I'm not aware of any risks.

Level 2: emulate annotation types (COMPILER_EMULATE_NULL_ANNOTATION_TYPES)
- benefit: users who do not already have such annotations, e.g., from FindBugs, can immediately start using the new feature without needing to add anything to their project's build path. Also we don't have to establish a new channel for distribution, as would be required if we provide such a library.
- risk: if developers use this option and later decide to use other compilers they will need to supply actual annotation types (e.g., in a library). I don't, however, see any hard breakage or lock-in etc. I'm aware that currently this feature causes breakage re the non-existing package (comment 80) but I'm sure that can be fixed quickly.

Level 3: implicit import of annotation types (COMPILER_DEFAULT_IMPORT_NULL_ANNOTATION_TYPES)
- benefit: this allows using the feature without mentioning the qualified names of annotation types in the source code. It would thus be possible to migrate the same source code from one set of annotations to another without changing any source code. Actually this option was added specifically to facilitate migration to a future standard (should one arrive).
- risk: code that uses this option will not be compilable with any other Java compiler. For that, explicit imports will have to be inserted.
Given that the JDT knows the annotation names (from Level 1), it should however be feasible (and cool!) to implement a refactoring that automatically inserts these imports when a developer chooses to establish compatibility with other compilers (a variant of organize imports).

Obviously, the risks increase from 1 to 3. I personally feel that in all three cases the benefits outweigh the risks (given that there's always a way out). I could, however, understand concerns regarding level 3 and would be willing to remove this option to pave the road to an agreement. Level 1 I consider a necessary option given that no standard exists. Level 2 ranges somewhere in the middle: it is unusual behavior, but I cannot see how this option would actually incur any harm to any user.

Given that we are still within M5 I'd be happy to release, say, a version with levels 1 & 2 and explicitly ask for feedback. Then we could still adjust within the 3.7 time frame should any relevant concerns be raised by the community.

Also, I would appreciate if you could make a first statement about the next two bugs on my list:

bug 331647 [compiler][null] support flexible default mechanism for null-annotations
Here I propose to define additional annotations that support hierarchically defining defaults like: "in this package all method parameters without an explicit null annotation are defined to be @NonNull". This may be essential for adoption in big projects where it is simply not feasible to annotate every single type, but specifying smart defaults plus only a few exceptions may mean a big improvement.

bug 331651 [compiler][null] Support nullity profiles for libraries
Here I propose to support that null annotations can be superimposed over existing libraries, such that libraries without null annotations can still participate in the new analysis.
Several research projects have actually computed null contracts, e.g., for the JDK from the byte code, and if a file format for nullity profiles is defined these results could be consumed by our analysis. These additional bugs are not necessary for the feature to work, but will have great impact on the adoption. For the default mechanism there are a few very relevant design decisions to be made, which I'd like to discuss earlier rather than later. Regarding the nullity profiles I essentially only see issues of choosing an appropriate file format, letting others supply the content. No pending design decisions that should significantly impact the behavior. I hope this helps for your discussion, thanks.
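To make Level 1 concrete, here is a sketch of how a client might fill the compiler options map. Only the constant names come from the patch; the key strings and the class name are assumptions for illustration, not released JavaCore API:

```java
import java.util.HashMap;
import java.util.Map;

class NullAnnotationConfig {
    // Assumed key strings; in the patch these would be the values of the
    // (unreleased) JavaCore constants COMPILER_NULLABLE_ANNOTATION_NAME
    // and COMPILER_NONNULL_ANNOTATION_NAME.
    static final String NULLABLE_ANNOTATION_NAME =
            "org.eclipse.jdt.core.compiler.annotation.nullable";
    static final String NONNULL_ANNOTATION_NAME =
            "org.eclipse.jdt.core.compiler.annotation.nonnull";

    // Level 1: redirect the analysis to annotation types from another tool,
    // here FindBugs' annotations as mentioned earlier in this bug.
    static Map<String, String> findBugsProfile() {
        Map<String, String> options = new HashMap<String, String>();
        options.put(NULLABLE_ANNOTATION_NAME, "edu.umd.cs.findbugs.annotations.Nullable");
        options.put(NONNULL_ANNOTATION_NAME, "edu.umd.cs.findbugs.annotations.NonNull");
        return options;
    }
}
```

The point of Level 1 is precisely that only these two strings change when switching between annotation vocabularies; the checking behavior stays identical.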
After internal discussion this feature is put on hold because of concerns that the semantics of these annotations are not commonly agreed upon. Since we have some of the leading researchers CCed on this bug I'd like to confirm with you whether my interpretation is in line with the prevailing opinion: (Following my Eiffel past I'm using "entity" to commonly refer to variables, arguments and method results).

Null-annotations can be introduced either with a two-valued logic or a three-valued logic. The basic values are consensually seen as
- cannot be null:
  + entities declared with this annotation should only be assigned from values that have the same property (incl. object instantiation).
  + dereferencing such a value does not require a null check.
- can be null:
  + assigning/passing null to an entity declared with this annotation is OK.
  + dereferencing such a value should only be done with a prior check.

Only the third value is subject to debate: in some approaches this only occurs when an entity has no annotation, in other approaches a specific third annotation is used. Since my proposal is based on only two annotation types the questionable semantics only affect entities that have no annotation. I see three concepts applying:
1 use the rules of subtyping
2 leave this case unspecified; it remains at the discretion of the compiler whether and what to report.
3 define scoping rules by which all un-annotated entities in a given scope can be annotated at once.

1 is a must in object oriented programming. 2 is a weak excuse. 3 indeed deserves some discussion (this is subject of bug 331647). If this interpretation can be confirmed I conclude that solving this issue is equal to finding an agreement in bug 331647. Did I miss anything?
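The two basic values can be sketched in a few lines. The annotation and class names are hypothetical, and the comments mark what the proposed analysis would report; the runnable code only shows the null-safe paths:

```java
// Hypothetical annotation types mirroring the two consensus values.
@interface NonNull {}
@interface Nullable {}

class Semantics {
    static String render(@NonNull String s) {
        return "[" + s + "]"; // safe to use without a null check
    }

    static String demo(@Nullable String maybe) {
        // return render(maybe);   // would be flagged: @Nullable flows into @NonNull
        if (maybe != null) {
            return render(maybe);  // ok: null is excluded on this path
        }
        return render("default");  // ok: the literal is trivially non-null
    }
}
```

The guarded call inside the `if` is where the flow analysis earns its keep: the same expression is illegal before the check and legal after it.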
Due to the recent discussion, this won't make it for M5. Adjusting target milestone.
>After internal discussion this feature is put on hold because of concerns
>that the semantics of these annotations are not commonly agreed upon.

This is not the only reason. We (JDT) want to be able to easily adopt the feature because we want to use our own features/code (this is one of our prime directives), and we want to gain experience with how it plays in the real world before we unleash it:
- How easy and time-consuming is it to set up an existing project like JDT Core and JDT UI?
- What does the code look like afterwards (especially when it comes to annotated local variables)?
- Once the project is (assumed to be) correctly set up, how many
  - useful additional warnings do we get?
  - unwanted warnings do we get?
  - issues are still not discovered?
- How does it work across projects (some set up to use the feature and some not)?

Also, it is not put on hold: the work on this feature can proceed, but it needs a considerable amount of more work, e.g. provide a sound specification of the annotations and their scope, definition of defaults for the feature, discussion about that with the JDT team and, after that, collecting data from using the feature as outlined above.
> After internal discussion this feature is put on hold because of concerns
> that the semantics of these annotations are not commonly agreed upon.
>
> Null-annotations can be introduced either with a two valued logic or a three
> valued logic. The basic values are consensually seen as
> - cannot be null:
>   + entities declared with this annotation should only be assigned
>     from values that have the same property (incl. object instantiation).
>   + dereferencing such a value does not require a null check.
> - can be null:
>   + assigning/passing null to an entity declared with this annotation is OK.
>   + dereferencing such a value should only be done with a prior check.

I believe the most standard names for these are @NonNull and @Nullable. These meanings are entirely standard, and are agreed upon by all researchers and all tools. The only exception is FindBugs, which has a non-standard definition of "@Nullable". You can find some discussion of this in the Checker Framework Manual: http://types.cs.washington.edu/checker-framework/#findbugs-nullable

I would express them slightly differently, however:
- cannot be null:
  + no possibly-null value may be assigned to such an entity
  + dereferencing the entity is permitted
- can be null:
  + any value may be assigned to such an entity
  + dereferencing the entity is forbidden
- a check against null changes the type of an entity from "can be null" to "cannot be null"

This is just what Stephan has said, but phrased in a way that I believe is easier to understand and reason about.

> Only the third value is subject to debate: in some approaches this only
> occurs when an entity has no annotation, in other approaches a specific
> third annotation is used.

There is no third value. It is a fact about every reference type that it either includes or excludes null. The only question is how many annotations a programmer can/must write, and what defaults are used. There can indeed be a third annotation, but not a third nullness value.
http://wiki.eclipse.org/JDT_Core/Null_Analysis#Degree_of_Annotating is incorrect when it conflates these issues. It's also incorrect when it claims that lack of defaults would make additional values necessary. Somehow it concludes from this that full prevention of null pointer dereferences is not feasible, but I don't follow that reasoning either. > Since my proposal is based on only two annotation types the questionable > semantics only affect entities that have no annotation. I see three concepts > applying: > 1 use the rules of subtyping > 2 leave this case unspecified, it remains at the discretion of the compiler > whether and what to report. > 3 define scoping rules by which all un-annotated entities in a given scope > can be annotated at once. Rather than "scoping rules", I would call these "defaulting rules". #1 and #3 are both defaulting rules. In one case these are being inherited from supertypes, and in the other case from lexically enclosing program elements (or from the package). > 1 is a must in object oriented programming. I agree that the checker must enforce the rules of subtyping. That is orthogonal to whether supertypes are used for defaults. I think it will be helpful to keep these issues separate.
I am a strong proponent of compile-time nullness checking, and I feel Eclipse -- and Eclipse's users -- would benefit from it.

I don't fully understand the design of the checker. Can a user predict what code will be flagged as errors? It seems that the answer is "no": the algorithm is not specified, and the tool will detect some null pointer errors, but it isn't specified which. Even if the checker gives your code a clean bill of health, there is still no guarantee that your code will not suffer an NPE. If we don't know exactly what the system is supposed to do, then it's difficult to have an informed discussion about design alternatives. For example, if there are only two annotations, how do you know that is enough to achieve the desired level of checking? Or are you positing the two annotations, and defining the goal as whatever level of checking turns out to be achievable?

I would find it helpful to see a complete design and a user manual. Then we can give feedback on the design as a whole rather than on disjointed parts of it. There is http://wiki.eclipse.org/JDT_Core/Null_Analysis, but it addresses design as much as functionality, and it has some other problems. I think Stephan is very capable, but building a system by accreting one feature on another runs the risk of ending up with an unacceptable design, even if each individual piece made sense when it was proposed.

As part of the design rationale, it would be helpful to survey the related work in order to explicitly compare and contrast with other approaches that have been tried (FindBugs, the Checker Framework, other Eclipse plugins such as http://types.cs.washington.edu/checker-framework/eclipse/, etc.). For each one, what do you want to emulate? What parts of each tool do you dislike, and why?

Concrete experience with a complete implementation is probably even better than speculating about design alternatives. Maybe a separate plug-in might be better for the short run.
It pains me to suggest delaying general availability to all Eclipse users, since nullness checking could be such a useful capability. But we need to get it right, and we would hope to transition it quickly into the main distribution.
(In reply to comment #86)
> >After internal discussion this feature is put on hold because of concerns
> >that the semantics of these annotations are not commonly agreed upon.
> This is not the only reason. We (JDT) want to be able to easily adopt the
> feature because we want to use our own features/code (this is one of our prime
> directives)

Does this mean JDT will be allowed to use annotations any time soon? That would be wonderful news!

> and we want to gain experience how it plays in the real world
> before we unleash it:

I played a bit with the implementation and I will report some findings below.

> - How easy and time-consuming is it to set up an existing project like JDT Core
> and JDT UI?

Well, the actual "set up" means only setting one or two preferences. I guess that's not what you mean, but adoption will indeed include inserting appropriate annotations into the code. That I have not done yet.

> - What does the code look like afterwards (especially when it comes to annotated
> local variables)?

For most local variables annotations should not be necessary, as the compiler can already infer the null status pretty well.

> - Once the project is (assumed to be) correctly set up, how many
>   - useful additional warnings do we get?
>   - unwanted warnings do we get?
>   - issues are still not discovered?

Before testing null warnings in the JDT/Core code I first enabled warnings for potential null dereference (which is off currently, see also bug 335118). This warning gives us a base level of 379 warnings in JDT/Core. Next I turned on annotation-based null checking, which didn't change the outcome because there are no annotations :)

Then I tweaked the implementation to make nonnull the default for all our methods (parameters and return). Result: 2959 errors, 19434 warnings. It turns out most errors relate to overriding methods from types outside the project, for which I still assumed no default.
Overriding with nonnull default was flagged as illegal tightening of parameter contracts. I changed the implementation to ignore this specific situation. Result: 3256 errors, 21790 warnings. Yes, these are more, probably because the previous experiment avoided secondary errors, which now came to the surface.

Details:
Errors:
1299 passing null to a nonnull parameter
1830 returning null from a nonnull method
Warnings:
20103 insufficient nullness information: passing a value from 3rd-party unannotated entities into our nonnull entities
1075 redundant null comparison
414 dead code
182 potential null pointer access
(Please excuse if the numbers don't exactly sum up.)

Apparently the JDT/Core is programmed in a very defensive manner: reversing the default to nullable gave 0 errors and much fewer warnings:
2771 potential null pointer access
107 redundant null comparison
11 dead code

From here would start the journey of actually inserting annotations until all errors/warnings are resolved.

> - How does it work across projects (some set up to use the feature and some
> not)?

I already mentioned the issue of interfacing with unannotated code. Working on the same code with different settings is certainly possible, although I wouldn't see a good reason for doing so. Those working without the feature will see fewer errors/warnings, because the analysis will revert to assuming nothing. They will also see some more (bogus) warnings, because dereferencing nonnull values is now safe, which they don't see with the feature turned off. In such a mixed situation it is probably wisest to completely turn null analysis on or off.

> Also, it is not put on hold: the work on this feature can proceed but it needs
> a considerable amount of more work, e.g.
> provide a sound specification of the
> annotations and their scope, definition of defaults for the feature,

I already made a first shot at such a specification, both in the API doc and in more consumable form in http://wiki.eclipse.org/JDT_Core/Null_Analysis#Null_Contracts
Please let me know what aspects you feel need improvement. The overall default will be: off :)
User-definable defaults are the subject of bug 331647. I hope I'll find some time very soon to add more ideas to the bug.

> discussion
> about that with the JDT team and after that, collect data from using the
> feature as outlined above.

To better enable experimentation not just by me, I will upload an Object Teams based implementation once M5 is available. Would it be a good idea to also create a branch of the JDT/Core code just for the experiment of adding specific null annotations? I think it'd be great if we could collaborate on this in some way.

From the large number of errors/warnings I strongly feel adding annotations must be done incrementally, perhaps one package at a time, perhaps in even smaller steps. Considering the specifics of our code and taking also fields into the picture I envision that we will need different defaults like:
fields in compiler.ast.*: nullable by default
fields in compiler.lookup.*: nonnull by default.
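A per-scope default in the spirit of bug 331647 could look roughly like the sketch below. The @NonNullByDefault annotation and all names here are hypothetical (no such annotation exists in the patch under discussion); the comments indicate what the analysis would enforce:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical default annotation: every unannotated reference declared
// within the annotated scope is treated as nonnull.
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.TYPE, ElementType.PACKAGE})
@interface NonNullByDefault {}

@Retention(RetentionPolicy.CLASS)
@interface Nullable {}

@NonNullByDefault
class LookupStyle {
    String name = "binding";      // implicitly nonnull under the class default
    @Nullable String lastError;   // explicit exception to the default

    String describe() {
        return name;              // returning null here would be flagged
    }
}
```

With such a mechanism only the exceptions need annotating, which is what would make per-package defaults like "compiler.lookup.*: nonnull" practical.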
(In reply to comment #87) > I believe the most standard names for these are @NonNull and @Nullable. > > These meanings are entirely standard, and are agreed upon by all > researchers and all tools. The only exception is FindBugs, which > has a non-standard definition of "@Nullable". That was my impression, too. Thanks for confirming. > > Only the third value is subject to debate: in some approaches this only > > occurs when an entity has no annotation, in other approaches a specific > > third annotation is used. > > There is no third value. It is a fact about every reference type that it > either includes or excludes null. In green-field language design I would agree, but we can't just discard the current semantics in Java, where each reference type has *both* conflicting properties: null is a legal value and dereferencing is legal, too. Existing Java programs certainly make use of this "feature" and for those programs no annotations will ever make them valid in the two-valued logic. I support the idea that ideally every type in the program should be either nullable or nonnull. In some comment I suggested to add another option for flagging any declaration that is neither marked nullable nor nonnull. If that flag is an error, we get the semantics you describe. If that is warning or ignore we have a "compatibility" mode that also copes with legacy code. Also, for a tertium-non-datur model we would need the capability to annotate each occurrence of a reference type, which would require use of JSR 308. Unfortunately, we can't ship support for JSR 308 at this point in time, so a number of locations must use the legacy Java types anyway. > http://wiki.eclipse.org/JDT_Core/Null_Analysis#Degree_of_Annotating is > incorrect when it conflates these issues. Please consider that part of the wiki page more as a brainstorming on design options. I did not intend to draw any definite conclusions but only elaborate options and motivations. 
> It's also incorrect when it > claims that lack of defaults would make additional values necessary. > Somehow it concludes from this that full prevention of null pointer > dereferences is not feasible, but I don't follow that reasoning either. Such conclusion was not my intention and I can't find where it says so. > > Since my proposal is based on only two annotation types the questionable > > semantics only affect entities that have no annotation. I see three concepts > > applying: > > 1 use the rules of subtyping > > 2 leave this case unspecified, it remains at the discretion of the compiler > > whether and what to report. > > 3 define scoping rules by which all un-annotated entities in a given scope > > can be annotated at once. > > Rather than "scoping rules", I would call these "defaulting rules". If you agree that we speak about defaults that affect all unannotated members of a given scope then I don't see a problem with either name. > #1 and > #3 are both defaulting rules. In one case these are being inherited from > supertypes, and in the other case from lexically enclosing program elements > (or from the package). > > > 1 is a must in object oriented programming. > > I agree that the checker must enforce the rules of subtyping. That is > orthogonal to whether supertypes are used for defaults. I think it will > be helpful to keep these issues separate. Separate discussion is fine. Is anybody actually suggesting to use subtyping without defaulting, i.e., force the developer to manually repeat inherited annotations? My intention is (and that I specified and implemented) that annotations are actually inherited unless overridden (in conforming ways). This means a reduction in locally visible information, but I'm sure we can help here with some UI support (like mentioning inherited annotations in hovers).
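The intended inheritance can be sketched in a few lines (annotation and class names hypothetical): the override need not repeat the annotation, yet remains bound by the inherited contract.

```java
// Hypothetical annotation type for the inherited-contract sketch.
@interface NonNull {}

class Base {
    @NonNull String id() { return "base"; }
}

class Sub extends Base {
    @Override
    String id() {        // inherits the @NonNull return contract implicitly
        return "sub";    // returning null here would still be flagged
    }
}
```

UI support such as showing inherited annotations in hovers would compensate for the contract not being locally visible in Sub.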
> Does this mean, JDT will be allowed to use annotations any time soon?
> That would be wonderful news!

JDT UI already switched some of its projects. For JDT Core it's planned during 3.8.

Thanks a lot for the numbers, Stephan. They show well how important it will be to make this feature configurable, so that the resulting warnings on "my" project show a realistic picture and the migration path is not so hard that it frightens people off.

I would support continuing this in a branch. It would be easier for us than using the Object Teams plug-in.
(In reply to comment #91)
> > Does this mean, JDT will be allowed to use annotations any time soon?
> > That would be wonderful news!
> JDT UI already switched some of its projects. For JDT Core it's planned
> during 3.8.
>
> Thanks a lot for the numbers Stephan. They show well how important it will be
> to make this feature configurable so that the resulting warnings on "my"
> project show a realistic picture and that the migration path is not so hard
> that it frightens people off.

Exactly. And for projects of the size of JDT/Core there must be options for controlling it at a finer grain than only per project.

> I would support continuing this in a branch. It would be easier for us than
> using the Object Teams plug-in.

Perhaps my comment re branching was unclear. I proposed this not for the implementation of the analysis but for applying the tool and adding actual null annotations to existing code. Given that some JDT UI projects have already switched to Java 5, perhaps the easiest road would be to use one of those projects for real-world application of our null annotations? I'd be happy to help there (and learn more about the JDT UI code :)

Regarding the implementation of the analysis I would rather not do it in a branch. Keeping one more branch up to date is much more work than maintaining the implementation using Object Teams: instead of a patch of 2953 lines of code affecting a significant number of classes, I have refactored the implementation into one team class of just 953 LOC. So while we're keeping this separate from JDT/Core HEAD, this is by far the cleanest solution.

As to ease of use: you just need the Object Teams runtime (370kb) and then add the null analysis plugin. You can independently update JDT/Core and the null analysis, which is not possible when using a branch. The only downside of this approach is a performance penalty caused by the external integration into the core of the compiler.
But it's still quite usable, and the feature can easily be turned off if it hurts :) OTOH, this shows that in the end this must be an integral part of the compiler, where it does not impact performance. The OT version is in this case just for development / early adoption.
> > I would support to continue this in a branch. It would be easier for us than > > using the Object Teams plug-in. > > Perhaps my comment re branching was unclear. I proposed this not for the > implementation of the analysis but for applying the tool and adding actual > null annotations to existing code. I see. I'm not in favor of that. I would still expect that, with the right setup, we won't get so many warnings, so that adoption of the feature remains possible. If it's really that hard/expensive then it won't happen. > Regarding the implementation of the analysis I would rather not do it in > a branch. Keeping one more branch up-to-date is much more work than > maintaining the implementation using Object Teams: Instead of a patch of > 2953 lines of code affecting a significant number of classes I have > refactored the implementation into one team class of just 953 LOC. > So while we're keeping this separate from JDT/Core HEAD this is by far the > cleanest solution. k. > > As to ease of use: you just need the Object Teams runtime (370kb) and then > add the null analysis plugin. You can independently update JDT/Core and > the null analysis, which is not possible when using a branch. k. Can you provide a link to the plug-in once it is available, plus steps to install? Is it EPL, so that we can easily try it out? If not, we (IBMers) will first have to get legal approval to download and use it.
(In reply to comment #93) > > > I would support to continue this in a branch. It would be easier for us than > > > using the Object Teams plug-in. > > > > Perhaps my comment re branching was unclear. I proposed this not for the > > implementation of the analysis but for applying the tool and adding actual > > null annotations to existing code. > I see. I'm not in favor of that. I would still expect that, with the right > setup, we won't get so many warnings, so that adoption of the feature remains > possible. > If it's really that hard/expensive then it won't happen. OK, so let's drop the idea of a branch for adoption. Shall we then start using it in one of the JDT/UI projects? If it's OK to add just two (essentially empty) annotation types as, say, org.eclipse.jdt.internal.corext.annotation.Nullable org.eclipse.jdt.internal.corext.annotation.NonNull we could start right now actually, or more likely, right after shipping M5. > > As to ease of use: you just need the Object Teams runtime (370kb) and then > > add the null analysis plugin. You can independently update JDT/Core and > > the null analysis, which is not possible when using a branch. > k. Can you provide a link to the plug-in once it is available, plus > steps to install? Is it EPL, so that we can easily try it out? If not, we > (IBMers) will first have to get legal approval to download and use it. Yes, sure. All is EPL, all is installable using the normal p2 UI, and yes, I will give steps for install. I think I will make it available no later than next weekend.
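For illustration, here is a minimal sketch of what such "essentially empty" annotation types could look like. The names follow the proposal above, but everything else is an assumption: in particular, RUNTIME retention is used here only so the small demo can inspect the annotation reflectively (a real implementation would more likely use CLASS retention so annotations reach class files without a runtime dependency).

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Sketch of two "essentially empty" null annotation types, plus a tiny
// demo that an annotated method really carries the marker.
public class NullAnnotationsSketch {

    @Documented
    @Retention(RetentionPolicy.RUNTIME) // assumption for the demo; CLASS retention is more realistic
    @Target({ ElementType.METHOD, ElementType.PARAMETER, ElementType.LOCAL_VARIABLE })
    @interface NonNull {}

    @Documented
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ ElementType.METHOD, ElementType.PARAMETER, ElementType.LOCAL_VARIABLE })
    @interface Nullable {}

    @NonNull
    static String greeting() { return "hello"; }

    public static void main(String[] args) throws Exception {
        Method m = NullAnnotationsSketch.class.getDeclaredMethod("greeting");
        System.out.println(m.isAnnotationPresent(NonNull.class));
    }
}
```

The annotations carry no members at all; all semantics would live in the compiler's analysis, which is exactly what makes them cheap to add to a project up front.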
> OK, so let's drop the idea of a branch for adoption. > Shall we then start using it in one of the JDT/UI projects? > If it's OK to add just two (essentially empty) annotation types as, say, > org.eclipse.jdt.internal.corext.annotation.Nullable > org.eclipse.jdt.internal.corext.annotation.NonNull > we could start right now actually, or more likely, right after shipping M5. You can start to use it as a test case, but clearly no changes go into HEAD (of JDT UI) unless the feature itself has proven to do the job and is in JDT Core HEAD.
(In reply to comment #88) > ... Concrete experience with a complete implementation is probably > even better than speculating about design alternatives. IMHO we have enough collective experience to make a well-informed decision. E.g., the Java Modeling Language (JML) has had (the equivalent of) nullity annotations (@NonNull and @Nullable) as part of the language, and supported by its checkers, since the late '90s --- JML's ancestor notations and tools supported static nullity checks even before that. The new JmlEclipse (formerly JML4, the "JML JDT") has supported nullity annotations since 2006. JmlEclipse builds upon the existing JDT nullity checking code. In addition to @NonNull and @Nullable, it offers coarse-grained control over the default nullity (for unannotated declarations) via a compiler option; and fine-grained control using @NonNullByDefault or @NullableByDefault, which can be applied to top-level class or interface declarations. As I had mentioned to Stephan in a separate e-mail: having package-level control may well be desirable and could be implemented via a compiler option that takes a list of package names -- javax.*, org.eclipse.jdt.*. FYI, in 2006 we actually conducted an experiment in annotating part of the JDT core (non-invasively using .jml interface specification files, as can be done in JML). Back then our goal was to demonstrate that choosing non-null as the default generally matched designer intent and resulted in the need to manually annotate fewer declarations. In case there is interest: our most recent paper on the subject is: P. Chalin, P. R. James, and F. Rioux, "Reducing the Use of Nullable Types through Non-null by Default and Monotonic Non-null", IET Software Journal, 2(6):515-531, 2008. As Michael has pointed out, and as is mentioned in Section 3.8.2 of the Checker Framework documentation: FindBugs is the odd one out.
Other tools agree on the standard interpretation of @NonNull and @Nullable, and have a mechanism for specifying a default for non-annotated declarations. Hence, I believe that the JDT can opt for the (de facto) standard semantics, do as JmlEclipse and the Checker Framework have done, and choose to "not emulate this non-standard behavior of FindBugs" [Section 3.8.2].
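The fine-grained default control described above can be sketched as follows. The annotation types here are illustrative stand-ins, not JmlEclipse's actual ones: a type-level @NonNullByDefault makes unannotated parameters and returns non-null, and @Nullable is the explicit opt-out. The comments describe what the proposed checker would enforce; plain javac checks none of it.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Sketch: a per-type nullity default with an explicit @Nullable escape hatch.
public class DefaultSketch {

    @Retention(RetentionPolicy.CLASS) @Target(ElementType.TYPE)
    @interface NonNullByDefault {}

    @Retention(RetentionPolicy.CLASS)
    @Target({ ElementType.METHOD, ElementType.PARAMETER })
    @interface Nullable {}

    @NonNullByDefault
    static class Repository {
        // Unannotated: parameter and return default to non-null under the type's default.
        String require(String key) { return key.trim(); }

        // Explicit opt-out: callers must expect null here.
        @Nullable String find(String key) { return null; }
    }

    public static void main(String[] args) {
        System.out.println(new Repository().require(" x "));
    }
}
```

The appeal of the default is that the common case (non-null) costs no annotations, matching the experimental findings quoted above.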
As mentioned above I have migrated the latest patch into an OT/Equinox plugin that can be installed on top of a plain vanilla Eclipse SDK 3.7 M5. See here for installation instructions http://wiki.eclipse.org/JDT_Core/Null_Analysis#Installing_the_prototype On that page you will also find the first steps for enabling and configuring the new analysis. The implicit import option is no longer supported. The emulation of annotation types is still possible, but the problems mentioned by Markus (comment 80) are actually trickier than I first thought, so this option currently causes some minor havoc in the UI. Using custom annotation types is of course still possible. The prototype has some minor improvements over the last patch but hasn't been tested as rigorously as I'd have liked. As a special bonus I implemented some first quickfixes, which should give a good start for actually annotating your code. The quickfixes are also briefly described on the wiki page. I played some more with using null annotations in the JDT/Core code, and will report on my experience shortly. OTOH, I'd be happy to hear what your impression is. Clearly, this isn't finished, and for applicability in large existing projects I suppose we need
- nullity profiles for libraries to reduce warnings from API usage
- per package, per type defaults to facilitate incremental migration
- more quickfixes for semi-automatically annotating existing code
Of these, only the quickfixes item is new in this discussion.
After a forced break I'm back to working on my prototype. When playing with new quickfixes I noticed that two intended features are not too well supported by the existing infrastructure in JDT/Core:
1. Defining per-package defaults using annotations in package-info.java.
2. Inheritance of null contracts.
Both features work just fine during full builds but may break, e.g., when the Java model and DOM are involved, too. Before going into details, let me ask whether you consider these concepts worth putting more effort into. Regarding (1): Patrice suggested in bug 331647 to restrict annotations for nullity defaults to top-level types only. Please comment on bug 331647 whether you believe using package-info.java adds a valuable option to the feature. In this matter I can be easily persuaded either way. Issue (2) is a bit more involved. The options are:
a) null annotations of a super method automatically affect all overriding methods (unless explicitly redefined within the rules of conformance).
b) overriding methods must explicitly repeat the null annotations of the overridden method (with the same option of redefinition).
In terms of reducing the effort of adding necessary annotations to the code I strongly favor (a), and that's what I've implemented from the beginning. Here Patrice said that he prefers explicit annotations for the sake of visibility. I'd like to counter that inserting too many annotations might be bad for the visibility of the actual code :-/ Within the IDE I would prefer other visualizations like hovers to show the effective null contract (incl. inheritance and applicable defaults). Please have your say on automatic versus manual inheritance of null annotations. Thanks.
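To make the two options concrete, here is a minimal sketch of option (a). The annotation type is a stand-in, and the behavior described in the comments is what the proposed checker would do under each option; javac itself enforces none of this.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Sketch of inheritance option (a): an overriding method silently inherits
// the null annotations of the method it overrides.
public class InheritanceSketch {

    @Retention(RetentionPolicy.CLASS) @Target(ElementType.METHOD)
    @interface NonNull {}

    static class Base {
        @NonNull Object value() { return "base"; }  // declares: never returns null
    }

    static class Sub extends Base {
        // Option (a): this override implicitly inherits @NonNull, so a
        // `return null;` here would be flagged even though the annotation
        // is not repeated. Option (b) would instead require the override
        // to restate @NonNull explicitly before any checking applies.
        @Override Object value() { return "sub"; }
    }

    public static void main(String[] args) {
        System.out.println(new Sub().value());
    }
}
```

Under option (a) the source of Sub stays uncluttered; under option (b) the reader of Sub sees the contract without consulting Base, which is the visibility trade-off debated above.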
Some more updates: The prototype now has four distinct quickfixes as described at http://wiki.eclipse.org/JDT_Core/Null_Analysis#Cleaning_up I've filed bug 337977 for those and there I could use some help on implementation details. The new analysis already helped me find one dormant bug, see bug 337964. It also would have prevented bug 337646, which I discovered while working on this bug :) Since I promised some more numbers from experiments let me quote myself from bug 337977: "In my experiments when I applied @NonNullByDefault to the package org.eclipse.jdt.internal.compiler.lookup the quickfixes let me reduce the numbers of errors/warnings from 346/1045 to 89/809. Up to this point applying the quickfixes was mostly a no-brainer. In order to continue the cleanup from here on, knowledge about design intent is needed, but still quickfixes can help a lot." To that point I only used information about definite null flows. The remaining errors complain about potential nulls and might call for some human judgment about design intent etc. I'll keep improving the fixes so humans only need to look at the interesting cases.
(In reply to comment #98) > Issue (2) is a bit more involved. The options are: > a) null annotations of a super method automatically affect all overriding > methods (unless explicitly redefined within the rules of conformance). > b) overriding methods must explicitly repeat the null annotations of the > overridden method (with the same option of redefinition). > In terms of reducing the effort of adding necessary annotations to the code > I strongly favor (a) and that's what I've implemented from the beginning. I would also prefer a), simply because it is a contract which the super method defines and which shouldn't be broken unless you want to break the super implementation. Anything else doesn't make sense to me. > Here Patrice said that he prefers explicit annotations for the sake of > visibility. I'd like to counter that inserting too many annotations > might be bad for the visibility of the actual code :-/ Also 100% agreed. Visibility here should be provided by the JDT UI, supporting this contract with some kind of overlay icon / decoration and of course the Javadoc + annotation hover.
FYI, bug 338339 proposes to release just the required API within the M6 time frame so we keep a door open for possibly getting this into 3.7. That bug also contains a (personally biased) summary of where we currently are. Also: I just uploaded version 0.7.1 of my prototype and updated the wiki http://wiki.eclipse.org/JDT_Core/Null_Analysis#Actual_Strategy_in_the_JDT (And congrats to Andrei: with your last comment the bug turned 100 :)
(In reply to comment #98) > After a forced break I'm back to working on my prototype. Sorry for the late response; I was busy with other things too. > ... > Issue (2) is a bit more involved. The options are: > a) null annotations of a super method automatically affect all overriding > methods (unless explicitely redefined within the rules of conformance). > b) overriding methods must explicitly repeat the null annotations of the > overridden method (with the same option of redefinition). > In terms of reducing the effort of adding necessary annotations to the code > I strongly favor (a) and that's what I've implemented from the beginning. > Here Patrice said that he prefers explicit annotations for the sake of > visibility. I'd like to counter that inserting too many annotations > might be bad for the visibility of the actual code :-/ IMHO, the adoption of non-null by default is what will do the most to counter having to insert too many annotations. > Within the IDE I would prefer other visualizations like hovers to show the > effective null contract (incl. inheritance and applicable defaults). I guess that you were against the addition of the @Override annotation then? ;-) Keep in mind that code just as often shows up in course notes, tutorials, books etc. and having explicit annotations (like @Override) in code that isn't being viewed via an IDE is very useful. > Please have your say on automatic versus manual inheritance of null > annotations. Thanks. Actually, come to think of it, there is something more fundamental than explicitness of annotations at stake here. I think that part of the issue is that you are seeing nullity annotations as sugar for particular assertions in contracts. Although this was originally how nullity annotations were treated in JML, this is no longer the case (and has been so for years now). 
In fact, Eiffel, JML, Spec#, JastAdd, Nice and Ernst's Nullness-checker (built upon the Checker Framework) all define the meaning of nullity annotations via a type system (e.g. called a non-null type system in JML, Spec# and JastAdd; called a Nullness type system in Ernst's Nullness-checker). It may be better to adapt the terminology used in http://wiki.eclipse.org/JDT_Core/Null_Analysis to refer to a non-null type system rather than "null contracts". Several researchers have given non-null types and the corresponding use of nullity annotations careful thought including (but not limited to) Michael Ernst, Manuel Fahndrich, Rustan Leino, and myself. It is important to note that *none* of the previously mentioned languages / tools do what you have called "null contract inheritance" -- and IMHO, when one views this issue under the perspective of a non-null type system, this becomes the only approach that is consistent with the way Java (C# and Eiffel) handle types. Also note that Java currently only permits:
* Covariance for method return types, and
* Invariance for parameters.
In the rules that you have defined, you support contravariance for parameters. In JML, we decided to be consistent with the Java rules; hence enforcing non-null type invariance of parameters. Spec# and JastAdd enforce invariance in both cases. Eiffel, Nice and the Nullness-checker allow contravariance for parameters. References to the languages and tools mentioned above can be found in: P. Chalin, P. R. James, and F. Rioux, "Reducing the Use of Nullable Types through Non-null by Default and Monotonic Non-null", IET Software Journal, 2(6):515-531, 2008. DOI: http://dx.doi.org/10.1049/iet-sen:20080010, or if you cannot access that, here is a pre-print: http://encs.concordia.ca/~chalin/papers/ChalinJamesRioux08-IET-Sw-Nonnull-preprint.pdf
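For reference, the Java baseline that the variance discussion builds on can be shown without any null annotations; this sketch only demonstrates Java's own overriding rules (covariant returns allowed, parameter types invariant).

```java
// Java's own variance rules for overriding, which the nullity type
// systems above either mirror or deliberately relax.
public class VarianceSketch {

    static class Base {
        Number get() { return 1; }   // return type: Number
        void put(String s) {}        // parameter type: String
    }

    static class Sub extends Base {
        // Covariant return: Integer is a subtype of Number, so this overrides.
        @Override Integer get() { return 2; }

        // Parameters are invariant: put(Object) does NOT override put(String),
        // it merely overloads it (hence no @Override annotation is possible here).
        void put(Object o) {}
    }

    public static void main(String[] args) {
        Base b = new Sub();
        System.out.println(b.get());  // dynamic dispatch reaches Sub.get()
    }
}
```

By analogy, a null annotation scheme that allows contravariant parameter annotations (a @NonNull parameter overridden by a @Nullable one) goes beyond what Java permits for the underlying types, which is the inconsistency Patrice points out.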
I concur with Patrice's comments. He nicely emphasized the need to put the implementation on a firm theoretical foundation, which is a cost-effective way to avoid implementation surprises and mistakes. As a minor point, I would add that to reduce annotation burden, the "non-null-except-locals" default is even more effective than the "non-null" default (which, as Patrice notes, is far more effective than the "nullable" default).
Patrice, Mike, Thanks for your comments. I do believe we agree on most issues at hand. E.g. everybody seems to agree that nonnull-by-default actually means ...-except-locals. The one issue I believe we need to discuss is type-system vs. contracts. Both views do have sufficient theoretical foundation I'm sure :) To me the difference is in organizational matters, not so much in semantics, given that all these can be mapped down to Hoare logic etc. E.g.: In my model we have the option that some program elements have *no* contract attached, whereas the type-system based approaches tend to say each type is either nullable or non-null (tertium non datur). Also part of my motivation for speaking about contracts has an organizational touch: The type system is part of Java; changing the typing rules is changing Java, which the JDT cannot do. Using a notion of contracts, however, it is easier to perceive this as a pure addition. All existing Java programs are still valid programs. This is a fragment of a valid Java program:

  Object foo() { return null; }

  Object result = foo();
  System.out.println(result.toString());

From my understanding the type-system approaches will have no means to actually compile this program, because we cannot specify a return type for foo() that will render all statements legal. You and I know it is a wrong program, but a Java compiler *must* compile it without errors. In my approach, the above can be compiled as long as it is unaffected by a nullness default. Once we apply a default (per global setting or by a default-annotation) we force ourselves to clean up. I hold it to be essential, however, that the compiler be able to compile projects where some parts are already blessed by annotations and others still apply the unsafe Java type system without the safety net of any contracts. Apart from these regards of compatibility with Java (which is a must), do you see further consequences of choosing contracts over type system or vice versa? What exactly are the implications? I understand the issue of contravariant parameters. Within the type-system view, invariant parameters fit Java better, I agree. Any more implications?
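A runnable version of the fragment above makes the point concrete: this is legal Java that every compiler must accept, yet it fails at runtime, and no single nullity type for foo()'s return would render every statement legal. The try/catch is added here only to make the outcome visible.

```java
// Legal-but-wrong Java: compiles without errors, throws at runtime.
public class LegacyFragment {

    static Object foo() { return null; }  // no nullity specification at all

    public static void main(String[] args) {
        Object result = foo();
        try {
            System.out.println(result.toString()); // NPE: result is null
        } catch (NullPointerException e) {
            System.out.println("NPE as predicted");
        }
    }
}
```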
(In reply to comment #104) > Thanks for your comments. I do believe we agree on most issues at hand. > E.g. everybody seems to agree that nonnull-by-default actually means > ...-exect-locals. Yes indeed. > The one issue I believe we need to discuss is type-system vs. contracts. > Both views do have sufficient theoretical foundation I'm sure :) > > To me the difference is in organizational matters not so much in semantics, Organizationally (as well as how the feature is explained to end users/developers), the approach can be different from the chosen base semantics. E.g., in JmlEclipse, our semantics is based on a non-null type system and yet, the delta we implemented over the JDT core does not affect the type system modules. > ... Also part of my motivation for speaking about contracts has an > organizational touch: The type system is part of Java, changing the > typing rules is changing Java, which the JDT cannot do. > Using a notion of contracts, however, it is easier to perceive this as > a pure addition. IMHO one can have a convincing argument that either approach (type-system vs. contract) is a clean addition. E.g., nullity checking via the checkers framework is a pure addition. > All existing Java programs are still valid programs. > > This is a fragment of a valid Java program: > > Object foo() { return null; } > > Object result = foo(); > System.out.println(result.toString()); > > From my understanding the type-system approaches will have no means to > actually compile this program because we cannot specify a return type for > foo() that will render all statements legal. > You and me know it is a wrong program, but a Java compiler *must* compile > it without errors. Consider a very similar example: Object result = null; System.out.println(result.toString()); The JLS does not specify that nullity flow analysis is required. 
Hence this code fragment is legal Java (it compiles without errors) and yet, if you had the current JDT null checks enabled, it would be flagged as an error. Is this to be an argument against the already implemented null checks in the JDT? I think not. The same can be said of the code snippet you provided. If a developer doesn't want potential nullity problems to be flagged, then he/she can turn off the compiler option. > In my approach, the above can be compiled as long as it is unaffected by > a nullness default. Once we apply a default (per global setting or by a > default-annotation) we force ourselves to clean up. I hold it to be > essential, however, that the compiler be able to compile projects where > some parts are already blessed by annotations and others still apply the > unsafe Java type-system without the safety net of any contracts. In the DSRG we have been through the exercise of porting fairly large code bases to make use of the benefits of nullity checking. In fact, we have done this for the JDT core (a few years back). As you know, the strategy is to leave the compiler option to nullable-by-default, which matches the semantics of Java, and then convert a small collection of modules (classes or interfaces at a time). > Apart from these regards of compatibility with Java (which is a must), > do you see further consequences of choosing contracts over type system > or vice versa? What exactly are the implications? I understand the issue > of contravariant parameters. Within the type-system view, invariant > parameters fit Java better, I agree. > More implications? As I mentioned before, in JML we had nullity annotations as sugar for contract assertions, and in the end, a type-system-based semantics was adopted because it was perceived as conceptually simpler and it avoided pitfalls that arise when you have true support for method contracts and invariants. Let me give you one concrete example that comes to mind.
Consider the following class:

  class C {
    @Nonnull Integer i = 0;
    void m() {
      ...;
      this.i = null;
      ...;
      this.i = 1;
      ...;
    }
  }

If we treat nullity annotations as sugar for contract assertions of the form i != null, then the code above is legal because contracts are only checked on entry and on exit from a method. Being able to assign null to a field declared non-null is not usually the semantics developers have in mind :). I have not delved deeply into the feature of nullity checks for a while now but I can, if necessary, come up with other examples. I am hoping that this (and the fact that most other tools doing nullity checks adopted the type-system-based semantics) is sufficient to convince you of the appropriateness of adopting a type-system-based semantics.
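A runnable variant of this example (without annotations, since javac would ignore them anyway) shows why entry/exit-only checking is too weak: the field really is null between the two assignments, so any nested or concurrent read in that window observes a null where the declaration promises none.

```java
// The field i is conceptually @NonNull. Under a contracts-only reading,
// the transient null assignment inside m() is "legal" because the
// invariant is only checked at method entry and exit.
public class FieldSketch {

    Integer i = 0;  // conceptually @NonNull

    int observe() { return i.intValue(); }  // would NPE if called while i == null

    void m() {
        i = null;      // contract view: unchecked inside the method body
        // observe();  // uncommenting this call here throws NullPointerException
        i = 1;         // contract re-established before exit
    }

    public static void main(String[] args) {
        FieldSketch c = new FieldSketch();
        c.m();
        System.out.println(c.observe());
    }
}
```

A type-system reading instead rejects `i = null;` outright, because null simply is not a value of the field's (non-null) type at any program point.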
(In reply to comment #105) All makes sense. Still some comments/questions: Re existing null analysis in the JDT compiler: Per default NullReference diagnostics are configured as warnings. PotentialNullReference and RedundantNullCheck are even ignored by default. My (unfounded) assumption is that people use these as "soft" hints by the compiler that something might be wrong. These do not signal that any explicit rule has been violated but rather reflect static inference of anticipated runtime problems. Here it is obvious that we do not change Java, but only provide additional analysis as a courtesy to interested developers. All existing Java programs can still be compiled and no-one will complain. By contrast, I intended for null contract violations to be errors by default, because these indicate that a developer has violated an explicit declaration. Hence, I'm extra careful to avoid this being perceived as a "change" of the language. Would you say it's a bad idea to define different defaults for NullReference and annotation-based nullity errors? Should all these be just warnings? Re your example:

> class C {
>   @Nonnull Integer i = 0;
>   void m() {
>     ...;
>     this.i = null;
>     ...;
>     this.i = 1;
>     ...;
>   }
> }
>
> If we treat nullity annotations as sugar for contract assertions of the form
> i != null, then the code above is legal because contracts are only checked on
> entry and on exit from a method. Being able to assign null to a field declared
> non-null is not usually the semantics developers have in mind :).

Good point. Here is another example for illustrating my point:

  class A {
    B foo(B in) { return in; }
  }

  @NonNullByDefault
  class B {
    @Nullable Object bar() { return null; }
    String test(A a) {
      B b1 = a.foo(null);
      B b2 = a.foo(this);
      Object o = b2.bar();
      return o.toString();
    }
  }

Think of A being legacy code and B currently being cleaned up using null annotations.
I want to provide means so that developers see exactly one error/warning against dereferencing o, because we know bar() is designed to potentially return null. Still, dereferencing b2 should not be flagged, because we strictly don't have sufficient information about foo(). (Maybe foo(null) is the actual bug here? But I don't want to force the developer to edit A while cleaning up B). Don't you think this distinction is justified? The way I understand the type-system-based approach, we must consider the result of foo() as @Nullable and hence cannot make this distinction between foo() and bar(). If *every* type is either nullable or nonnull, how can we achieve exactly the one error/warning, without being flooded with those from usages of legacy code? Would we need new styles of suppressing such warnings? To suppress the warning on b2 but not the one on o, we would need some persistent annotation in A (which means @SuppressWarnings cannot be used for this purpose, since it has @Retention(SOURCE) ). Conversely, we could (implicitly?) mark B as @HasNullAnnotations and avoid client-side warnings for any type A that does not have this annotation. (I.e., any class without @HasNullAnnotations is considered legacy code and invocations of methods from this type will never cause null-warnings). Of course, the same could also be achieved with a third annotation @UnknownNullness, or @UnknownNullnessByDefault, but I think no-one commenting on this bug actually liked that. If we find a nice solution for this issue I'm close to being convinced towards type modifiers and against contracts. Any suggestions?
Setting target to 3.8. Once Java 7 support is done we should continue the discussion/work on this bug.
(In reply to comment #107) > Setting target to 3.8. Once Java 7 support is done we should continue the > discussion/work on this bug. With pleasure! Here's a question to JDT folks: with Patrice I had the discussion whether annotations should be seen as contracts or as part of the type system. The semantics are mostly the same whichever way you look at it. The main difference that I see regards the integration of annotated code with legacy code without annotations. I'm assuming that we must support the following for legacy code used in a project that generally has null annotations enabled:
1. passing null as an argument into a legacy method does not trigger a warning
2. inside a legacy method dereferencing an argument does not trigger a warning
3. inside a legacy method returning null does not trigger a warning
4. dereferencing the result of a legacy method does not trigger a warning
Without these rules, projects gradually migrating to null annotations would be spammed with myriads of mostly useless warnings. Do you agree with these assumptions? If so, we of course need a means to distinguish legacy code from "new" annotated code. For this I am (and have always been) proposing to use the scoped annotation defaults (per project/package/type), and let everything outside those scopes (i.e., regions where no default applies) be treated as legacy code. I acknowledge that the academic view would rather not accept any default-less (true legacy) code. That POV would not agree on 2. & 4. above, because legacy code would use @NullableByDefault semantics. Therefore, it is important for me to know what level of support for legacy code (mixed with annotated code) we actually want. Please let me know whether you agree with my assumptions (or ask if I didn't explain well).
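The four rules can be sketched as follows, with Legacy standing in for unannotated code to which no default applies and the caller standing in for code under a non-null default. The comments mark where each rule would suppress a warning; this is an illustration of the proposal, not anything checked by javac.

```java
// Sketch of the four legacy-interop rules: no warnings at the boundary
// between annotated code and code that carries no nullity information.
public class LegacySketch {

    static class Legacy {
        // Unannotated parameter and return: no nullity specification applies.
        static String id(String s) {
            if (s == null) return null;  // rule 3: returning null, no warning
            return s.trim();             // rule 2: dereferencing s, no warning
        }
    }

    public static void main(String[] args) {
        Legacy.id(null);                 // rule 1: passing null, no warning
        String r = Legacy.id(" hi ");
        System.out.println(r.length()); // rule 4: dereferencing the result, no warning
    }
}
```

Note that the call with null and the dereference of r are exactly the places a strict @NullableByDefault reading of legacy code would flag, which is why rules 2 and 4 are the contested ones.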
In starting to resume work on this issue I removed the support to emulate annotation types that was introduced in comment 42 item (2). Apparently, nobody really liked this feature :) The current status can be found here: http://dev.eclipse.org/viewcvs/viewvc.cgi/trunk/contrib/?root=TOOLS_OBJECTTEAMS at revision 1771.
After speaking to some more researchers at ECOOP (http://2011.ecoop.org) I'm now convinced that there is a broad consensus to consider null annotations as an extension to the type system rather than as contracts (as I did in my previous implementation). To match this consensus I changed all relevant error/warning messages, the new version can be found here: http://dev.eclipse.org/viewcvs/viewvc.cgi/trunk/contrib/org.eclipse.objectteams.jdt.nullity/src/org/eclipse/objectteams/internal/jdt/nullity/problem_messages.properties?view=markup&revision=1774&root=TOOLS_OBJECTTEAMS Previous version speaking of "null contracts" was: http://dev.eclipse.org/viewcvs/viewvc.cgi/trunk/contrib/org.eclipse.objectteams.jdt.nullity/src/org/eclipse/objectteams/internal/jdt/nullity/problem_messages.properties?view=markup&revision=1772&root=TOOLS_OBJECTTEAMS To see these messages in action you might want to look at the tests: http://dev.eclipse.org/viewcvs/viewvc.cgi/trunk/contrib/org.eclipse.objectteams.jdt.nullity.tests/src/org/eclipse/objectteams/jdt/nullity/tests/NullAnnotationTest.java?view=markup&revision=1773&root=TOOLS_OBJECTTEAMS While the core semantics remain unchanged the new scheme should be better suited to extend checking also to fields and finally towards a full sound checking of all null issues. I still maintain a difference against academic approaches by supporting legacy code where (some) types have no null specification (i.e., neither nullable nor nonnull). As mentioned before this is achieved by allowing code regions where no default annotation is applied. Within these regions unannotated types have the standard (legacy) semantics of Java. OTOH, once a default annotation is set for any element (project, package, type) all type references within that element have either nullable or nonnull semantics. 
Contrary to my previous feelings, I now think that these semantics can be well communicated also using the null-annotations-as-type-system-extension approach, so we actually don't lose anything by dropping the "null-contract" scheme.
One more issue awaiting agreement is whether or not inherited null annotations have to be repeated in an overriding method. (See comment 98 and onward). If null annotations are seen as an extension to the type system then I agree that mentioning a type without the annotation looks unnatural, i.e., null annotations should be seen as part of any type reference, and if no annotation is explicitly stated only the configured default value (@NullableByDefault or @NonNullByDefault) should apply. I have updated the implementation:
- no longer implicitly copy null annotations along inheritance
- report when an overriding method fails to repeat a null annotation from the overridden method.
Bug 353472 (for completion) and a corresponding quickfix should make up for the more verbose code style.
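A sketch of the newly adopted rule, with a stand-in annotation type; the comment marks what the updated implementation would report, not anything javac enforces.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Sketch: under the updated rule, an override restates inherited null
// annotations explicitly rather than inheriting them silently.
public class RepeatSketch {

    @Retention(RetentionPolicy.CLASS) @Target(ElementType.METHOD)
    @interface NonNull {}

    static class Base {
        @NonNull Object value() { return "base"; }
    }

    static class Sub extends Base {
        // Annotation repeated explicitly; omitting @NonNull here would now
        // be reported, with a quickfix offered to insert it.
        @Override @NonNull Object value() { return "sub"; }
    }

    public static void main(String[] args) {
        System.out.println(new Sub().value());
    }
}
```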
I further unified error messages, see http://dev.eclipse.org/viewcvs/viewvc.cgi/trunk/contrib/org.eclipse.objectteams.jdt.nullity/src/org/eclipse/objectteams/internal/jdt/nullity/problem_messages.properties?view=markup&revision=1820&root=TOOLS_OBJECTTEAMS Corresponding tests using the updated messages are http://dev.eclipse.org/viewcvs/viewvc.cgi/trunk/contrib/org.eclipse.objectteams.jdt.nullity.tests/src/org/eclipse/objectteams/jdt/nullity/tests/NullAnnotationTest.java?view=markup&revision=1819&root=TOOLS_OBJECTTEAMS
(In reply to comment #108) I haven't played with the implementation yet, but these rules look good to me. The handling of legacy code is similar to what we already did for dealing with 1.4 code in JavaCore#COMPILER_PB_UNAVOIDABLE_GENERIC_TYPE_PROBLEMS (don't produce warnings when interfacing with legacy code). > If so, we need of course a means to distinguish legacy code from "new" > annotated code. For this I am (have always been) proposing to use the > scoped annotation defaults (per project/package/type), and let everything > outside those scopes (i.e., regions where no default applies) be treated as > legacy code. I also agree with this. For projects, we have to use a compiler option, while packages and types can use the defaulting annotations. Legacy code will always be part of the game, as long as our nullity annotations are not part of the JLS.
================== UPDATES: ===================
I have updated the WIKI page [1] to match the current state of discussions. The explanation using the concept of "contracts" has been moved to a sub-page, the primary concept now being an extension of the type system.
Next I have uploaded a new snapshot version of the IMPLEMENTATION [2] that matches the description in the wiki. The implementation should work with any of the 3.7 release, the current build towards 3.7.1, or the brand new 3.8 M1 (I guess even 4.2 M1 :). This should mean you can also use null annotations with Java 7.
Please let me know if anything in the wiki is unclear, and also if you see issues with the implementation. I hope that wiki & impl. together serve as a basis to come to a decision about a first release into CVS (err, Git) within one of the next milestone cycles?
[1] http://wiki.eclipse.org/JDT_Core/Null_Analysis
[2] http://download.eclipse.org/objectteams/updates/contrib (instructions are in the wiki)
PS: If you want to see a demonstration of the tool, why not drop a comment at http://eclipsecon.org/europe2011/sessions/bye-bye-npe :)
After discussions at EclipseCon Europe we're planning to release this for 3.8M4. At that stage this will include:
- support for @NonNull and @Nullable (names are configurable), applicable to method parameters & return and to locals
- support for @NonNullByDefault(boolean), see bug 331647 comment 15
- we may already include basic handling of annotations for fields, subject to coordination with bug 247564; we may, however, have to move this item to M5.
Bug 331651 will be next on the agenda, but some more discussion is needed before we can commit to a strategy there.
Some more details:
- the warning token "nullcontract" will be dropped; @SuppressWarnings("null") will cover all new diagnostics, too.
- annotations will be @Documented to show in the Javadoc.
- hovers should show annotations, including those that implicitly apply by means of a default.
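The three supported locations (parameter, return, local) can be sketched as follows; the annotation types are declared locally here as stand-ins for the configurable ones, so the example is self-contained:

```java
// Local stand-ins for the configurable annotation types.
@interface NonNull {}
@interface Nullable {}

class Greeter {
    // Annotations on a parameter, the method return, and a local variable:
    // the three locations targeted for 3.8M4.
    @NonNull String greet(@NonNull String name, @Nullable String title) {
        @Nullable String prefix = title;  // local: may be null
        if (prefix == null) {             // null check satisfies the analysis
            return "Hello " + name;
        }
        return "Hello " + prefix + " " + name;
    }
}
```

Without the null check, dereferencing or concatenating `prefix` directly would be flagged as a potential null access; passing `null` for `name` would be flagged at the call site.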
Created attachment 206525 [details]
tests & implementation v8

I've transformed the implementation from OT/J back to plain old Java (sigh :) )
The patch currently causes a few regressions in JDT/Core tests, but it can already be used for
+ code reviewing, and
+ work on quickfixes (I will attach a matching version of my quickfixes to the corresponding bug shortly).

(In reply to comment #115)
> After discussions at EclipseCon Europe we're planning to release this
> for 3.8M4. At that stage this will include:
> - support for @NonNull and @Nullable (names are configurable),
> applicable to method parameters & return and to locals
DONE
> - support for @NonNullByDefault(boolean), see bug 331647 comment 15
DONE (incl. warning if null-annotation is redundant due to a default).
> Some more details:
> - the warning token "nullcontract" will be dropped,
> @SuppressWarnings("null") will cover all new diagnostics, too.
DONE

More details relevant for reviewing this patch:
-----------------------------------------------
Additions in ast.* should for the most part be self-explanatory, but:
+ binding arguments has to happen earlier now, since this is where annotations are resolved and these are needed to analyze calls to the given method (independent of how far the declaring class has been translated).
  -> this change wrongly triggered forwardReference() in EnumTest.test180(); fixed by a change in QualifiedNameReference (this connection may not be obvious).

Remaining regressions:
- 1 in ASTConverter15Test.test0230()
- 1 in CompletionTests.testCompletionVariableName32()
- 1 in CompilerInvocationTests (trivial to fix)
I'll look into those shortly.

Missing currently: new constants in JavaCore.
(In reply to comment #116)
> Remaining regressions
> - 1 in ASTConverter15Test.test0230()
At a closer look, this test now in fact produces the desired outcome. The test was introduced in bug 156352 to check an NPE fix. The full story behind this issue is, however, still unresolved: bug 164660. That group of bugs deals with the tension between performance and getting complete binding information.
This balance is changed by my latest patch, because now whenever a call to a source method is resolved, this triggers resolving the type's annotations (in order to find @NonNullByDefault). Note that this is only a partial solution to the matter of completeness. OTOH, I hope that this additional resolving doesn't noticeably impact performance. Since resolving is performed for a SourceTypeBinding, I don't think we will (again) run into issues of indirect dependencies, but let's keep an eye on it.
I will adjust the expected result from test0230() to the "right outcome".
Created attachment 206557 [details]
test & implementation v9

Several small corrections:

(In reply to comment #116)
> Remaining regressions
> - 1 in ASTConverter15Test.test0230()
Fix is included in this patch.
> - 1 in CompletionTests.testCompletionVariableName32()
This revealed a brittle assumption in Scanner.getLineEnd(int): this.lineEnds.length is only useful during Parser.getMethodBodies(). However, due to calling bindArguments() earlier, we hit the completion node *before* we even call getMethodBodies(). Fixed by using the more reliable field linePtr instead.

Additionally, GenericTypeTest.test1461() showed slightly different behavior. The old behavior has been restored by not initializing the package org.eclipse.jdt.annotation if annotation-based null analysis is not enabled.

> Missing currently: new constants in JavaCore.
Those constants that are needed for quickfixes are now included again.

This patch should be considered complete at this point - usable for review and UI work. Remaining polish includes:
- a few more constants in JavaCore
- fixing CompilerInvocationTests
- a few details regarding the warning for redundant/not-applicable annotations
- copyright update
Ayush, please review at line level, I'll review this at a high level.
Some very early review comments:

1) The error message strings in messages.properties which contain '@{0} {1}' should have proper escaping, i.e. ''@{0} {1}'' (two apostrophes).

2) The strings 918 and 919 are the same:
Missing null annotation: inherited method from {1} declares this parameter as @{2}
Do we really need to duplicate this string? We can just create a generic problem id:
/** @since 3.8 */
int ParameterLackingNullAnnotation = MethodRelated + 918;

3) Problem 913 = Buildpath problem: the type {0} which is configured as a null annotation type cannot be resolved
should instead be
Buildpath problem: the type {0}, which is configured as a null annotation type, cannot be resolved

4) 922 = The method {0} from class {1} cannot implement the corresponding method from type {2} due to incompatible nullness constraints
should be changed to
922 = The method {0} from class {1} cannot implement the corresponding method from superclass {2} due to incompatible nullness constraints

(No need to make any change now, you can do these while preparing the final patch.)
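For context on item 1: compiler messages are expanded via java.text.MessageFormat, where a single apostrophe starts a quoted literal and suppresses placeholder substitution, and only a doubled apostrophe yields a literal quote. A self-contained illustration (the message text itself is made up):

```java
import java.text.MessageFormat;

public class EscapeDemo {
    // Unescaped: the single quotes turn '@{0} {1}' into literal text,
    // so the placeholders are never substituted.
    static String unescaped() {
        return MessageFormat.format("Illegal redefinition as '@{0} {1}'", "NonNull", "Object");
    }

    // Escaped: '' produces one literal apostrophe and substitution works.
    static String escaped() {
        return MessageFormat.format("Illegal redefinition as ''@{0} {1}''", "NonNull", "Object");
    }

    public static void main(String[] args) {
        System.out.println(unescaped()); // Illegal redefinition as @{0} {1}
        System.out.println(escaped());   // Illegal redefinition as '@NonNull Object'
    }
}
```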
(In reply to comment #120)
> Some very early review comments:
> 1) The error message strings in messages.properties which contain '@{0} {1}'
> should have proper escaping i.e. ''@{0} {1}'' (two apostrophes)
Sure.

> 2) The string 918 and 919 are same
> Missing null annotation: inherited method from {1} declares this parameter as
> @{2}
> Do we really need to duplicate this string? We can just create a generic
> problem id
> /** @since 3.8 */
> int ParameterLackingNullAnnotation = MethodRelated + 918;
I agree this looks funny in messages.properties. In IProblem I have:
/** @since 3.8 */
int ParameterLackingNonNullAnnotation = MethodRelated + 918;
/** @since 3.8 */
int ParameterLackingNullableAnnotation = MethodRelated + 919;
This difference *could* be used by quickfixes. Once unified we can no longer distinguish them, because the strings @Nullable vs. @NonNull (substitutions for {1}) are not constants, but configurable.

> 3) Problem 913 = Buildpath problem: the type {0} which is configured as a null
> annotation type cannot be resolved
> should instead be
> Buildpath problem: the type {0}, which is configured as a null annotation type,
> cannot be resolved
OK

> 4) 922 = The method {0} from class {1} cannot implement the corresponding
> method from type {2} due to incompatible nullness constraints
> should be changed to
> 922 = The method {0} from class {1} cannot implement the corresponding method
> from superclass {2} due to incompatible nullness constraints
Watch out, {2} is an interface, not a superclass :)
For consistency with 916-919, should we just say
922 = The method {0} from {1} cannot implement the corresponding method from {2} due to incompatible nullness constraints
?
I agree with Stephan that we should use separate problem IDs for logically distinct problems. Could we change the messages to "Missing non-null annotation: ..." and "Missing nullable annotation: ..."? That would have the added benefit that they can be distinguished in the Problem view and if a user copy/pastes the message.
> This difference *could* be used by quickfixes. Once unified we can no longer > distinguish because the strings @Nullable vs. @NonNull (substitutions for {1}) > are not constants, but configurable. Yup. I agree with Markus' suggestion above. I was looking at "missing annotation" as an independent problem. Didn't think of quick fixes. :) > For consistency with 916-919, should we just say > 922 = The method {0} from {1} cannot implement the corresponding method > from {2} due to incompatible nullness constraints > ? Yeah, sounds good.
(In reply to comment #122) > I agree with Stephan that we should use separate problem IDs for logically > distinct problems. Could we change the messages to > "Missing non-null annotation: ..." and "Missing nullable annotation: ..."? > That would have the added benefit that they can be distinguished in the Problem > view and if a user copy/pastes the message. Certainly could, but note that the fully expanded message reads like Missing null annotation: inherited method from Lib declares this parameter as @Nullable i.e., the configured annotation name is already inserted at the end. Considering your proposal and a different configuration we could end up with: Missing nullable annotation: inherited method from Lib declares this parameter as @MaybeNull What do you think?
> Missing nullable annotation: inherited method from Lib declares this parameter > as @MaybeNull Yes, that was a side goal. The "Missing nullable annotation" really tells what's going on in a higher-level way, even if the concrete annotation can vary from project to project.
I got this NPE:
java.lang.NullPointerException
	at org.eclipse.jdt.core.dom.CompilationUnitResolver.resolve(CompilationUnitResolver.java:692)
when I put an invalid name
org.eclipse.jdt.core.compiler.annotation.nonnull=invalid
for the non-null annotation. Is this fixed in your latest effort to avoid eager resolution of bindings?
(In reply to comment #126)
> I got this NPE
> java.lang.NullPointerException
> at
> org.eclipse.jdt.core.dom.CompilationUnitResolver.resolve(CompilationUnitResolver.java:692)
> when I put an invalid name
> org.eclipse.jdt.core.compiler.annotation.nonnull=invalid
> for the non null annotation. Is this fixed in your latest effort to avoid eager
> resolution of bindings?
No, this one is new. Good catch. I'll add another test class for checking corner cases via DOM. Actually, for CompilationUnitProblemFinder I already had a few tests (I just forgot to migrate them into the patch). I'll post both tests later today.
How did you set the option to "invalid", by editing the .settings file? When doing so programmatically, you should see:
Cannot use the unqualified name 'invalid' as an annotation name for null specification
(In reply to comment #127)
> How did you set the option to "invalid", by editing the .settings file?
> When doing so programmatically, you should see
> Cannot use the unqualified name 'invalid' as an annotation name for null
> specification
Yeah, I did it both ways. :)
(In reply to comment #126)
> I got this NPE
> java.lang.NullPointerException
> at
> org.eclipse.jdt.core.dom.CompilationUnitResolver.resolve(CompilationUnitResolver.java:692)
> when I put an invalid name
> org.eclipse.jdt.core.compiler.annotation.nonnull=invalid
> for the non null annotation. Is this fixed in your latest effort to avoid eager
> resolution of bindings?
Actually, the problem is only vaguely connected to null annotations. I filed bug 363858 to track this NPE.
These are the improvements I have in my local repo, to be included in the next patch:
- fixed all intermediate regressions
- added missing constants in JavaCore
- new error for null-annotation on primitive type (incl. void): The nullness annotation @{0} is not applicable for the primitive type {1} (see also bug 331647 comment 17)
- copyright update
- changed error messages according to comment 120 (items 1+3), comment 122, comment 123
- fix for bug 363858 (reported in comment 126) plus more tests
- split bindArguments() into two phases to enable early resolving of annotations while avoiding unnecessary lookup at that early point.
Let me know whether you want to see a full new patch or an incremental patch on top of the previous one.
(In reply to comment #130) > Let me know whether you want to see a full new patch or an incremental > patch on top of the previous one. I am trying to clear my M4 tagged defects this week, so that I can devote a good chunk of my time next week to reviewing this implementation. I would personally prefer a full patch while starting to review.
(In reply to comment #130) > Let me know whether you want to see a full new patch or an incremental > patch on top of the previous one. The git way to do this would be to release your work into a new branch and once everything works as expected in that branch, do a merge into the master branch. This makes it much easier to handle fixes and improvements vs patches.
I noticed that we create a package binding in the lookup environment based on the name of the null annotations specified in the options, which may have side effects. So, consider this concocted case:

package p;

import nullAnn.*; // 1

public class Test {
	/**
	 * @param args
	 */
	void foo(@nullAnn.Nullable Object o) { // 2
		o.toString();
	}
}

And the .prefs file has the following entry:
org.eclipse.jdt.core.compiler.annotation.nullable = nullAnn.Nullable

Let this entry be invalid, i.e. no package nullAnn exists and no type Nullable exists anywhere in the classpath. In that case, the compiler accepts the import statement, even though it is not right. Even the error at (2) above is "nullAnn.Nullable cannot be resolved to a type". Without the invalid entry in the .prefs file, everything works fine and the compiler rejects the import statement and at (2) says "nullAnn cannot be resolved to a type", with a helpful quick fix.
(In reply to comment #133)
> I noticed that we create a package binding in the lookup environment based on
> the name of the null annotations specified in the options, which may have side
> effects.
Good point. We should make a clear decision whether those configured annotation types are initialized eagerly or lazily (the current impl has an unhappy mix of both):
- eager: as soon as the LookupEnvironment is initialized, try to load all three configured annotation types and report immediately if s.t. cannot be found. In this case users who have only @NonNull & @Nullable but not @NonNullByDefault may need to specify the latter as "none" to prevent loading.
or
- lazy: I have a two-step solution in my local repo: when creating a PackageBinding, check if it matches the package from any of the configured annotation types and mark it as such (by storing the simple names of those annotation types). When loading a type from a marked package binding, check if it is one of the configured annotation types and set the corresponding typeId. (The two-step approach helps avoid a perf. penalty on every type lookup.)
I'd normally lean towards lazy, but in this case a misconfigured annotation type may actually go unnoticed if the type is never used in the code. Is that OK or bad?
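The two-step idea can be sketched outside the compiler with hypothetical stand-ins (class and field names here are illustrative only, not the actual org.eclipse.jdt.internal.compiler bindings):

```java
import java.util.Map;
import java.util.Set;

// Hypothetical stand-in for a package binding.
class PackageStub {
    final String name;
    // Step 1 (at package creation): simple names of configured null
    // annotations living in this package, or null if the package is unmarked.
    final Set<String> nullAnnotationSimpleNames;

    PackageStub(String name, Map<String, Set<String>> configured) {
        this.name = name;
        this.nullAnnotationSimpleNames = configured.get(name); // one map lookup per package
    }

    // Step 2 (at type loading): only marked packages pay the comparison.
    boolean isConfiguredNullAnnotation(String simpleTypeName) {
        return this.nullAnnotationSimpleNames != null
                && this.nullAnnotationSimpleNames.contains(simpleTypeName);
    }
}
```

Unmarked packages (the vast majority) answer step 2 with a single null check, which is the point of splitting the work between package creation and type loading.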
> I'd normally lean towards lazy, but in this case a misconfigured annotation
> type may actually go unnoticed if the type is never used in the code.
> Is that OK or bad?
Yes, the lazy one is much better. However, even this way, every time a package is resolved we'll compare it to the specified annotations to see if it's one with annotation types. This may also be expensive, no? Though I don't think that not detecting a bad annotation till it is used is really bad.
(In reply to comment #135)
> > I'd normally lean towards lazy, but in this case a misconfigured annotation
> > type may actually go unnoticed if the type is never used in the code.
> > Is that OK or bad?
>
> Yes, the lazy one is much better. However, even in this way every time a
> package is resolved we'll compare it to the specified annotations to see if its
> one with annotation types. This may also be expensive, no?
It's basically 3 name comparisons per package :) My point was just to avoid doing these checks on every *type*.

> Though I don't think
> that not detecting a bad annotation till it is used is really bad.
OK, I'll keep the lazy solution then.
BTW, now that I have a fix for bug 363858 (reported in comment 126), the lazy variant actually avoids that we enter that path in the first place :)
Created attachment 207270 [details]
test & implementation v12

This patch contains the improvements listed in comment 130 plus:
+ New error for null-annotations on primitive types
+ More lazily initialize null-annotation package(s) to fix the issue from comment 133
+ Improved reporting of configuration errors (see also bug 363858)
(In reply to comment #137) > Created attachment 207270 [details] > test & implementation v12 Stephan, there's an example.jar file here right? Can you attach it here separately?
Created attachment 207308 [details] example.jar used in a test (In reply to comment #138) > (In reply to comment #137) > > Created attachment 207270 [details] [details] > > test & implementation v12 > > Stephan, there's an example.jar file here right? Can you attach it here > separately? Sure, it sits in org.eclipse.jdt.core.tests.model/workspace/NullAnnotations/lib/example.jar
I'm having a bit of trouble understanding the resolution process for annotations. I would expect no changes in the current way annotations are resolved. Can't we just feed into the existing mechanism, and compare each annotation to the configured null annotations? If it matches, then set a flag and move on.

So let's start from the beginning:

1. A package is initialized as a null annotation package through org.eclipse.jdt.internal.compiler.lookup.PackageBinding.initNullAnnotationPackage(). What is the motivation for this? What if all my annotations come from different packages (not an ideal case, but we can never predict a user's idiosyncrasies)? Why is any treatment needed for the annotation package at all?

2. What is the need for an added createArgumentBindings() call inside org.eclipse.jdt.internal.compiler.lookup.SourceTypeBinding.resolveTypesFor(MethodBinding)? The usual way of resolving arguments and the annotations on each argument doesn't work?

3. IMHO, org.eclipse.jdt.internal.compiler.lookup.PackageBinding.setupNullAnnotationType(ReferenceBinding) should be moved to Annotation.resolveType, right after the typeBinding is obtained. Basically, can we first resolve the annotation like we would any other normal annotation, and then, after getting the binding, compare the fully qualified name to that given in the .prefs? If this matches, set a flag on the annotation. (Maybe this is already happening and I'm overlooking it. In that case let me know.)
(In reply to comment #140)
> 3. IMHO,
> org.eclipse.jdt.internal.compiler.lookup.PackageBinding.setupNullAnnotationType(ReferenceBinding)
> should be moved to Annotation.resolveType, right after the typeBinding is
> obtained.
Here's a suggestion: Every time we construct a package binding, compare its name to the lookup environment's nullAnnotationPackageNames. If it matches, set a flag on the package binding saying it contains some null annotation. Now, in Annotation.resolveType, when the typeBinding is obtained and its fPackage field points to a package containing a null annotation, compare the annotation's name to the names defined in the global options. If this matches, we have found a null annotation. Also, maintain a static field in Annotation to be set when all 3 null annotations (nonnull, nullable, nonnull by default) have been found. We stop looking once this is done.
(In reply to comment #140)
> I'm having a bit of trouble understanding the resolution process for
> annotations.
Yes, that's a tricky aspect of the implementation. One of the driving scenarios is this:

class B {
	void bar(A a) {
		a.foo(null);
	}
}
class A {
	void foo(@NonNull String arg) {}
}

We are resolving the call a.foo(null). This triggers "A".resolveTypesFor("foo") while A has not been fully resolved. This triggers resolving of "String", but without the patch this does not resolve the annotation (that only happens later during bindArguments()) -> foo(null) cannot be flagged. Only by resolving argument annotations from resolveTypesFor can we ensure that we have all information required for detecting null problems against foo(null).

> So lets start from the beginning:
>
> 1. A package is initialized as null annotation package through
> org.eclipse.jdt.internal.compiler.lookup.PackageBinding.initNullAnnotationPackage().
> What is the motivation for this? What if all my annotations come from different
> packages (not an ideal case but we can never predict a user's idiosyncrasies).
> Why is any treatment needed for the annotation package at all?
It's not strictly needed, but designed as an optimization so that most types do *not* need the additional checks whether they are one of the configured annotation types (assuming we see fewer packages than types :)

> 2. What is the need for an added createArgumentBindings() call inside
> org.eclipse.jdt.internal.compiler.lookup.SourceTypeBinding.resolveTypesFor(MethodBinding)
> ? The usual way of resolving arguments and the annotations on each argument
> doesn't work?
To add early resolving of argument annotations as mentioned above. Plus filling in information from the applicable default, if any.

> 3. IMHO,
> org.eclipse.jdt.internal.compiler.lookup.PackageBinding.setupNullAnnotationType(ReferenceBinding)
> should be moved to Annotation.resolveType, right after the typeBinding is
> obtained.
I'm not sure what you want to improve here? Fix wrong behaviour? Simplify code? Improve performance? Let me know if matters are still unclear.
(In reply to comment #142)
> (In reply to comment #140)
> > 3. IMHO,
> > org.eclipse.jdt.internal.compiler.lookup.PackageBinding.setupNullAnnotationType(ReferenceBinding)
> > should be moved to Annotation.resolveType, right after the typeBinding is
> > obtained.
>
> Here's a suggestion: Every time we construct a package binding, compare its
> name to the lookup environments' nullAnnotationPackageNames.
Done on the call chain
* PackageBinding.<init>
* PackageBinding.initNullAnnotationPackage()
* LookupEnvironment.getNullAnnotationNames(char[][])

> If it matches, set
> a flag on the package binding saying it contains some null annotation.
Done by storing the simple names returned from getNullAnnotationNames(..)

> Now, in Annotation.resolveType when the typeBinding is obtained and if its
> fPackage field points to a package containing an annotation, compare the
> annotation's name to the names defined in global options.
> If this matches, we found a null annotation.
Differences to the current implementation:
- location: Annotation vs. PackageBinding
- filtering: currently, setupNullAnnotationType inspects all types; it should only work for annotation types (could be fixed in either approach)
Does Annotation.resolveType have access to the global options? I tried keeping everything related to these options close to class LookupEnvironment. If you see benefit in doing this in Annotation instead, this should be possible.

> Also, maintain a static field in Annotation to be set
> when all 3 null annotations (nonnull, null, nonnull by default) have been
> found. We stop looking once this is done.
Good point. Equally, I can null out the added fields in PackageBinding once matched.
(In reply to comment #143)
> I'm not sure what you want to improve here?
> Fix wrong behaviour? Simplify code? Improve performance?
Basically we should not add extra fields to bindings unless there's no other option. Also, logically it doesn't seem right that each package should be bothered with storing 3 names.

> Does Annotation.resolveType have access to the global options?
> I tried keeping everything related to these options close to class
> LookupEnvironment. If you see benefit in doing this in Annotation instead
> this should be possible.
Yes, the scope parameter in resolveType(scope) has the lookupEnvironment via the environment() method. From this you can get the global options. The benefit in doing so is that the resulting implementation is easier to understand, and it only looks logical to me that the resolving of annotations should not be affected by null-annotation specific stuff. Once the annotation is resolved, we can check for it being a special null annotation. :)
A couple of more things:

1) In org.eclipse.jdt.internal.compiler.lookup.BinaryTypeBinding.scanMethodForNullAnnotation(IBinaryMethod, MethodBinding) and scanTypeForNullAnnotation(..), the constant org.eclipse.jdt.core.Signature.C_RESOLVED should be used instead of the character 'L'.

2) In PackageBinding.setupAnnotations(..), the id's for annotations are being overridden. Is that intentional or did you mean to OR them? Also, why is org.eclipse.jdt.core.compiler.annotation.nullablebydefault not handled?

3) If I apply the org.eclipse.jdt.core.compiler.annotation.nullablebydefault annotation to, say, a type, it has no effect on the return type. Consider:

AB.java
-------------%<--------------------------------
package p;

@org.eclipse.jdt.annotation.NullableByDefault
public class AB {
	// @org.eclipse.jdt.annotation.Nullable
	public Object foo(String arg) {
		return new Object();
	}
}

B.java
------------%<-------------------------------
package p;

class B {
	void bar(AB a) {
		Object abc = a.foo("");
		abc.toString(); // expecting potential null here
	}
}

In the above case I don't see a warning in B.java, unless I uncomment the annotation on the return type of AB.foo(). I also see no redundant annotation warning if I prefix the String parameter in foo with @org.eclipse.jdt.annotation.Nullable.
I couldn't find any usage of org.eclipse.jdt.internal.compiler.ast.AbstractMethodDeclaration.analyseArgumentNullity(FlowInfo). Is it a leftover of some refactoring?
> 2) In PackageBinding.setupAnnotations(..), the id's for annotations are being
> overriden. Is that intentional or did you mean to OR them?
A related point: I see that at a few places tag bits are used and at a few places the id field is used to recognize if an annotation is a null annotation. In org.eclipse.jdt.internal.compiler.lookup.SourceTypeBinding.createArgumentBindings(MethodBinding) and org.eclipse.jdt.internal.compiler.problem.ProblemReporter.illegalRedefinitionToNonNullParameter(Argument, ReferenceBinding, char[][]), the id is used where I think it'll be better to use tagBits.
>In >org.eclipse.jdt.internal.compiler.lookup.SourceTypeBinding.createArgumentBindings(MethodBinding), Sorry, ignore this. Here use of id is the right thing.
Easy answers first:

(In reply to comment #146)
> A couple of more things:
> 1) In
> org.eclipse.jdt.internal.compiler.lookup.BinaryTypeBinding.scanMethodForNullAnnotation(IBinaryMethod,
> MethodBinding) and scanTypeForNullAnnotation(..), the constant
> org.eclipse.jdt.core.Signature.C_RESOLVED should be used instead of character
> 'L'.
Even better: org.eclipse.jdt.internal.compiler.util.Util, since org.eclipse.jdt.core.Signature isn't available for ecj :)
While we're at it: should I make the same change also for other occurrences of 'L' within BinaryTypeBinding?

> 2) In PackageBinding.setupAnnotations(..), the id's for annotations are being
> overriden. Is that intentional or did you mean to OR them?
You mean setupNullAnnotationType? TypeIds are consecutive numbers, not bits that could be ORed. Are we looking at different things?

> Also, Why is org.eclipse.jdt.core.compiler.annotation.nullablebydefault not
> handled?
nullablebydefault has been dropped as of patch v8 (see bug 331647 comment 15). Did you still see traces of it anywhere?

> 3) If I use org.eclipse.jdt.core.compiler.annotation.nullablebydefault
> annotation to say, a type, it has no effect on the return type. Consider:
> AB.java
> -------------%<--------------------------------
> package p;
>
> @org.eclipse.jdt.annotation.NullableByDefault
> public class AB {
> 	// @org.eclipse.jdt.annotation.Nullable
> 	public Object foo (String arg) {
> 		return new Object();}
> }
>
> B.java
> ------------%<-------------------------------
> package p;
>
> class B { void bar (AB a) {
> 	Object abc = a.foo("");
> 	abc.toString(); // expecting potential null here
> 	}
> }
>
> In the above case I don't see a warning in B.java, unless i uncomment the
> annotation on return type of AB.foo(). I also see no redundant annotation
> warning if i prefix the String parameter in foo by
> @org.eclipse.jdt.annotation.Nullable
See above: @NullableByDefault is no longer interpreted.
(In reply to comment #147) > I couldn't find any usage of > org.eclipse.jdt.internal.compiler.ast.AbstractMethodDeclaration.analyseArgumentNullity(FlowInfo). > Is it a leftover of some refactoring? You hit the nail on the head. Dead code, to be removed.
As documentation for future readers:

(In reply to comment #148)
> > 2) In PackageBinding.setupAnnotations(..), the id's for annotations are being
> > overriden. Is that intentional or did you mean to OR them?
> A related point. I see at a few places tag bits are used and at few places the
> id field is used to recognize if the annotation is a null annotations. In
> org.eclipse.jdt.internal.compiler.lookup.SourceTypeBinding.createArgumentBindings(MethodBinding),
> org.eclipse.jdt.internal.compiler.problem.ProblemReporter.illegalRedefinitionToNonNullParameter(Argument,
> ReferenceBinding, char[][]), the id is used where I think it'll be better to
> use tagBits.
(In reply to comment #149)
> > In
> > org.eclipse.jdt.internal.compiler.lookup.SourceTypeBinding.createArgumentBindings(MethodBinding),
> Sorry, ignore this. Here use of id is the right thing.

When looking at an annotation type binding, use the id from TypeIds (int ID).
When looking at an annotated element, use the tagBits (long bitset - ORable).
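The distinction can be illustrated with a self-contained sketch; the constant names and values below are made up for the illustration (the real ones live in TypeIds and TagBits):

```java
// Hypothetical stand-ins for compiler constants; values are illustrative.
public class NullMarkerDemo {
    // TypeIds style: one consecutive int id identifies *the annotation type*.
    static final int T_ConfiguredAnnotationNonNull = 65;
    static final int T_ConfiguredAnnotationNullable = 66;

    // TagBits style: single bits that can be ORed onto *an annotated element*.
    static final long AnnotationNonNull = 1L << 56;
    static final long AnnotationNullable = 1L << 57;

    public static void main(String[] args) {
        // An annotation type binding has exactly one id: compare with ==.
        int annotationTypeId = T_ConfiguredAnnotationNonNull;
        boolean isNonNullType = annotationTypeId == T_ConfiguredAnnotationNonNull;

        // An annotated element accumulates bits: combine with |=, test with &.
        long tagBits = 0;
        tagBits |= AnnotationNonNull;
        boolean elementIsNonNull = (tagBits & AnnotationNonNull) != 0;
        boolean elementIsNullable = (tagBits & AnnotationNullable) != 0;

        System.out.println(isNonNullType + " " + elementIsNonNull + " " + elementIsNullable);
    }
}
```

Consecutive ids cannot be ORed together meaningfully (65 | 66 is not a valid id), which is exactly why element-level markers use disjoint bits instead.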
> While we're at it: should I make the same change also for other occurrences
> of 'L' within BinaryTypeBinding?
Yup, sure!

> nullablebydefault has been dropped as of patch v8 (see bug 331647 comment 15).
> Did you still see traces of it anywhere?
Ah! I see. It's still mentioned on the wiki page, so I missed the comment. No, I didn't see any traces anywhere, and that's why I was curious.
Created attachment 207478 [details]
proposed annotations bundle

Here's a proposed bundle for shipping the necessary annotations. It's a plain OSGi bundle for easiest consumption in OSGi projects, while non-OSGi projects can still consume it as a regular jar.
The contained doc/ folder will not be persisted; I just included it so reviewers can quickly look at the generated javadoc.
FYI: the deployed bundle will be roughly 6 kByte.
(In reply to comment #154)
> Created attachment 207478 [details]
> proposed annotations bundle
>
> Here's a proposed bundle for shiping the necessary annotations.
> It's a plain OSGi bundle for easiest consumption in OSGi projects,
> while non-OSGi projects can still consume as a regular jar.
>
> The contained doc/ folder will not be persisted, I just included it
> so reviewers can quickly look at the generated javadoc.
>
> FYI: the deployed bundle will be roughly 6 kByte.
Satyam, please review this part; Ayush will be the reviewer for the core implementation as before.

(In reply to comment #119)
> Ayush, please review at line level, I'll review this at a high level.
I am squeezed for time and don't expect to be able to review this. So Ayush, you will be the sole reviewer. In lieu, I'll sign up to verify the implementation when it comes up for testing. Jay, please assign this to me for verification.
Stephan, I know there has been extensive discussion between you and the UI team, and you even wrote some amount of prototype code to show the quickfixes, preferences and such that would be needed from the UI. Did we open a defect against the UI project to spell out/capture these requirements? If not, can you please do so? Thanks.
(In reply to comment #154)
> Created attachment 207478 [details]
> proposed annotations bundle
> [...]

Stephan, the new bundle looks good to me. I agree it is better as a bundle than a fragment. Please also make the changes to the feature too, so that Srikanth can commit it when done. I haven't seen the doc completely, but is it just the javadoc?
(In reply to comment #156) > Stephan, I know there has been extensive discussion you have had with > the UI team and even some amount of prototype code you wrote to show > the quickfixes, preferences and such that would be needed from UI. > > Did we open a defect against the UI project to spell out/capture these > requirements ? If not, can you please do so ? Thanks. The bug for quickfixes is: bug 337977, into which I dumped my prototypical implementation of quickfixes. I also just now filed bug 364815 for the preferences.
(In reply to comment #157) > I haven't seen the doc completely, but is it just the javadoc? Yes, just javadoc. I guess that any further documentation / howtos should go into the user guide / help, so I just filed bug 364820.
Created attachment 207565 [details]
required change to feature.xml

(In reply to comment #157)
> Stephan, The new bundle looks good to me. I agree it is better as a bundle
> than a fragment. Please also make the changes to the feature too so that
> Srikanth can commit it when done.

Sure, that's an easy one (attached). I made only one observation when exporting the feature from the IDE: the feature references a license feature which I could not find in any git repository. But I'm sure Kim has it somewhere for the build :)
(In reply to comment #153)
> > While we're at it: should I make the same change also for other occurrences
> > of 'L' within BinaryTypeBinding?
> Yup, sure!

There are actually quite a few places where constants could be used instead of literals. I created bug 364890 to avoid polluting this patch.
Created attachment 207567 [details]
test & implementation v14

This patch-of-the-week brings the following:
- leverage the previously dangling new method AbstractMethodDeclaration.analyseArgumentNullity to reduce code duplication (cf. comment 147)
- replace char literals with constants from class Util (comment 146 (1))
- improve initialization of configured annotation types:
  - fewer fields in PackageBinding
  - don't fiddle with simple vs. qualified names
  - avoid checks if we know a type is not an annotation type
  - drop special treatment of null annotations configured by a simple name
    -> error reporting no longer throws AbortCompilation
- revert further manipulation of how CAT_BUILDPATH errors are reported
  - this re-introduces small randomness in where errors are shown in the UI (Problems view vs. editor, against the project vs. against some CUs)
  - reduces risks of unintended side-effects
  - only keep changes that helped avoid NPEs and unreportable errors during development, just in case similar situations arise again (see CompilationUnitResolver & Compiler)
- cleanup: whitespace and a few minor comment issues.
Created attachment 207573 [details]
test & fix for a DOM AST issue

Once more reading through old notes, I found a patch to ASTConverter that had been lost during some process: in order to persist applications of a non-null default, I'm generating @NonNull annotations for all affected methods/arguments. Care must be taken that these are skipped by the ASTConverter. The patch contains a test and the fix (on top of patch v14).
> In order to persist applications of a non-null default, I'm generating
> @NonNull annotations for all affected methods/arguments.
> Care must be taken that these are skipped by the ASTConverter.

I had actually missed this part earlier. I didn't understand the motivation behind propagating the defaults to every method and parameter, right from the package or the type default. I can see that org.eclipse.jdt.internal.compiler.lookup.SourceTypeBinding.findDefaultNullness(MethodBinding, LookupEnvironment) finds the binding of the default annotation and propagates it to each method via org.eclipse.jdt.internal.compiler.lookup.MethodBinding.fillInDefaultNonNullness(TypeBinding). And then these artificially added annotations also make it to the class file. Is there any need for this, and is it safe to have these extra annotations in the class file?

I was thinking of the following approach to propagate nullness info: read the default null info and set a bit, such as ExtraCompilerModifiers.AccDefaultNull, on the binding. Then propagate this bit to each method and parameter in the scope. This will also avoid storing a heavy type binding inside each SourceTypeBinding. Does this not suffice? (The handling of the @Deprecated annotation can be taken as a precedent here.)
(In reply to comment #164)
> [...]
> And then these artificially added annotations also make it to the class file.
> Is there any need for this

The main motivation is to facilitate working with class files: when we read a class file we don't want to repeat the same searching for defaults, and we don't even read any package-info.class, do we? So we want the annotations right at all the places where they are effective, with no difference whether explicit in source or via a default.

> I was thinking of the following approach to propagate nullness info: Read the
> default null info and set a bit, such as ExtraCompilerModifiers.AccDefaultNull
> on the binding.

We already have the tagBits for the annotations; they could do this job.

> Then propagate this bit to each method and parameter in the
> scope. This will also avoid storing a heavy type binding inside each
> SourceTypeBinding.

Why do you call this heavy? Is it the space required for this one reference? The annotation type bindings (max 3) are shared for the entire compilation. I think it's a normal tradeoff of storing one reference in every type vs. doing the lookup for the type binding when generating each single method and method argument. If you are concerned about the space required in SourceTypeBinding, I can probably hold those references in LookupEnvironment instead and do some efficient lookup based on the tagBit. Is this little space optimization needed?
Created attachment 207592 [details]
jar file to show a problem

(In reply to comment #165)
> The main motivation is to facilitate working with class files:
> when we read a class file we don't want to repeat the same searching for
> defaults and we don't even read any package-info.class, do we?
> So we want the annotations right at all the places where they are effective
> with no difference if explicit in source or via a default.

The thing is, we already propagate the @Deprecated annotation everywhere, even in .class files, and this is done without explicitly creating a @Deprecated annotation on every type and method. I'm attaching a jar file here which should be used along with the following example (B.java) to see that @Deprecated in package-info.java is sufficient to issue the "type is deprecated" or "method is deprecated" diagnostics. In the same way, we should be able to deal with @NonNull and @Nullable.

package p;
import binaryNull.ContainingInner;
class B {
    void bar () {
        // see deprecated warning below (good)
        ContainingInner.justCall();
        ContainingInner in = new ContainingInner(null);
        ContainingInner.Inner inn = in.new Inner(null); // no warn (bad)
    }
}

Another major issue: in the above example, there is no warning at the inner class instantiation statement. I thought about this case over the weekend - this happens because of the synthetic argument created for the inner class constructor, because of which MethodInfoWithParameterAnnotations.parameterAnnotations has the annotation corresponding to the first parameter at index 1 instead of 0. Therefore org.eclipse.jdt.internal.compiler.lookup.BinaryTypeBinding.scanMethodForNullAnnotation(IBinaryMethod, MethodBinding) is not able to retrieve any annotation. (If this isn't trivial to fix and yet won't require a drastic change in the resolution process, you can fix this in M5.) I was expecting the inner class case to work fine with sources instead of .class files.
But even when I used ContainingInner2.java (below), I could see the same problem:

public class ContainingInner2 {
    public static void justCall() { }
    public ContainingInner2 (@org.eclipse.jdt.annotation.NonNull Object o) { }
    public class Inner {
        public Inner (@org.eclipse.jdt.annotation.NonNull Object o) { }
    }
}

> Why do you call this heavy?

I meant heavy as compared to just storing a bit.

> Is this little space optimization needed?

No, not really. I was more concerned with whether we need the extra space in the first place.

Another thing - the reconciler does not report null-annotation-based errors. I.e. suppose in class B above I use
    ContainingInner2 in = new ContainingInner2(new Object());
and then, without Ctrl+S, I change it to
    ContainingInner2 in = new ContainingInner2(null);
I don't see any error until I save. While trying to debug, I saw that the methodBinding.parameterNonNullness array is null for the constructor. Looks like something's there.
(In reply to comment #166)
> Another thing - the reconciler does not report null annotation based errors.
> [...]
> I don't see any error until I save. While trying to debug I saw that
> methodBinding.parameterNonNullness array is null for the constructor. Looks
> like something's there.

This could indicate that the fix for bug 353474 is still incomplete. I'll take a look.
(In reply to comment #166)
> Another thing - the reconciler does not report null annotation based errors.

Phew, I found a corollary problem: a case where the reconciler reports the error but the compiler fails to do so! As a result you see nothing in the Problems view, but a red cross inside the editor. This happens with the following version of ContainingInner2.java (annotation on the inner type):

public class ContainingInner2 {
    public static void justCall() { }
    public ContainingInner2 (Object o) { }
    @org.eclipse.jdt.annotation.NonNullByDefault
    public class Inner {
        public Inner (Object o) { }
    }
}

(call Inner() from B.java from comment 166)

PS: With package default annotations, everything works smoothly.
(In reply to comment #168)
> Phew, I found a corollary problem: a case where the reconciler reports the
> error but compiler fails to do so! As a result you see nothing in problems
> view, but a red cross inside the editor.

Trying it again, I can only reproduce this on Project > Clean. Any subsequent change to B.java and the error vanishes. This brings me to yet another interesting finding. The second case reported in comment 66 (annotation on inner class constructor parameter in a .java file): when I do a clean build, the error on the constructor call appears. Any change and it disappears until the next clean! :\
Created attachment 207604 [details]
fix for reconciler issue

Ah! Found the reconciler issue. This happened because of nulling out the annotation package names in the lookup environment in org.eclipse.jdt.internal.compiler.lookup.PackageBinding.checkIfNullAnnotationType(ReferenceBinding). So each time the compiler would go and mark these as null, and since the global options never change, the reconciler would look in vain. I think we should do away with this optimization and just play it safe... CharOperation.equals() is not a very expensive operation.

Stephan, you can now peacefully ignore the last part of comment 167 and comment 169. Comment 168 now reads "I don't see my error". :)
Created attachment 207635 [details]
fix for binary ctor of inner class

(In reply to comment #166)
> [...]
> Another major issue - in the above example, there is no warning at the inner
> class instantiation statement. [...] this happens because of the synthetic
> argument created for the inner class constructor, because of which
> MethodInfoWithParameterAnnotations.parameterAnnotations has the annotation
> corresponding to the first parameter at index 1 instead of 0. Therefore
> org.eclipse.jdt.internal.compiler.lookup.BinaryTypeBinding.scanMethodForNullAnnotation(IBinaryMethod,
> MethodBinding) is not able to retrieve any annotation.

Thanks! Yes, once again we tripped over mixing signatures with and without synthetic args. Fixed by a little parameter index arithmetic in scanMethodForNullAnnotation. As a helper I introduced a new method in the internal interface IBinaryMethod: getNumParameterAnnotations(); hope this is OK.

The patch contains three tests:
+ compile in one go (i.e., not using binary types): passes w/o the fix
+ compile separately (i.e., use a binary type): failed w/o the fix
+ variant using a generic constructor to challenge an alternative flow in BinaryTypeBinding.createMethod: this revealed an omission to propagate parameterNonNullness from an original MethodBinding to its ParameterizedGenericMethodBinding version. Fixed for several subclasses of MethodBinding.

I omitted:
+ ProblemMethodBinding: obviously irrelevant, right?
+ SyntheticMethodBinding: is probably relevant. TODO
+ PolymorphicMethodBinding: frankly, I couldn't think of a way to get parameter annotations into one of these.
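The index arithmetic mentioned above can be sketched as follows. This is a simplified stand-alone model, not the actual code in BinaryTypeBinding.scanMethodForNullAnnotation; method and parameter names here are hypothetical:

```java
public class ParamIndexDemo {
    // Given the number of annotated slots read from the class file and the
    // number of declared source parameters, compute the slot to consult for
    // source parameter i. For an inner-class constructor the class file
    // carries one extra leading synthetic argument (the enclosing instance),
    // so annotated slots exceed source parameters by one.
    static int annotationIndex(int sourceParamIndex,
                               int numParamAnnotations,
                               int numSourceParams) {
        int syntheticOffset = numParamAnnotations - numSourceParams;
        return sourceParamIndex + syntheticOffset;
    }

    public static void main(String[] args) {
        // Inner-class ctor: 2 annotated slots, 1 source param ->
        // the annotation for source param 0 sits at slot 1.
        System.out.println(annotationIndex(0, 2, 1)); // prints 1
        // Regular method: no synthetic args, indices line up.
        System.out.println(annotationIndex(0, 1, 1)); // prints 0
    }
}
```

Reading slot 0 unconditionally, as before the fix, finds the (absent) annotation of the synthetic argument instead of the real first parameter, which matches the "no warning" symptom described in comment 166.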
Created attachment 207637 [details]
fix for assumed root cause behind reconciler issue

(In reply to comment #166)
> [...]
> Another thing - the reconciler does not report null annotation based errors.
> [...]
> I don't see any error until I save. While trying to debug I saw that
> methodBinding.parameterNonNullness array is null for the constructor. Looks
> like something's there.

I only accidentally reproduced this, by forgetting to enable annotation-based null analysis in a new test project. This revealed that along one path (in scanMethodForNullAnnotation()) the check for enablement did not work (it was based on a very old assumption which no longer holds). I.e. the bug is not in the not-reporting by the reconciler, but in the *reporting* after Ctrl+S. The new patch avoids any reporting if the analysis is disabled.

Could it be that you forgot the same thing as I did? If so, we can avoid your workaround (not nulling out). I looked at this issue across several uses of one instance of LookupEnvironment, and I think everything is safe here, because reset() wipes out all packages and new packages will freshly be checked for being a null-annotation package (other than through reset() there's no reason a package is re-created after the null-out).
(In reply to comment #168)
> (In reply to comment #166)
> > Another thing - the reconciler does not report null annotation based errors.
>
> Phew, I found a corollary problem: a case where the reconciler reports the
> error but compiler fails to do so! As a result you see nothing in problems
> view, but a red cross inside the editor.

I think I saw this once with (part of) the patch from comment 172 disabled. In the debugger I saw a MethodInfo where we need a MethodInfoWithAnnotations, which would mean the default applied in Inner didn't have an effect on the constructor. Unfortunately, I could never repeat this. It probably depends on a combination of a previous patch not yet applied and a specific state of class files in the workspace. Since I saw it while playing with partially reverted fixes, I assume this can no longer occur. Let me know if you still see this with the patch from comment 172.

(In reply to comment #169)
> trying it again I can only reproduce this on Project>Clean.

Tried that, but it didn't help for reproducing.

> [...] This brings me to yet another interesting finding. The second case
> reported in comment 66 (annotation on inner class constructor parameter in
> .java file) - When I do a clean build, the error on the constructor call
> appears. Any change and it disappears until the next clean! :\

That's probably just another instance of: annotations on a binary inner class ctor are not correctly processed, right? If so => fixed in comment 171.
Created attachment 207639 [details]
test & implementation v15

New full patch with recent incremental fixes included. From my understanding this resolves all issues raised so far, with two exceptions:
- should probably handle parameterNonNullness also in SyntheticMethodBinding, see comment 171
- want to try copying the approach used for @Deprecated. At a first look the constructions around AccDeprecatedImplicitly and isViewedAsDeprecated indeed look *very* similar to what we need here. I'm pleasantly surprised.

So, thanks Ayush, for pushing me to re-think the strategy!
(In reply to comment #174)
> Created attachment 207639 [details]
> test & implementation v15

Thanks Stephan, this patch solves the issues. Thanks also for adding regression tests! I found another case (local class) where we don't report an error:

class B {
    void bar () {
        class Local {
            void callMe(@org.eclipse.jdt.annotation.NonNull Object o){
            }
        }
        Local l = new Local();
        l.callMe(null); // no error
    }
}

This, however, does not happen if the annotation is picked up from a default applied to the local type or the enclosing type.
(In reply to comment #175)
> I found another case (local class) where we don't report an error:
> [...]
> This however, does not happen if annotation is picked up from a default
> applied to the local type or the enclosing type

"It works on my machine"TM

The following test passes in my workspace:

public void test_nonnull_parameter_013() {
    runNegativeTestWithLibs(
        new String[] {
            "B.java",
            "class B {\n" +
            "    void bar () {\n" +
            "        class Local {\n" +
            "            void callMe(@org.eclipse.jdt.annotation.NonNull Object o){\n" +
            "            }\n" +
            "        }\n" +
            "        Local l = new Local();\n" +
            "        l.callMe(null);\n" +
            "    } \n" +
            "}\n"
        },
        "----------\n" +
        "1. ERROR in B.java (at line 8)\n" +
        "    l.callMe(null);\n" +
        "             ^^^^\n" +
        "Type mismatch: required \'@NonNull Object\' but the provided value is null\n" +
        "----------\n");
}

Also the reconciler agrees. What am I doing differently?
> "It works on my machine"TM

Strange. When I do a clean build it works, but on a usual save it doesn't :(
(In reply to comment #177)
> > "It works on my machine"TM
>
> Strange. When I do a clean build it works. But on a usual save it doesn't :(

Ok, so this seemed similar to what I was experiencing yesterday. So I used my fix from comment 170, and then everything started working normally. Clean build, reconciler, everyone is happy. So I still think the comment 170 fix is needed.
Created attachment 207657 [details]
alternative strategy for internally encoding nullness defaults

OK, following the suggestion from comment 164 I investigated how I could avoid storing TypeBindings to represent nullness induced from a default. I could not directly copy the approach from @Deprecated, because we additionally have the notion of canceling a default. This would make evaluation from binary more tedious, and because we actually compute the full propagation during compile, I still prefer to explicitly spell out all induced annotations in the bytecode (good also for the JavaModel, I think).

The patch removes three things:
- storing of default and induced nullness using a TypeBinding
- creating an AST node for induced nullness
- filtering in ASTConverter

and replaces them with:
- an int for storing defaults in SourceTypeBinding and PackageBinding
- a bit for storing induced nullness in the modifiers (method and argument)
- letting ClassFile create the annotation from the modifier bit

This change simplifies the code in some places but adds some to ClassFile. At this point I suggest to leave it as a proof of concept that alternative representations are viable; sticking to the previous strategy is maybe safer for now. I might just keep it for possible future optimization, what do you think?

BTW: by adding a bit to ExtraCompilerModifiers we are actually hitting the ceiling of 32 bits in an int. That's also a price to pay. Maybe in this case TagBits suffice (although they're getting scarce, too).
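The bit-based encoding discussed above can be sketched as follows. The constant and method names here are hypothetical illustrations of the idea (a default recorded as one bit and propagated to members), not the actual ExtraCompilerModifiers/TagBits values:

```java
public class DefaultNullnessDemo {
    // Hypothetical bit standing in for an ExtraCompilerModifiers/TagBits flag
    // meaning "parameters default to @NonNull here".
    static final long DEFAULTS_TO_NONNULL = 1L << 50;

    // Propagate a type-level (or package-level) default onto a member that
    // carries no explicit null annotation of its own: just OR the bit in,
    // instead of storing a TypeBinding reference per type.
    static long applyDefault(long memberBits, long enclosingDefaults) {
        return memberBits | (enclosingDefaults & DEFAULTS_TO_NONNULL);
    }

    public static void main(String[] args) {
        long method = applyDefault(0L, DEFAULTS_TO_NONNULL);
        System.out.println((method & DEFAULTS_TO_NONNULL) != 0); // prints "true"
    }
}
```

This mirrors the @Deprecated precedent (AccDeprecatedImplicitly); the complication noted above is that a cancelled default cannot be expressed by a single one-way bit, which is part of why the explicit-annotation strategy was kept.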
(In reply to comment #178)
> Ok so this seemed similar to what I was experiencing yesterday. So, I used my
> fix from comment 170 and then everything started working normally. Clean
> build, reconciler, everyone is happy. So, I still think the comment 170 fix
> is needed.

It shouldn't be needed. If this patch is needed, it probably indicates another error. OTOH, I tried using a unit test and, in a runtime workbench, doing various save, close, clean, disable, enable... but found no way to reproduce. There's nothing I can do at this point. WORKSFORME. Which version exactly are you testing? Have you tried starting from a clean master and applying patch v15?
Created attachment 207659 [details]
a test project to ratify fix in comment 170

Finally I was able to see why I get this problem and you don't! Phew!! The attached project shows the problematic scenario (use the p.B class). When a type imports another type from another package which contains a package default annotation in package-info.java, this problem appears. This happens because the import is resolved, and along with it the annotation is resolved. Immediately, the package names in the global options are nulled. Now, when we come out of the import statement and start resolving the current class B, the package binding for the annotation package is not created again, and the global options remain null. So the null annotations in the current type are resolved as plain annotations!
(In reply to comment #181)
Btw, I expected only the @NonNullByDefault annotation package to be nulled, but even @NonNull's package is nulled, because the subsequent call to org.eclipse.jdt.internal.compiler.lookup.LookupEnvironment.getNullAnnotationBindingFromDefault(long, boolean) via org.eclipse.jdt.internal.compiler.ast.ASTNode.isTypeUseDeprecated(TypeBinding, Scope) tries to resolve @NonNull as well.
Created attachment 207672 [details]
test & fix for problem demonstrated in comment 181

(In reply to comment #181)
> Created attachment 207659 [details]
> a test project to ratify fix in comment 170
>
> Finally I was able to see why I get this problem and you don't! Phew!!

Lovely, now I see what you are talking about.

> [...] So the null annotations in the current type are resolved as plain
> annotations!

To be more specific: this occurs in certain scenarios where, the first time round, the annotation type is registered as an UnresolvedReferenceBinding. Here our investigations nicely converge: in patch v15 I fixed a similar issue, but that helped only for one flow; then, while working on comment 179, I came across the same issue from a different path and found a more general solution that also covers your case.

The story behind it: when we see the UnresolvedTypeBinding of an annotation type, we must not null out the field in the environment, because then, during resolve of this binding, the real resolved binding of the annotation type will not get the proper typeId set. These are actually minor things and your patch from comment 170 surely covers this, but the deeper analysis also gave deeper insight into *why* nulling out /sometimes/ caused problems. The new patch blends all this into a solution that keeps nulling out (eventually) without breaking the analysis. (The contained test_default_nullness_003b is unrelated.)

BTW: Do you have a comment on how to proceed with the findings in comment 179?
(In reply to comment #183)
> These are actually minor things and your patch from comment 170 surely
> covers this, but the deeper analysis gave also deeper insight into *why*
> /sometimes/ nulling out caused problems. The new patch blends all this
> into a solution that keeps nulling-out (eventually) without breaking the
> analysis.

Ok, this looks reasonable. Thanks for the fix. Still, I feel more comfortable without the nulling out of those fields. CharOperation.equals() isn't very expensive, so why do the optimization at all when it has shown brittle assumptions in the past? Shouldn't we just go without it?

> BTW:
> Do you have a comment how to proceed with the findings in comment 179?

I think we can open another bug for it and fix it in M5. It changes quite a few things and may have its own set of minor issues here and there, which we don't have much time to test now.
Created attachment 207677 [details] test & implementation v16 Consolidated patch. Deepak, please run UI tests with this patch.
Created attachment 207685 [details]
fix and test for NPE

Found yet another issue. If I put an invalid annotation on the implementation of a method expecting a null contract for the return type, I get an NPE. The problem was during error reporting; fixed in the above patch. Stephan, can you cross-check whether there are more such possible landmines somewhere else too?
(In reply to comment #186)
> Found yet another issue. If I put an invalid annotation on the implementation
> of a method expecting a null contract for the return type, I get an NPE. The
> problem was during error reporting. Fixed in the above patch. Stephan, can you
> cross-check if there are more such possible landmines somewhere else too?

The fix looks good, thanks. The obvious sibling is the method findAnnotationSourceStart() (which I should reuse rather than cloning that loop). I'll certainly keep an eye on further NPEs, but wouldn't it be great to have tool support for that? ;-P
I have run the performance tests and there are no regressions.
Another thing - the javadoc for COMPILER_PB_NULL_SPECIFICATION_VIOLATION is not correct. It says the option id is org.eclipse.jdt.core.compiler.problem.nullContractViolation. I spent quite some time wondering why this option has no effect even when I set it to ignore in the .prefs file. I found out later that the correct option is .compiler.problem.nullSpecViolation. So this needs to be corrected. Please also review the other options' javadocs to see if everything is ok.

Btw, is it possible that if these options are not turned on, we don't transfer the null annotation tag bits to the parameters?

Also, I think our null analysis is getting impacted in the wrong way. See for example the following (switch nullSpecViolation to ignore):

@org.eclipse.jdt.annotation.NonNull public Object foo(@org.eclipse.jdt.annotation.NonNull Object param) {
    param = null;     // ignored warning (good)
    Object p = param;
    if (p == null) {  // says p cannot be null (bad)
    }
    return null;      // ignored warning (good)
}

There's a bogus warning on p == null. Other cases like these would also be affected.
(In reply to comment #185) > Created attachment 207677 [details] [diff] > test & implementation v16 > > Consolidated patch. > Deepak, please run UI tests with this patch. All UI tests pass with the above patch.
(In reply to comment #189)
> Another thing - the javadoc for COMPILER_PB_NULL_SPECIFICATION_VIOLATION is
> not correct. It says the option id is
> org.eclipse.jdt.core.compiler.problem.nullContractViolation.

Sorry, that's a relic from the old days.

> I spent quite some time wondering why this option has no effect even when I
> set it to ignore in the .prefs file. I found out later that the correct
> option is .compiler.problem.nullSpecViolation.
> So this needs to be corrected. Please also review the other options' javadocs
> to see if everything is ok.

Will do so.

> Btw, is it possible that if these options are not turned on, we don't
> transfer the null annotation tag bits to the parameters?

I think it's fine to just use the master switch to decide how much resolving etc. we do.

> Also, I think our null analysis is getting impacted in the wrong way. [...]
> There's a bogus warning on p == null. Other cases like these would also be
> affected.

The warning you criticize is consistent with the specification: if param is @NonNull, then the warning is valid.

Conceptually I'd say the bug is in setting nullSpecViolation to ignore. Enabling the analysis and saying the diagnostics it produces shall be ignored sounds like nonsense to me. Would you see problems if we change nullSpecViolation to only allow warning/error? To me that seems to be the cleanest solution to the issue.
(In reply to comment #191)
> The warning you criticize is consistent with the specification:
> if param is @NonNull then the warning is valid.
>
> Conceptually I'd say, the bug is in setting nullSpecViolation to ignore.
> Enabling the analysis and saying the diagnostics it produces shall
> be ignored sounds like nonsense to me.

It may happen in some cases. E.g.: if there's some bug in the null annotation analysis and I get an annoying compile error, I may decide to change the option to warning/ignore until the bug is fixed. But then, even though the compiler allows an assignment to null, it continues treating the variable as non-null, and this will result in false positives. So I will have to turn off annotation-based analysis completely.

At least when the option is configured as 'warning' and the compiler allows assignment to null with just a warning, I should not get side-effects at other places as if that warning were an error and the assignment never took place. I think in org.eclipse.jdt.internal.compiler.ast.Statement.checkAssignmentAgainstNullAnnotation(BlockScope, FlowContext, LocalVariableBinding, int, Expression), the line nullStatus=FlowInfo.NON_NULL; should be made conditional on whether nullityMismatch(..) reported an error or not. Or even better: why enforce setting to non-null in any case? Let the assignment have its natural effect, whatever the configuration. If there's an error, the user will have to correct it, and the variable will be restored to some non-null value. Though I'm not sure whether other places would need some change too.
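The suggestion above can be sketched as a tiny stand-alone model. The names (statusAfterAssignment, the status constants) are hypothetical; the real code sits in Statement.checkAssignmentAgainstNullAnnotation and uses FlowInfo constants:

```java
public class AssignmentNullStatusDemo {
    // Hypothetical stand-ins for FlowInfo null-status constants.
    static final int NULL = 1;
    static final int NON_NULL = 2;

    // Sketch of the proposed behavior: flag the spec violation, but let the
    // assignment keep its natural null status instead of forcing NON_NULL,
    // so downstream flow analysis reflects what the code actually does.
    static int statusAfterAssignment(boolean declaredNonNull, int assignedStatus) {
        if (declaredNonNull && assignedStatus == NULL) {
            // here nullityMismatch(..) would be reported (warning or error,
            // per the nullSpecViolation configuration)
        }
        return assignedStatus; // not: NON_NULL
    }

    public static void main(String[] args) {
        // After `param = null` on a @NonNull param, a later `p == null`
        // check would no longer be flagged as always-false.
        System.out.println(statusAfterAssignment(true, NULL) == NULL); // prints "true"
    }
}
```

The counter-position in the following comments is that the declaration should win: forcing NON_NULL keeps the spec authoritative and yields exactly one error per bad assignment, which is why disallowing 'ignore' was preferred over this change.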
(In reply to comment #192)
> It may happen in some cases. E.g.: if there's some bug in the null annotation
> analysis and I get an annoying compile error, I may decide to change the
> option to warning/ignore until the bug is fixed. But then, even though the
> compiler allows an assignment to null, it continues treating the variable as
> non-null, and this will result in false positives. So I will have to turn off
> annotation-based analysis completely.

Primary option if the analysis should be buggy: suppress that specific warning, rather than ignoring all problems of that kind.

> At least when the option is configured as 'warning' and the compiler allows
> assignment to null with just a warning, I should not get side-effects at
> other places as if that warning were an error and the assignment never took
> place.

Isn't it a matter of what takes precedence? Which is the primary error, and what do we consider as secondary (reporting of which we want to avoid)? My point: *spec trumps accidental assignments*.
- specifying a variable as @NonNull applies to all its uses
- assigning null to such a variable is exactly one error

Consider this silly example:

int computeLength(@NonNull String s, int factor) {
    s = null;
    if (factor == 0) return s.length();
    if (factor == 1) return s.length();
    if (factor == 2) return s.length() * 2;
    if (factor == 3) return s.length() * 3;
}

How many errors? I'd say one. You seem to want five or four, right? Do we really want to tweak the implementation to report more errors here? Obviously my version can only be enforced if "s = null" is definitely flagged, hence my suggestion to disallow "ignore" for this.
Re allowing nullSpecViolation=ignore: This only makes sense if we have scenarios where annotation.nullanalysis=enabled still produces useful problems. I think the example in comment 189 shows that it's a bad idea to perform the annotation-based null analysis but not to show violations where they occur. So we could either disallow nullSpecViolation=ignore or say that it has the same effect as setting annotation.nullanalysis=disabled (i.e. effectively disable all analyses). I prefer the former. Leaving 'ignore' out is also no problem for the UI.
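For reference, the interplay of the two settings as a project-preferences fragment. The option names follow the shorthand used in this thread (annotation.nullanalysis, nullSpecViolation); treat the exact keys as an assumption rather than documented API.

```properties
# .settings/org.eclipse.jdt.core.prefs (sketch, keys assumed from this thread)

# Master switch for annotation-based null analysis
org.eclipse.jdt.core.compiler.annotation.nullanalysis=enabled

# Severity of null spec violations: error or warning; per this comment,
# 'ignore' would either be disallowed outright or behave exactly like
# nullanalysis=disabled
org.eclipse.jdt.core.compiler.problem.nullSpecViolation=error
```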
Markus' comment settles the debate. +1 for the fix, provided the final patch has the following: - my fix for the NPE - the JavaCore doc fix - the NullAnnotationModelTest fix (use ByteArrayInputStream) - in both NullAnnotationTest and NullAnnotationModelTest, the bundle name needs to be changed to "o.e.j.core.annotation" (remove 'null') - disallow the 'ignore' setting for nullSpecViolation (you may need to update the doc, re-master tests if applicable, etc.) - run all tests once before releasing.
Released for 3.8 M4 in commit http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?id=305123b230bcfd1f733969b7cd2c687b75857ff0 includes fixes also for - Bug 334457 - Bug 331647 - Bug 363858 Thanks to everybody who participated in the discussion to shape this solution, special thanks to Ayush for in-depth reviews! Stay tuned for further enhancements in M5.
(In reply to comment #196) > Thanks to everybody who participated in the discussion to shape this > solution, special thanks to Ayush for in-depth reviews! Thanks Stephan and Ayush. Can we have a smoke test done on the first nightly build that includes this feature and the new bundle? Please post status here, TIA.
(In reply to comment #197) Just checking - was this part done? >+ SyntheticMethodBinding: is probably relevant. TODO And is there a bug for this: >Created attachment 207657 [details] >alternative strategy for internally encoding nullness defaults
(In reply to comment #197) > Thanks Stephan and Ayush. Can we have a smoke test done the first nightly > build that includes this feature and the new bundle ? Please post status > here, TIA. Using build N20111130-2000: - the demo for null annotations works - compiling JDT/Core works JUnit tests are still running on the server.
Stephan, what are the changes in Scanner.java for? Can you explain why they are needed? Even without those changes, all tests pass. BTW, changes to Scanner.java almost always need to be mirrored in PublicScanner.java, and in this case I see only one file changed.
(In reply to comment #200) > Stephan, what are the changes in Scanner.java for ? Can you explain why that > is needed ? Even without those changes all tests pass. BTW, changes to > Scanner.java almost always need to be mirrored into PublicScanner.java and > in this case, I see only one file changed. As this change, even if only a minor one, constitutes a change deep in the entrails, we should justify this with a regression test. Otherwise, if this change is not integral to this feature we should consider backing out this particular change.
(In reply to comment #201) > As this change, even if only a minor one, constitutes a change deep in the > entrails, we should justify this with a regression test. Otherwise, if this > change is not integral to this feature we should consider backing out this > particular change. I have raised bug 365387 to log follow up issues and have moved this one there.
(In reply to comment #199) > (In reply to comment #197) > > Thanks Stephan and Ayush. Can we have a smoke test done the first nightly > > build that includes this feature and the new bundle ? Please post status > > here, TIA. > > Using build N20111130-2000: > - demo for null annotations works > - compiling JDT/Core works > junit tests are still running on the server. These tests finished clean for JDT/Core. Jay also worked with Kim to verify that the JDT compiler, post this check-in, can build the entire Eclipse SDK. Thanks Kim! Thanks Jay.
(In reply to comment #179) > At this point I suggest to leave it as a proof of concept that alternative > representations are viable, but sticking to the previous strategy is maybe > safer for now. > I might just keep it for possible future optimization, what do you think? Agreed. We need to satisfy ourselves that it is worth it - also watching out for how the discussion in bug 366063 evolves.
Verified for 3.8M5 using build I20120122-2000.