Bug 129581 - [compiler][null][recode] redundant check involving double protection
Status: VERIFIED FIXED
Alias: None
Product: JDT
Classification: Eclipse Project
Component: Core
Version: 3.2
Hardware: PC All
Importance: P5 normal
Target Milestone: 3.6 M5
Assignee: Ayushman Jain CLA
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2006-02-27 10:21 EST by Maxime Daniel CLA
Modified: 2010-01-25 14:18 EST
4 users

See Also:


Attachments

Description Maxime Daniel CLA 2006-02-27 10:21:04 EST
This is a follow-up of bug 128014. While the fix for 128014 is relatively easy, the following test case is not covered and would involve more radical changes:
public class X {
  void foo(Object o) {
    if (o != null) {
      if (o != null) {
        o.toString();
      }
      o.toString(); // should not complain here
    }
  }
}
Added NullReferenceTest test #335.
Comment 1 Maxime Daniel CLA 2006-03-23 09:52:39 EST
This one and a few others suggest that conditions should be subject to a three-way merge (upstream, whenTrue, whenFalse) instead of a two-way merge (whenTrue, whenFalse), at least as far as null analysis is concerned.
In the case considered above, the protection of the first if should not be damaged by the (empty) else of the second if. Yet there is no way to carry that information with the flowInfo, because the former is recursive while the latter is flat (no stack).
Moreover, when coming out of the second if, we must distinguish between tainted null (the variable has been tested against null, hence there are doubts about its null status) and potential null (at least one possible path assigns null to it). This is needed because the former would not clear the outermost protection, while the latter would. This would in turn call for an encoding change.
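The three-way merge idea could be sketched as follows. All names and the lattice are illustrative assumptions, not JDT's actual FlowInfo implementation: the point is only that keeping the upstream fact lets a mere re-test (tainting) pass through, while a real null assignment still clears the protection.

```java
// Hypothetical sketch of the three-way merge suggested above: the status of a
// variable after an if/else is computed from the upstream status as well as
// the whenTrue and whenFalse branch results, so an (empty) else branch cannot
// erase protection established upstream. Names are illustrative, not JDT's.
public class NullMergeSketch {

  // Simplified null-status lattice for a single variable.
  enum Status { UNKNOWN, NON_NULL, POTENTIALLY_NULL, TAINTED_NULL }

  // Two-way merge: only the branch results are considered, so an empty else
  // (which merely taints the variable) washes out upstream knowledge.
  static Status twoWayMerge(Status whenTrue, Status whenFalse) {
    return whenTrue == whenFalse ? whenTrue : Status.TAINTED_NULL;
  }

  // Three-way merge: the upstream fact is kept unless a branch actually
  // contradicts it, e.g. by assigning a possibly null value.
  static Status threeWayMerge(Status upstream, Status whenTrue, Status whenFalse) {
    if (whenTrue == Status.POTENTIALLY_NULL || whenFalse == Status.POTENTIALLY_NULL) {
      return Status.POTENTIALLY_NULL; // a real assignment clears protection
    }
    if (upstream == Status.NON_NULL) {
      return Status.NON_NULL; // a mere null test (tainting) does not
    }
    return twoWayMerge(whenTrue, whenFalse);
  }

  public static void main(String[] args) {
    // Inside the outer "if (o != null)", o is NON_NULL upstream. The inner
    // "if (o != null)" re-tests it: whenTrue NON_NULL, whenFalse TAINTED_NULL.
    System.out.println(twoWayMerge(Status.NON_NULL, Status.TAINTED_NULL));
    // -> TAINTED_NULL: the spurious complaint of this bug
    System.out.println(threeWayMerge(Status.NON_NULL, Status.NON_NULL, Status.TAINTED_NULL));
    // -> NON_NULL: the outer protection is preserved
  }
}
```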
Comment 2 Maxime Daniel CLA 2006-04-25 00:46:26 EDT
Post 3.2.
Comment 3 Maxime Daniel CLA 2007-06-04 02:10:43 EDT
See also bug 190623.
Comment 4 Patrice Chalin CLA 2007-06-04 10:51:01 EDT
(In reply to comment #0)
> void foo(Object o) {
>   if (o != null) {
>     if (o != null) {
>       o.toString();
>     }
>     o.toString(); // should not complain here
>   }
> }

Consider foo2() which is like foo() except that the first statement assigns a non-null value to o.

  void foo2(Object o) {
    o = new Object();
    if (o != null) {
      if (o != null) {
        o.toString();
      }
      o.toString(); // should not and DOES NOT complain here
    }
  }

As a result, the complaint goes away.

This difference in behavior is a result of the distinction being made in the JDT between variables being non-null (or null) because of an assignment vs. being non-null (or null) following a comparison.

Don't we want the compiler to complain about the second dereference in foo() if and only if it complains about it in foo2()? I.e. don't we want the behavior to be the same regardless of the manner in which we determine that o is non-null (or null)? If not, why would we want it to be different?
Comment 5 Maxime Daniel CLA 2007-06-05 02:00:55 EDT
The truth is: we don't want it to be different, it just happens to be.

The design of null references analysis in JDT is constrained by (at least) the two following items:
- we do not want to iterate more than once over any piece of code; the rationale for this is to get null analysis as a side effect of the standard analyseCode walk on the AST, at as low a cost as possible; this demands that nuances be found to characterize what can happen to a given variable upstream, downstream, and 'updownstream', if you will, when a piece of code gets analyzed before code that precedes it in the file, or when loops strike;
- we limit the number of bits we use to encode those nuances; this one is a bit arbitrary, but gets some legitimacy from the following truism: the fewer bits you need for each atom of information, the less total memory you'll consume; the number of bits we currently use is already not enough to cope with the situations reported as [recode] in Bugzilla; I won't change anything about them without the green light from Philippe...
The combination of those two constraints leads us to fine-tune behaviors for the sake of performance and to the detriment of other desirable attributes, like consistency. In the present case, what we sacrifice is the consistency of sequel messages, which is acceptable to a certain extent IMHO.
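The bit-budget constraint above could be illustrated with a hypothetical bit-packed encoding. The scheme below (two bits per variable packed into a long) is an assumption for illustration only, not JDT's actual FlowInfo encoding; it shows why distinguishing one more nuance, such as tainted null vs. potential null, would force an extra bit per variable and hence an encoding change.

```java
// Hypothetical bit-packed null-status encoding: two bits per variable packed
// into a long, so 32 variables fit in one word. Adding a new nuance would
// require a third bit per variable, i.e. an encoding change. Not JDT's scheme.
public class BitEncodingSketch {
  static final int BITS = 2;                // bits per variable
  static final long MASK = (1L << BITS) - 1;

  // Illustrative 2-bit codes: four states is all two bits can distinguish.
  static final int UNKNOWN = 0;
  static final int NON_NULL = 1;
  static final int NULL = 2;
  static final int POTENTIALLY_NULL = 3;

  // Store the status of variable #index into the packed word.
  static long set(long packed, int index, int status) {
    int shift = index * BITS;
    return (packed & ~(MASK << shift)) | ((long) status << shift);
  }

  // Read the status of variable #index back out.
  static int get(long packed, int index) {
    return (int) ((packed >>> (index * BITS)) & MASK);
  }

  public static void main(String[] args) {
    long flowInfo = 0L;
    flowInfo = set(flowInfo, 0, NON_NULL);
    flowInfo = set(flowInfo, 5, POTENTIALLY_NULL);
    System.out.println(get(flowInfo, 0)); // 1 (NON_NULL)
    System.out.println(get(flowInfo, 5)); // 3 (POTENTIALLY_NULL)
  }
}
```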

Having said that, assignment is stronger than protection in the current state of our implementation. I could not tell you which ones from memory, but a few test cases about first-level errors (as opposed to sequel errors) would break if we did otherwise (that is what motivated the introduction of the protection scenario in the first place). This is in part due to the fact that, whereas protection tells its enclosed scope something certain, it is also our mechanism to tell the outside about the variable being tainted. By contrast, an assignment stands valid for its scope (with more to say when the source of the assignment has an unknown null status).

Last but not least, thanks for your comments. Since consistency must always be considered a high-priority goal, I'll watch for it if/when we rework this item, including for sequel errors.
Comment 6 Maxime Daniel CLA 2007-06-19 08:06:35 EDT
Reopening as P5 (since RESOLVED LATER is deprecated).
Comment 7 Ayushman Jain CLA 2010-01-22 01:52:17 EST
Patch provided in bug 293917.
Activated regression test NullReferenceTest#test0335_if_else().
Comment 8 Olivier Thomann CLA 2010-01-25 14:18:16 EST
Verified for 3.6M5 using I20100125-0800