Bug 400874 - [1.8][compiler] Inference infrastructure should evolve to meet JLS8 18.x (Part G of JSR335 spec)
Summary: [1.8][compiler] Inference infrastructure should evolve to meet JLS8 18.x (Par...
Status: RESOLVED FIXED
Alias: None
Product: JDT
Classification: Eclipse Project
Component: Core
Version: 4.3
Hardware: PC Windows 7
Importance: P3 normal
Target Milestone: BETA J8
Assignee: Stephan Herrmann CLA
QA Contact:
URL:
Whiteboard:
Keywords:
Duplicates: 401783 414631 423070 423839 424038
Depends on: 400982 401170 403847 404042 418807 419048 423070
Blocks: 380188
 
Reported: 2013-02-14 22:46 EST by Srikanth Sankaran CLA
Modified: 2013-12-14 15:24 EST
CC List: 8 users

See Also:
srikanth_sankaran: review? (srikanth_sankaran)


Attachments
Patch that may have some useful APIs (53.63 KB, patch)
2013-11-30 22:12 EST, Srikanth Sankaran CLA
no flags

Description Srikanth Sankaran CLA 2013-02-14 22:46:35 EST
BETA_JAVA8:

A number of changes are being made to the type inference mechanism to
fix specification bugs, clarify issues, address poly expressions,
etc.

These are captured in section 18 of JLS8 - part G of the JSR 335 specification.
ECJ should evolve to cope with these changes.
Comment 1 Srikanth Sankaran CLA 2013-02-14 22:48:26 EST
Stephan, thanks for leading this effort - I am available to consult/discuss,
review code, or implement fixes for broken-down tasks.
Comment 2 Stephan Herrmann CLA 2013-02-15 04:52:36 EST
ack
Comment 3 Stephan Herrmann CLA 2013-02-16 18:20:01 EST
Here's a first assessment after spending some time in and around the sub-task bug 400982.

== CODE CHANGES: ==
This bug will require some heavy refactoring of our code, e.g.:

- during resolve we need to know whether any expression is a poly expression, which in turn requires knowledge about the context.
  -> most obvious solution would be to replicate the 'valueRequired' argument (from generateCode) into all 'resolveType' methods throughout the AST.
  Some time ago we had a similar request for analyseCode, but that refactoring was voted down due to the risk involved.

- resolving of a MessageSend needs to propagate knowledge about argument expressions into locations like Scope.findMethod and PGMB.computeCompatibleMethod, which are AST agnostic.
  -> easiest solution could be to extend the interface InvocationSite, which is a significant but manageable change.

With all the lifting of constraints from inner contexts to outer contexts, it's not even clear if a single tree-traversal is a good strategy for resolveType().
The informal discussion mentions two "rounds" of inference for finding the type of a method invocation.

== STATE OF THE SPEC: ==
While some parts of the spec make for a good read, I'm worried about the incompleteness of the spec:
The main algorithm consists of these phases:
- building initial constraints and bounds
- reduction
- incorporation
- resolution
Of these four phases, fewer than two are actually specified as of 0.6.1. The two essential phases "incorporation" and "resolution" are sketched by mere placeholders.
From my pov the spec of this algorithm is nowhere near ready to be implemented.
E.g., termination of the algorithm seems to be controlled by the notion of an inference variable being "resolved", which is given by a type equality bound α = T.
I could find no location in the spec, however, that would actually reduce any given constraints to the required type equality bound.

== CONCLUSION: ==
To control the risks of the required refactorings, we shouldn't waste a single day before starting to implement, in order to keep a large margin for testing and fixing.
OTOH, I'm very reluctant to spend much time in implementing an utterly unfinished spec.

I wish I had better news, but I don't.
Comment 4 Srikanth Sankaran CLA 2013-02-16 20:14:32 EST
(In reply to comment #3)
> Here's a first assessment after spending some time in and around the sub
> task bug 400982.

[...]

> To control the risks of the required refactorings, we wouldn't want to waste
> a single day before implementing, in order to keep a large margin for
> testing and fixing.
> OTOH, I'm very reluctant to spend much time in implementing an utterly
> unfinished spec.
> 
> I wish I had better news, but I don't.

Stephan, could you leave out the implementation concerns and share a note
just about the spec state? I'll forward it to the EG.
Comment 5 Srikanth Sankaran CLA 2013-02-16 23:45:05 EST
(In reply to comment #3)
> Here's a first assessment after spending some time in and around the sub
> task bug 400982.
> 
> == CODE CHANGES: ==
> This bug will require some heavy refactoring of our code, e.g.:
> 
> - during resolve we need to know whether any expression is a poly
> expression, which in turn requires knowledge about the context.

For casting context, I introduced Expression.isPolyExpressionInCastingContext().
Don't know if such a simplistic solution is enough for all other contexts.

> - resolving of a MessageSend needs to propagate knowledge about argument
> expressions into locations like Scope.findMethod and
> PGMB.computeCompatibleMethod, which are AST agnostic.
>   -> easiest solution could be to extend the interface InvocationSite, which
> is a significant but manageable change.

In my early stage prototype for overload resolution, I had a solution
where for argument expressions that are polyexpressions in invocation context,
instead of calling resolveType, I invoke partialResolveType, which returns
a subtype object of ReferenceBinding which stores the ASTnode type.

This should avoid having to tunnel the AST nodes to various places, but again, I
don't know if that will lend itself well to other situations.

Before any major refactoring/redesign: if you could post a short snippet
that shows the motivation/need along with a pointer to the relevant portion
of the spec, it can get the benefit of another pair of eyes.
Comment 6 Stephan Herrmann CLA 2013-02-17 06:18:38 EST
(In reply to comment #5)
> For casting context, I introduced
> Expression.isPolyExpressionInCastingContext().
> Don't know is such a simplistic solution is enough for all other contexs.

I'm afraid that won't cover this part of the spec:
  "Generic method invocation expressions, along with class instance creation expressions that use a diamond <>, are poly expressions when they appear in assignment or invocation contexts. This allows type argument inference to depend on context."

Maybe we could use ASTNode.InsideExpressionStatement to discriminate?
 
> In my early stage prototype for overload resolution, I had a solution
> where for argument expressions that are polyexpressions in invocation
> context,
> instead of calling resolveType, I invoke partialResolveType, which returns
> a subtype object of ReferenceBinding which stores the ASTnode type.

Do you have a draft of this that I could look at?

> Before, any major refactoring/redesign: if you could post a short snippet
> that shows the motivation/need along with a pointer to the relevant portion
> of the spec, it can get the benefit of another pair of eyes.

In attachment 227161 [details] (from bug 400982) you can see a sketch of a possible
addition to InvocationSite, requesting that each invocation site can answer
an InferenceContext18, which collects all constraints and bounds for that
particular context.

Maybe this together with the use of InsideExpressionStatement will already
get us going. 

Another option would be to revisit all locations calling 
Expression.setExpectedType and use these for a dual purpose:
- mark the target expression as needing a value (candidate for poly expr)
- pass down the inference context instead of just a fixed type

BTW: looking at setExpectedType I'm reminded of return statements which I
don't see mentioned in the spec (when it speaks of "assignment or invocation
contexts"). One informal discussion alludes to the inclusion of "return 
statements, etc." (intro of sect 5) but this is missing from sect 5.2.
Comment 7 Srikanth Sankaran CLA 2013-02-19 15:56:05 EST
(In reply to comment #3)
> Here's a first assessment after spending some time in and around the sub
> task bug 400982.
> 
> == CODE CHANGES: ==
> This bug will require some heavy refactoring of our code, e.g.:
> 
> - during resolve we need to know whether any expression is a poly
> expression, which in turn requires knowledge about the context.
>   -> most obvious solution would be to replicate the 'valueRequired'
> argument (from generateCode) into all 'resolveType' methods throughout the
> AST.

For https://bugs.eclipse.org/bugs/show_bug.cgi?id=399778, I released
http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=BETA_JAVA8&id=677bf3763b6e383c756f5ca8cbbd80c6dec30c1f which introduces
the abstraction ExpressionContext and the propagation of it.

See Expression.setExpressionContext and Expression.isPolyExpression

(the comment in MessageSend#isPolyExpression about grabbing the expected
type and not touching it is misplaced, as there is nothing to grab).

> - resolving of a MessageSend needs to propagate knowledge about argument
> expressions into locations like Scope.findMethod and
> PGMB.computeCompatibleMethod, which are AST agnostic.
>   -> easiest solution could be to extend the interface InvocationSite, which
> is a significant but manageable change.

My plan to tackle this in bug 400871 is through a new type of binding,
PolyTypeBinding, a subtype of ReferenceBinding that would be
returned by poly expressions (i.e. lambdas, reference expressions and
message sends) whenever resolveType is invoked on them in invocation context
and there is no target type - this binding would capture the resolution
scope and the AST node.

Any time PolyTypeBinding#isCompatibleWith is called, we have the functional
interface type against which compatibility is to be determined, as well as
the AST and the resolution scope.

> With all the lifting of constraints from inner contexts to outer contexts,
> it's not even clear if a single tree-traversal is a good strategy for
> resolveType().

I have some ideas here. Via an ASTVisitor, traverse the lambda and
gather just the declarations and return statements and build a lambda'
that can be analyzed over and over if required: the bit about 
"A block lambda body is value-compatible if it cannot complete normally
and ..." is worrisome and needs to be thought through though.

Part B discussion box says: "It's worth noting that the void/value compatible definition is not a strictly structural property: "can complete normally" depends on the values of constant expressions, and these may include names that reference constant variables." - This makes it a lot more challenging.
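To make that concrete, here is a small example of my own (not from the spec) where the value of a constant variable decides void/value compatibility:

  import java.util.concurrent.Callable;

  class ValueCompatible {
      static final boolean FOREVER = true; // a constant variable

      void demo() {
          // Because FOREVER is a constant expression with value true, the while
          // loop cannot complete normally, so the block body cannot fall off its
          // end: it is value-compatible and the lambda matches Callable<String>.
          // With a non-constant condition the compiler would demand a return
          // statement after the loop.
          Callable<String> c = () -> {
              while (FOREVER) {
                  return "done";
              }
          };
      }
  }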

But then this begs the question: why is it not strictly a structural property?
How can control flow affect type inference/compatibility at all?
This looks like over-specification to me.

Did you find something that would show why/how this improves/facilitates
overload resolution? I can't think of any reasons just now.
Comment 8 Srikanth Sankaran CLA 2013-02-24 08:20:39 EST
Running through the list of concerns from comment #3:

> Here's a first assessment after spending some time in and around the sub
> task bug 400982.
> 
> == CODE CHANGES: ==
> This bug will require some heavy refactoring of our code, e.g.:
> 
> - during resolve we need to know whether any expression is a poly
> expression, which in turn requires knowledge about the context.

I believe this is taken care of. 

> - resolving of a MessageSend needs to propagate knowledge about argument
> expressions into locations like Scope.findMethod and
> PGMB.computeCompatibleMethod, which are AST agnostic.
>   -> easiest solution could be to extend the interface InvocationSite, which
> is a significant but manageable change.

Ditto. PolyTypeBinding captures the PolyExpression. This binding can be
passed around as a TypeBinding. Most existing code shouldn't even have
to know of a new TypeBinding kind - isCompatibleWith() is overridden, so
for the most part things should automagically work.

> With all the lifting of constraints from inner contexts to outer contexts,
> it's not even clear if a single tree-traversal is a good strategy for
> resolveType().
> The informal discussion mentions two "rounds" of inference for finding the
> type of a method invocation.

What we need to do for part F's requirements is to gather "result expressions"
expressed as a function of concrete types and synthesized type variables for
elided types. Then for most specific analysis, we can "instantiate" these
type variables with types from the candidate methods. So there is no real
need for multiple tree traversal. The multiple "traversals" are over result
expressions which are computed and lifted out of the tree. See LambdaExpression.discoverShape in https://bugs.eclipse.org/bugs/show_bug.cgi?id=401610#c1 for early nuggets of this proposed solution.

Not sure yet if this will satisfy the requirements for type inference.

Overall, I would say the picture is much clearer and brighter.
Comment 9 Stephan Herrmann CLA 2013-02-24 09:30:31 EST
I'll happily address all technical difficulties once I have a somewhat
complete version of part G. Right now I'd say it's too early to judge
how one solution or the other will help for the future type inference.

Please let me know if any of this is blocking the efforts in bug 400871,
otherwise I'll just wait, OK?
Comment 10 Srikanth Sankaran CLA 2013-02-24 12:45:46 EST
(In reply to comment #9)

> Please let me know if any of this is blocking the efforts in bug 400871,
> otherwise I'll just wait, OK?

Well, many of the sections in part F start with something like "if the
candidate method is generic, then apply 18.x.y" - we can address these
as part of type inference work rather than as part of Chapter F - I am
fine with that.

There are two possible actions that we can take as we wait - (a) see if
there are any infrastructure elements we can put in place even as we wait
- This may or may not be possible based on how unfinished the chapter is;
(b) start putting together a set of disabled tests that we think would
offer good coverage for the new piece once it is ready.

I expect to finish Part F within the next 10 days. Before I switch
gears to code generation, I'll study part G so we will be able to
discuss more meaningfully than I am able to do now.
Comment 11 Stephan Herrmann CLA 2013-02-24 14:56:15 EST
(In reply to comment #10)
> There are two possible actions that we can take as we wait - (a) see if
> there are any infrastructure elements we can put in place even as we wait
> - This may or may not be possible based on how unfinished the chapter is

that's what I started in bug 400982, until I realized that I was beginning
to invent things that have not yet been specified, just in order to allow me
to actually run some of the new code. That's when I stopped work in that bug.

> (b) start putting together a set of disabled tests that we think would
> offer good coverage for the new piece once it is ready.

The better part of sect. 18 makes no statement about which programs will be
accepted or rejected by the new inference. At that level we would be
creating pure white-box tests that check, e.g., for the proper creation
of inference variables, constraint formulas etc. I don't think we want/need
that kind of tests.

OTOH, a number of existing tests will in fact trigger the new inference.

I'm thinking of ensuring a first level of test coverage with the help of 
JaCoCo, with 100% branch coverage of relevant code (excepting, e.g., debug
output) as the initial quality gate.
Comment 12 Srikanth Sankaran CLA 2013-02-25 06:20:46 EST
(In reply to comment #8)

> > With all the lifting of constraints from inner contexts to outer contexts,
> > it's not even clear if a single tree-traversal is a good strategy for
> > resolveType().
> > The informal discussion mentions two "rounds" of inference for finding the
> > type of a method invocation.

> What we need to do for part F's requirements is to gather "result
> expressions"
> expressed as a function of concrete types and synthesized type variables for
> elided types. Then for most specific analysis, we can "instantiate" these
> type variables with types from the candidate methods. So there is no real
> need for multiple tree traversal. The multiple "traversals" are over result
> expressions which are computed and lifted out of the tree. See

I am in a bind now - A lambda expression is compatible with a target type if,
among other things, every return expression's type is assignment compatible
with the descriptor's return type.

This requires resolving arguments (because they can be referenced in
return expressions), local declarations (because ... same reason),
processing (not necessarily resolving) blocks (because they introduce
scopes and the meaning of names used in return expressions could change
with scopes) and resolving return expressions (because we need their
type to establish assignment compatibility).
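A tiny example (mine) of why all of that resolution is needed before compatibility can be judged:

  import java.util.function.Function;

  class ResultExpressions {
      void demo() {
          // To check compatibility with Function<String, Integer> we must resolve
          // the parameter 's', the local declaration 'offset', and finally the
          // return expression, whose type must be assignment compatible with the
          // descriptor's return type Integer.
          Function<String, Integer> f = s -> {
              int offset = 1;
              return s.length() + offset;
          };
      }
  }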

I prototyped a solution based on ASTVisitor that would minimally process
the lambda nodes and extract result expressions. It works beautifully,
but then such selective ahead of time resolution leaves the concerned
AST nodes "dirty" and leads to sporadic misbehavior at a later time.

Either we need to view this problem of dirty bits in the AST nodes as a
series of bugs and fix them (not at all a happy picture) or 

(a) Come up with an ASTReplicator class that can clone a subtree. Looking at
how many node types would be involved (a lambda can return a lambda, and a
lambda can have a block body, pulling in really all node types in the worst
case) - selective replication is possible, but satisfying ourselves of its
correctness is going to be a nightmare.

(b) check out how the shallow copy model of Object.clone would work in
this scenario. Not very promising.
Comment 13 Jesper Moller CLA 2013-02-25 07:13:41 EST
(In reply to comment #12)

> Either we need to view this problem of dirty bits in the AST nodes as a
> series of bugs and fix them (not at all a happy picture) or 
> 
> (a)Come up with a ASTReplicator class that can clone a subtree. Looking at
> how many node types would be involved (lambda can return a lambda and
> lambda can have a block body pulling in really all node types in the worst
> case) - Selective replication is possible, but satisfying ourselves of
> correctness is going to be a nightmare.
> 
> (b) check out how the shallow copy model of Object.clone would work in
> this scenario. Not very promising.

or (c) introduce a ASTNode.resetResolve() which handles resetting all the bits set by the individual resolve. Wouldn't that be less work than cloning?
Comment 14 Srikanth Sankaran CLA 2013-02-25 08:37:02 EST
(In reply to comment #13)

> or (c) introduce a ASTNode.resetResolve() which handles resetting all the
> bits set by the individual resolve. Wouldn't that be less work than cloning?

I did consider that option ("unresolve") - cloning is mindless boilerplate
code, while figuring out which bits in the AST actually got dirty is tough work.
The bits may not be modified directly by resolve but by some
method called by it. Alternatively, knowing the initial state of each
of the AST nodes is also complex, as return statements can be arbitrarily
complex, e.g.:

    return () -> {
                   // block with locals, various control flow statements,
                   // local classes, try-catch-finally blocks, more lambda etc.
                 }

At the moment, I am leaning towards an ASTReplicator facility.
Comment 15 Srikanth Sankaran CLA 2013-02-25 09:13:30 EST
(In reply to comment #14)

>     return () -> {
>                    // block with locals, various control flow statements,
>                    // local classes, try-catch-finally blocks, more lambda
> etc.
>                  }

(Not that we expect such code in reality, but compilers must be prepared
for any possible input)
Comment 16 Stephan Herrmann CLA 2013-02-25 09:47:50 EST
Are you discussing details of Part G or Part F here?
Comment 17 Srikanth Sankaran CLA 2013-02-25 12:26:29 EST
Here is another idea: We could serialize the LambdaExpression node
by calling toString on it and then reparse the string into LambdaExpression' :)

This solution can also be mechanically verified by running it through our
entire test suite and making sure that Serialized(LambdaExpression) == Serialized(LambdaExpression') (ignoring white space differences). 

Yes Stephan, we are discussing part F here, your observation in comment#3
started it :) Further discussion in the right bug.
Comment 18 Srikanth Sankaran CLA 2013-02-25 12:39:40 EST
(In reply to comment #17)
> Here is another idea: We could serialize the LambdaExpression node
> by calling toString on it and then reparse the string into LambdaExpression'
> :)

That is dumb, we don't even have to serialize it. If we know the start and
end positions of the lambda expression, we can simply reset the scanner to
those points and reparse.

> Yes Stephan, we are discussing part F here, your observation in comment#3
> started it :) Further discussion in the right bug.

Further discussions really in the other place :)
Comment 19 Stephan Herrmann CLA 2013-02-25 18:29:41 EST
While wondering how many modes of resolving we'll need, I anticipate these
requirements from Part G:

A) standalone expressions can basically be resolved old-style.

B) poly expressions are resolved using the new type inference, which applies
   a reduced form of constraint solving

(B) inherently breaks the single-traversal rule of old-style resolving.
This approach rather works on sets of constraints and bounds over so-called
inference variables. I envision that this will be integrated with the
existing code somewhat along these lines:
- an initial tree traversal (e.g., "createInitialBoundSet()") creates a
  representation using inference variables, constraint formulas and type bounds
- actual inference will work on only these variables, formulas and bounds
  in order to compute valid instantiations for the inference variables.
- assuming that inference variables hold back-pointers into the AST,
  whenever an inference variable is resolved this result can be pushed into
  the AST.
- when inference fails, all unresolved inference variables should push a
  problem binding into their AST nodes.
This enables us to attach type bindings in arbitrary order, no further
tree traversal needed.

By performing all the interim work on inference variables, not the AST,
this part doesn't seem to require any un-resolving, as discussed above.
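In code terms I picture the back-pointer idea roughly as follows (a hypothetical sketch only; class and method names are placeholders, not actual JDT API):

  import java.util.List;
  import java.util.Map;

  class InferenceSketch {
      static class InferenceVariable {
          final Object astNode;   // back-pointer to the AST location that needs a type
          Object instantiation;   // the type found by inference, or null if unresolved
          InferenceVariable(Object astNode) { this.astNode = astNode; }
      }

      // once inference has run, results can be pushed into the AST in arbitrary order
      static void pushResults(List<InferenceVariable> variables, Map<Object, Object> typeOfNode) {
          for (InferenceVariable v : variables) {
              typeOfNode.put(v.astNode,
                  v.instantiation != null ? v.instantiation : "problem binding");
          }
      }
  }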

A tricky part is in the switch between strategies (A) and (B), since the
detection of a poly expression needs some resolved type information,
which we don't have before we start resolving using either strategy (A)
or (B) ...

Let's assume we start in strategy (A) until we find something that from a
syntactic pov could be a poly expression.
Here we could throw an exception and restart at the enclosing syntactic
context using strategy (B) (alternatively, we could already collect that
information as early as during parsing and let it bubble out somehow).
- Now, while initializing type inference we'd find some already resolved
  AST nodes.
  -> need to check if any of this could be wrong, but for those parts that
     are not poly we might be allowed to keep the resolved type information.
  -> if so, simply convert resolved types into (resolved) inference
     variables, to participate in the inference game.
- At the end of inference we might have to check if inference was actually
  applicable, by re-checking polyness.
  -> need to check if using inference where it wasn't called for could
     cause any harm
     -> inference is soundness preserving, thus any solution found by
        inference is sound
     -> inference is not completeness preserving: not having found a solution
        by inference doesn't mean there isn't one.
     Is it possible that for an actually-not-poly expression (one that merely
     looked like a poly expression in syntactic terms) either (A) or (B) finds
     a solution which the other strategy doesn't?


This is drawn mostly from my current understanding of just section 18.
I don't have a clear picture whether any of this could help for overload
resolution using the concept of "potentially compatible".

As soon as I have a more complete specification I will continue work in
bug 400982, which will allow you to investigate whether overload resolution
can piggy-back on that infrastructure. I hope I soon have that material.
Comment 20 Srikanth Sankaran CLA 2013-02-25 19:10:47 EST
(In reply to comment #19)

> Let's assume we start in strategy (A) until we find s.t. that from a
> syntactic pov could be a poly expression.
> Here we could throw an exception and restart at the enclosing syntactic
> context using strategy (B) (alternatively, we could already collect that
> information as early as during parsing and let it bubble out somehow).

This parenthesized piece is already in place. Only class instance creation 
expressions, lambdas and reference expressions can be detected to be poly 
expressions a priori. For method invocations, we can categorize them as
polyexpressions only after inference has proceeded to a near completion 
stage and we still have unresolved type variables and need the expected
type. The very fact of the expected type being called into question and
being used to successfully complete inference is what makes a method 
invocation a poly expression.  

> - At the end of inference we might have to check if inference was actually
>   applicable, by re-checking polyness.
>   -> need to check if using inference where it wasn't called for could
>      cause any harm

In the present JLS7 model, I believe the trees would not be touched if
inference did not succeed.
Comment 21 Stephan Herrmann CLA 2013-02-26 04:33:14 EST
(In reply to comment #20)
> (In reply to comment #19)
> 
> > Let's assume we start in strategy (A) until we find s.t. that from a
> > syntactic pov could be a poly expression.
> > Here we could throw an exception and restart at the enclosing syntactic
> > context using strategy (B) (alternatively, we could already collect that
> > information as early as during parsing and let it bubble out somehow).
> 
> This parenthesized piece is already in place. Only class instance creation 
> expressions, lambdas and reference expressions can be detected to be poly 
> expressions apriori.

We both have implemented some detection of *isPolyExpression*.
For part G we might need to know, whether an expression, a statement,
*contains* a poly expression somewhere inside. That's why I mentioned
possible "bubbling out" of this information.

> For method invocation, we can categorize them as 
> polyexpressions only after inference has proceeded to a near completion ...

That's why I made the distinction "from a syntactic pov", so we might
defer the final decision on polyness till later.

> >   -> need to check if using inference where it wasn't called for could
> >      cause any harm
> 
> In the present JLS7 model, I believe the trees would not be touched if
> inference did not succeed.

My idea was to actually use the information from inference, although
after the fact we know that no inference would have been necessary.
Sure, if that would create wrong results we can still choose to simply
discard the inference result and retry with old-style resolving.
Comment 22 Srikanth Sankaran CLA 2013-02-26 04:41:37 EST
(In reply to comment #21)

> We both have implemented some detection of *isPolyExpression*.
> For part G we might need to know, whether an expression, a statement,
> *contains* a poly expression somewhere inside. That's why I mentioned
> possible "bubbling out" of this information.

Perhaps an example would help me see your point better, to see why the
present scheme will fall short in some situations and/or how it can be 
enhanced to meet the slightly different needs.
Comment 23 Srikanth Sankaran CLA 2013-02-26 05:20:29 EST
Not having understood Part G properly yet, perhaps I am not contributing to
a meaningful dialog from my end: But thinking about it from a part F pov,
a lambda or a method/constructor reference cannot steer the inference
process towards the solution much. They cannot constrain the type
variables in any form (well, they can constrain them to be interfaces, and
functional interfaces at that, but I doubt that constraint is actually
to be used). Because these are not typed until after overload
resolution is over, they can at best take part in a post-inference
compatibility check. If so, I would think LambdaExpression.isCompatibleWith
and the soon-to-come LambdaExpression.isMoreSpecificThan(type1, type2)
should be good enough.

Part F also has this: "if an argument is a lambda expression that cannot yet be typed due to unfinished inference (thus, the method is considered provisionally applicable), we conservatively make no attempt to compare the two methods; typically, an ambiguity error would be the result."

I'll hold on these "core dumps" until after I have read up on G.
Comment 24 Stephan Herrmann CLA 2013-03-05 20:01:24 EST
Given the new version 0.6.2 of the spec I finally find myself in a position where I can start serious work in this bug.

Indeed the missing parts as called out in comment 3 have been filled in, to the degree that the basic framework of the algorithm can now be implemented, which is what I just did.

As of now I only sketched a very narrow integration into the existing code, not worrying about particular call chains and retries and whatnot, focusing just on the inference engine.

The good news: this engine is already able to give some correct answers, as witnessed by an initial experiment:
- Invoke the new inference only from PGMB.computeCompatibleMethod(), and if successful use its result instead of calling the old inference.
- Run GenericTypeTest at 1.8
Result: in 410 situations the new inference successfully kicked in, 118 of which caused the corresponding test to fail.
Should I call this 70% complete? Probably not :)

The next road-block is in sect. 18.2.3 which defines how subtyping constraints are reduced. For parameterized types it instructs you 
   "let C<B1, ..., Bn> be the parameterization of C for S, as defined below; ". 
If you look below, you see: 
   "To do: define the parameterization of a class C for a type T;...".

This could well be the reason for most of the test failures mentioned above as I had to make a lot of guesses in this area.
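For reference, a minimal example of my own of the situation where that missing definition is needed:

  import java.util.ArrayList;
  import java.util.List;

  class Parameterization {
      static <T> T pick(List<T> list) { return list.get(0); }

      void demo() {
          // Reducing the subtyping constraint  ArrayList<String> <: List<alpha>
          // needs the "parameterization of List for ArrayList<String>", i.e.
          // List<String>, from which alpha = String follows.
          String s = pick(new ArrayList<String>());
      }
  }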

I've pushed my current draft to a feature branch http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference 
Please expect lots of TODOs inside.
Comment 25 Srikanth Sankaran CLA 2013-03-06 13:04:14 EST
I offer to review this once a near final form is available. Likewise
I'll request review from you/Jesper on the parts of 335 I worked on once 
they are near completion. This review can be scheduled at convenience 
without any time pressure.
Comment 26 Stephan Herrmann CLA 2013-03-07 08:50:12 EST
Status update based on a code instrumentation experiment:

in version 0.6.2 JLS 18.2.3 says:
"If T is an array type, T'[], then let S'[] be the most specific array supertype of S, as defined below; ...
 To do: ... define the most specific array supertype of a type T."

I instrumented the code to report as error all test cases that trigger this scenario, resulting in the following outcome for GenericTypeTest:
- 56 errors
- 92 failures
=> more than 1/3 of all non-passing tests in GenericTypeTest are due to this single incompleteness.
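A minimal example (again my own, not one of the failing tests) of a call that exercises this reduction step:

  class ArraySupertype {
      static <T> T first(T[] elements) { return elements[0]; }

      void demo() {
          // Reducing the constraint  Integer[] <: alpha[]  requires the
          // "most specific array supertype" of Integer[] (here simply Integer[]),
          // from which  Integer <: alpha  follows and alpha resolves to Integer.
          Integer i = first(new Integer[] { 1, 2, 3 });
      }
  }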
Comment 27 Stephan Herrmann CLA 2013-03-07 11:45:31 EST
I'm now confident that the newly implemented infrastructure is capable of performing real type inference :)

(In reply to comment #25)
> I offer to review this once a near final form is available. 

I'd say work in the sub-task bug 400982 is ready for an initial review any time now, see bug 400982 comment 2.
Up to this point my focus was on providing the general infrastructure.
At this point I'm shifting gears towards addressing individual issues for specific code/type situations.

Clearly, it is impossible to resolve this bug using spec version 0.6.2, but a good milestone has been reached already and a good deal of detail work is now enabled.
Comment 28 Stephan Herrmann CLA 2013-10-13 11:51:58 EDT
After closing the foundational bug 400982, a few words on the status of the implementation:

Several locations would throw UnsupportedOperationException if called; all of these are routed through InferenceContext18.missingImplementation() for easier reference. None of these are currently triggered when running all compiler.regression tests. This may be due to the fact that the implementation is currently integrated *only* into ParameterizedGenericMethodBinding.computeCompatibleMethod(..).

In the code that does not call missingImplementation(), a few locations are marked FIXME or TODO where more investigation is needed. Where neither of these marks can be found, I consider the code in question complete.

The current score wrt all compiler.regression tests at level 1.8:
0 Error / 134 Failures (+/- a couple of non-deterministic issues).

I know of the following causes for failure (but haven't looked at all failures):
- unspecified integration of unchecked conversions into 18.5.1 / 18.5.2.
- some var arg issues (regarding pass-through of an array)
- error messages leaking internal info etc. How to turn inference failure
  into a nice user-friendly message is a cute sub-project in its own right,
  see bug 404675

A message has been sent to the EG: http://mail.openjdk.java.net/pipermail/lambda-spec-experts/2013-October/000384.html seeking further clarification in various questions.

The current status is available in the feature branch sherrmann/NewTypeInference for the curious to inspect. For best use of my time I might actually wait for the next spec update, since - again - open spec issues seem to be responsible for the bulk of remaining problems. Debugging regressions and recognizing that I'm not entitled to fix them sounds like a waste of time - with one exception, I might already spend a few hours on preparing for better error reporting ...
Comment 29 Stephan Herrmann CLA 2013-11-14 06:19:10 EST
When revisiting the impact of raw types on inference, we should also look at the existing differences between javac and ecj reported in bug 402237.
Comment 30 Stephan Herrmann CLA 2013-11-21 12:23:38 EST
As I'm using GenericTypeTest as my main test suite, let me highlight my current status against these tests:

In http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=0a4d3bd078f11e6acd061d904f26e9a17caa1140
I adjusted several tests, where inference used to fail, but the new inference actually finds a valid solution.

Next I disabled all tests that are failing because the new inference no longer supports unchecked conversions. While I'm awaiting an answer from the EG on whether or not Java 8 will actually bring this change (not yet implemented in javac), I marked this bunch of tests with the prefix _UNCHECKED_.

Another request for clarification can be found here: http://mail.openjdk.java.net/pipermail/lambda-spec-experts/2013-November/000428.html
For reasons unknown to me I cannot see how the spec would produce the desired result. While waiting for clarification, tests affected by this issue are disabled with the prefix _EG_.

Disablement has been pushed to http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=ded8cb0e1c42745a274b70bc918e0d2a7ba33533

So, after disabling 43 out of 1993 tests, I'm down to 84 failures in this suite.

In comparison to comment 28 we have more failures now, mostly because invocation type inference has been enabled in the meantime.
Comment 31 Stephan Herrmann CLA 2013-11-21 15:37:32 EST
The previous filtering of tests missed several. Pushed an update to http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=3f8dd32bda29becea2fc0074fafbcf7f15c83bf6

Status from originally 1493 (sic) tests in GTT
- 46 are disabled _UNCHECKED_
- 39 are disabled _EG_

Remaining regressions: 45

During these tests
- inference succeeds 405 times
- inference fails 66 times

From the remaining failures (ca. 10% of all uses of inference?!) 
- a significant number is due to details of error reporting -> defer
- some show incompatibilities involving either captures or fresh type
  variables from the new inference.
  Some of these might be caused by the "_EG_" issue.
- some more can still be related to unchecked conversions.
Plain inference bugs I could not identify at this level of scrutiny.


Anyway, with fewer regressions than tests waiting for spec clarification I'll stop this game for now.
Comment 32 Stephan Herrmann CLA 2013-11-28 08:58:09 EST
(In reply to Stephan Herrmann from comment #31)
> Status from originally 1493 (sic) tests in GTT
> - 46 are disabled _UNCHECKED_
> - 39 are disabled _EG_
> 
> Remaining regressions: 45

I received clarification on the EG list via http://mail.openjdk.java.net/pipermail/lambda-spec-experts/2013-November/000429.html

I've adjusted my implementation accordingly and enabled tests previously marked _EG_, result:

GTT (1448 tests):
- 46 are disabled _UNCHECKED_
Remaining regressions: 39 failures

TestAll (compiler.regression) (7901 tests):
- disablement as above
Remaining regressions: 6 errors, 73 failures

Here's the corresponding commit: http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=2b2a09cc41484ec62e8a6f3a4fb7d1e79e26e9eb
Comment 33 Stephan Herrmann CLA 2013-11-28 12:22:32 EST
Test results were still oscillating, one reason being: on one code path inference failure would cause us to fall back to the old inference. After disabling this fallback I get:

GTT (1448 tests):
- 46 are disabled _UNCHECKED_
Remaining regressions: 78 failures

TestAll (compiler.regression) (7901 tests):
- disablement as above
Remaining regressions: 6 errors, 137 failures

Back to analysing causes of failures.
Comment 34 Stephan Herrmann CLA 2013-11-29 20:17:31 EST
Updates:
--------
- implemented two phase check (loose & vararg invocation, not yet: strict)
- started to implement javac bug re unchecked conversion in illegal situations
- re-enabled all tests previously marked as _UNCHECKED_
- make lub() computation capable of handling the null type

Test Status:
------------
GTT (1494 tests):
- all specific disablement reverted
Remaining regressions: 41 failures

Many of these failures actually relate to *how* exactly we report an error.


TestAll (compiler.regression) (7964 tests):
Remaining regressions: 5 errors, 67 failures (incl. the 41 from above)

The 5 errors simply signal that some code is missing for inferring lambdas.

We might be on the home stretch already for this chapter (viz. using the new engine to infer types of generic method calls).

-----------

Branch has been merged with head from BETA_JAVA8, which required adjustments regarding Expression.isPertinentToApplicability(). One part being: these methods need to cope with targetType==null.

Srikanth: do you think it's OK to return true in that case??


Current state has been pushed as http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=d6921eb70cb6869e11ea480cc7a189bb4aac3bbc
Comment 35 Srikanth Sankaran CLA 2013-11-30 04:13:02 EST
(In reply to Stephan Herrmann from comment #34)

> We might be on the home stretch already for this chapter (viz. using the new
> engine to infer types of generic method calls).

That is terrific progress and great news! Thanks, Stephan.

> Branch has been merged with head from BETA_JAVA8, which required adjustments
> regarding Expression.isPertinentToApplicability(). One part being: these
> methods need to cope with targetType==null.
> 
> Srikanth: do you think it's OK to return true in that case??

Yes, isCompatibleWith should answer false and isPertinent* should answer true.
BTW, the latter method is also missing handling of TVB from 15.12.2.2 i.e:

If m is a generic method and the method invocation does not provide explicit 
type arguments, an explicitly-typed lambda expression for which the 
corresponding target type (as derived from the signature of m) is a type 
parameter of m.
Comment 36 Stephan Herrmann CLA 2013-11-30 16:13:39 EST
In bug 422064 I'm tracking general issues re comparing ecj and javac behavior against GenericTypeTest. In this comment I'm listing those deviations that are introduced as of 1.8:

Obvious:
========
AbstractRegressionTest.JavacTestOptions.JavacHasABug.IntArrayAsTypeBound:
javac8 no longer complains at
  public class X<T extends int[]> { }
!!!

Need investigation:
===================

Newly rejected by javac8, many of which I suspect to be fresh javac bugs:
- test1167
- test1045
- test1273
- test1275
- test1278
- test1325
- test1406
- test1407
- test1429
- test1434
- test1381
- test1386
- test1387
- test1388
- test0921

Newly rejected by javac8, probably conforming to the spec:
- test277643
- test283306
Ecj in my branch would normally reject these, too, but I had to apply a hack (against the SPEC) in order to fix approx. 20 other deviations from javac, which now breaks these two!!!!!

Errors no longer found by javac8 (still found by ecj)
- test1421
- test1445
- test0803
Comment 37 Stephan Herrmann CLA 2013-11-30 16:15:16 EST
(In reply to Stephan Herrmann from comment #36)
> In bug 422064 I'm tracking general issues re comparing ecj and javac
> behavior against GenericTypeTest.

Oops, wrong number, should read: bug 422896.
Comment 38 Stephan Herrmann CLA 2013-11-30 20:32:54 EST
Updates since comment 34:

Improved handling of rawtypes:
- implement additions in 18.2.2 as of 0.7.0

Initial handling of nested inference to allow an outer target type to influence the type of an inner poly expression. Currently only for MessageSend. Two design decisions may be worth discussing:
- when resolving an inner invocation finds it applicable as varargs,
  store this flag so that re-checking in the outer context only needs 1 phase
- after outer inference I re-bind any inner poly expressions, currently:
  assign new #binding and #resolvedType with improved types to an inner
  MessageSend
During outer inference the pre-resolved PGMB of an inner is actually unwrapped to freshly start from unbound type variables.
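A plain-Java example (my own) of the kind of nesting this addresses:

  import java.util.Collections;
  import java.util.List;

  class NestedInference {
      static <T> List<T> singleton(T t) { return Collections.singletonList(t); }

      void demo() {
          // The argument is itself a poly expression: the outer target type
          // List<List<String>> must flow inward so that the inner emptyList()
          // call is inferred as List<String> rather than List<Object>.
          List<List<String>> lists = singleton(Collections.emptyList());
      }
  }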

Analysed deviations between javac & ecj (see comment 36)

Updated several tests to expect better inference.

Analysed and classified all remaining regressions in GTT:
- difference only in error message, some only in how inferred types are printed
  -> 28 tests
- missing warning or error (also javac fails to show some warnings here)
  -> 5 tests
- inappropriate errors
  -> 5 tests

Let me just fix the latter 5 + 5 tests (where it's clear what should be expected) before I move on to addressing lambdas and method references.

The corresponding commit is http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=b782c3ef6e216cdcd365dba6077a7cf049bcd30e
Comment 39 Srikanth Sankaran CLA 2013-11-30 22:12:42 EST
Created attachment 237885 [details]
Patch that may have some useful APIs

This is a core dump of some prototyping work I did for https://bugs.eclipse.org/bugs/show_bug.cgi?id=401783. It is
stale by 10 days' worth of changes.

You may want to browse through the patch to see if there are elements 
that could be leveraged and if so after review release them into 
BETA_JAVA8 and merge them into your branch.

Expression.isBoxingCompatibleWith 
PolyExpression Overrides of Expression.resolveTypeExpecting

etc may be useful. If these APIs could be useful and you have problems
selectively applying them, add a note in https://bugs.eclipse.org/bugs/show_bug.cgi?id=419315 and I'll work on it.
Comment 40 Srikanth Sankaran CLA 2013-11-30 22:17:07 EST
(In reply to Stephan Herrmann from comment #38)

> MessageSend. Two design decisions may be worth discussing:
> - when resolving an inner invocation finds it applicable as varargs,
>   store this flag so that re-checking in the outer context only needs 1 phase

See org.eclipse.jdt.internal.compiler.ast.Expression.tagAsEllipsisArgument()
Polyexpressions are already notified of this, but TypeBinding.isCompatibleWith,
used by overload resolution code in Scope.findMethod, returns a boolean.
Perhaps the overload resolution "engine" could be changed to use another
version of isCompatibleWith that would return a level ala Scope.parameterCompatibilityLevel
Comment 41 Srikanth Sankaran CLA 2013-12-01 09:20:46 EST
(In reply to Srikanth Sankaran from comment #40)

> See org.eclipse.jdt.internal.compiler.ast.Expression.tagAsEllipsisArgument()
> Polyexpressions are already notified of this, but
> TypeBinding.isCompatibleWith

> used by overload resolution code in Scope.findMethod returns a boolean.
> Perhaps the overload resolution "engine" could be changed to use another
> version of isCompatibleWith that would return a level ala
> Scope.parameterCompatibilityLevel

Scope.parameterCompatibilityLevel already takes care of unboxing the array
type to see element compatibility as witnessed by the passing of 
org.eclipse.jdt.core.tests.compiler.regression.OverloadResolutionTest8.testVarargs()
Comment 42 Stephan Herrmann CLA 2013-12-01 12:30:08 EST
I've pushed another bunch of changes as http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=983d7c23e3ef9522672506642e4f4250a6d45571

At this point I declare a first milestone to be completed.
The status is best described along the remaining regressions in GTT along with pending questions on the EG list (use the all-caps term for searching in the test class):

SHOULD FAIL
23 tests *pass* to conform with javac while violating the spec (re unchecked conversions), see
http://mail.openjdk.java.net/pipermail/lambda-spec-experts/2013-December/000443.html

FAIL FIXME
The "fix" for the above breaks 4 other tests. Help needed from the EG list.

FAIL ERRMSG
several tests correctly detect an error but the error message should be reconsidered.

MISSING WARNINGS
3 tests should probably show more warnings re unchecked conversions.

EXTRA ERR
1 test reports an error which seems mandated by the spec, but none of the existing compilers report this. This is an issue of array-of-wildcard, should it be captured? See http://mail.openjdk.java.net/pipermail/lambda-spec-experts/2013-December/000444.html

EXTRA ERR
1 test fails due to missing integration into normal resolve().

IMHO, only the last one of the above issues calls for direct action from us, to be discussed in a subsequent comment. All others can be deferred until either we get answers in the EG list or until implementation reaches a polish phase.


This means I can now enter the next stage where I will include reference expressions and lambdas into the inference.
Comment 43 Stephan Herrmann CLA 2013-12-01 13:11:41 EST
(In reply to Stephan Herrmann from comment #42)
> EXTRA ERR
> 1 test fails due to missing integration into normal resolve().

Please see GenericTypeTest.test1031():
A regular invocation contains a poly expression as its argument.
When we resolve the inner we don't have an expected type (aka target type), hence inference produces sub-optimal results. We have no second round of inference, because the outer invocation does not require inference. At the end the sub-optimal inference result doesn't fit the outer call.
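In source terms, the shape of the problem is roughly this (my own example, not the actual test case):

  import java.util.Collections;
  import java.util.List;

  class OuterNotGeneric {
      static void outer(List<String> strings) { }

      void demo() {
          // The outer method is not generic, so it triggers no inference of its
          // own; still, the inner generic call needs List<String> as its target
          // type, otherwise it is inferred as List<Object> and no longer fits.
          outer(Collections.emptyList());
      }
  }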

Looking at usage of PolyTypeBinding I wonder if the following is intended:
- initial resolving of the inner MessageSend produces a PolyTypeBinding
- then we lookup the outer method
- resolvePolyExpressionArguments() will do the rest.
Is this how it should work?

If so, then the missing piece would be: MessageSend never resolves to a PolyTypeBinding, only these 4 do:
- AllocationExpression
- LambdaExpression
- ReferenceExpression
- ConditionalExpression
Should I add a stanza to the beginning of MessageSend.resolveType() that reads similarly to the start of AllocationExpression.resolveType() (the early exit using PolyTypeBinding)?


On a similar note I had a quick look at tagAsEllipsisArgument() but couldn't exactly figure out its meaning: is this for the last actual argument that is being passed into a varargs method? Why is that needed?

(In reply to Srikanth Sankaran from comment #41)
> Scope.parameterCompatibilityLevel already takes care of unboxing the array
> type to see element compatibility as witnessed by the passing of 
> org.eclipse.jdt.core.tests.compiler.regression.OverloadResolutionTest8.
> testVarargs()

In part G the parameterCompatibilityLevel() mechanism is of little help, because we can't decide varargs vs. direct passing by looking at one parameter in isolation. We need to run the full inference for strict/loose invocations. If it finds a solution, we're fine, if not it should (hopefully) fail fast and a subsequent inference will look for a varargs solution.
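For orientation, the three applicability phases in source terms (plain Java, my own illustration):

  class ApplicabilityPhases {
      static void m(int x)     { }   // phase 1: strict invocation
      static void m(Integer x) { }   // phase 2: loose invocation (boxing)
      static void m(int... xs) { }   // phase 3: variable-arity invocation

      void demo() {
          m(1);   // found in the strict phase; later phases are not considered
      }
  }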

I still hope, that for the strict/loose distinction we can be less pedantic.


So, for part G I need to know at each message send how I want to check for applicability (strict, loose, variable-arity). For the time being I extended the flag mentioned in comment 38 into an int, to record what kind of inference (if any) found the current method candidate. This is now used also to short-circuit the obsolete parameterCompatibilityLevel() checks if inference has worked instead. We should eventually prune / short-circuit even more of the old implementation, but I'm still trying to keep my changes to existing code at a minimum.

We may even keep both inference machines in separate universes, so we wouldn't have to squeeze any legacy hacks into the new machine. Let's see how this spells out in real life...
Comment 44 Stephan Herrmann CLA 2013-12-01 13:25:19 EST
Srikanth, if you have a minute I'd appreciate a comment regarding this little change: http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/diff/org.eclipse.jdt.core/compiler/org/eclipse/jdt/internal/compiler/lookup/TypeBinding.java?h=sherrmann/NewTypeInference&id=1832aee5d00f04e89e8ecf939e65051ef1f988cd

In GTT.test1060() I encountered a situation where isTypeArgumentContainedBy() needs to be answered regarding a WildcardBinding whose bound is an IntersectionCastTypeBinding. The change achieves what it is intended for and doesn't break existing regression tests, but working with "? super A & B" is not my daily business, so a second opinion would be great :)
Comment 45 Srikanth Sankaran CLA 2013-12-01 13:51:07 EST
(In reply to Stephan Herrmann from comment #43)
> (In reply to Stephan Herrmann from comment #42)
> > EXTRA ERR
> > 1 test fails due to missing integration into normal resolve().
> 
> Please see GenericTypeTest.test1031():
> A regular invocation contains a poly expression as its argument.
> When we resolve the inner we don't have an expected type (aka target type),
> hence inference produces sub-optimal results. We have no second round of
> inference, because the outer invocation does not require inference. At the
> end the sub-optimal inference result doesn't fit the outer call.
> 
> Looking at usage of PolyTypeBinding I wonder if the following is intended:
> - initial resolving of the inner MessageSend produces a PolyTypeBinding
> - then we lookup the outer method
> - resolvePolyExpressionArguments() will do the rest.
> Is this how it should work?

Yes - Please see the attached patch for a skeleton
of how MessageSend would/could play as a poly expression. Unlike other poly 
expressions, a generic method invocation cannot be deduced to be one till deep
into PGMB.computeCompatibleMethod. In the prototype I did, I am throwing
a PolyTypeException when we need the expected type, but don't have one yet.
This allows the MessageSend to declare itself to be a PolyExpression and
return a PolyTypeBinding to its caller.

In general, PolyExpression handling happens in 3 phases:

(1) Initial resolution: resolveType(BlockScope). If in INVOCATION_CONTEXT and
an expected type is missing, the poly expression immediately returns a
PolyTypeBinding as its type.

(2) The abstraction that originally invoked the resolution of a poly expression
is always a call, i.e. one of (MessageSend, AllocationExpression,
QualifiedAllocationExpression, ExplicitConstructorCall).

This abstraction proceeds into overload resolution "normally" albeit after
recording the fact some of the argument expressions are poly expressions.

Overload resolution (Scope.findMethod) itself is (mostly) unaware of 
PolyTypeBinding (other than the part about sIsMoreSpecific()) and simply
goes about its job by invoking isCompatibleWith. isCompatibleWith on
the PolyTypeBinding deflects to the PolyExpression.

So the poly expression compatibility is tried out one by one against the
candidates and if we are still left with > 1, we go into most specific method
search.

isCompatibleWith (and resolveTypeExpecting from the patch - not yet on
the branch) may be called many times and may have to resolve the poly expression
against the target type offered by the matching positional parameter type
of the candidate method. It is crucial that these multiple resolutions should
not dirty the original node. This is easily achieved by carefully segregating
"once only" code (e.g everything to the left of :: in a reference expression)
from "potentially once for every target type" code. Not so easily
achieved for LE, so we actually go back to the parser and ask it to materialize
a fresh copy: see LE.copy()

(3) Eventually resolution against the real target type or error handling.
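A small source-level example (mine) of what that per-candidate compatibility checking decides:

  import java.util.function.Supplier;

  class PolyOverload {
      static void run(Supplier<String> s) { }
      static void run(Runnable r)         { }

      void demo() {
          // The lambda argument is a poly expression: its compatibility is tried
          // against each candidate's parameter type. "hello" is not a statement
          // expression, so the body is not void-compatible and only the
          // Supplier<String> overload applies.
          run(() -> "hello");
      }
  }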

> If so, then the missing piece would be: MessageSend never resolves to a
> PolyTypeBinding, only these 4 do:

[...]

> Should I add a stanza to the beginning of MessageSend.resolveType() that
> reads similar to the start of AllocationExpression.resolveType() (the early
> exit using PolyTypeBinding)??

Yes, the patch has this capability.

> On a similar note I had a quick look at tagAsEllipsisArgument() but couldn't
> exactly figure out its meaning: is this for the last actual argument that is
> being passed into a varargs method? Why is that needed?

This is to flag that the last formal parameter is a vararg parameter, so the
poly expression can know.

See org.eclipse.jdt.internal.compiler.ast.FunctionalExpression.setExpectedType(TypeBinding)
Comment 46 Srikanth Sankaran CLA 2013-12-01 14:08:16 EST
(In reply to Stephan Herrmann from comment #43)

> In part G the parameterCompatibilityLevel() mechanism is of little help,
> because we can't decide varargs vs. direct passing by looking at one
> parameter in isolation. We need to run the full inference for strict/loose
> invocations. If it finds a solution, we're fine, if not it should
> (hopefully) fail fast and a subsequent inference will look for a varargs
> solution.

I think this is something like the distinction between NFA/DFA, Scope.findMethod
implements a "parallel" model for 15.12.2.2, 15.12.2.3 and 15.12.2.4

I haven't yet had a chance to look through your branch or that specific portion
of the spec to know if PGMB.computeCompatibleMethod could similarly parallelize.
Comment 47 Stephan Herrmann CLA 2013-12-03 06:25:07 EST
(In reply to Srikanth Sankaran from comment #46)
> (In reply to Stephan Herrmann from comment #43)
> 
> > In part G the parameterCompatibilityLevel() mechanism is of little help,
> > because we can't decide varargs vs. direct passing by looking at one
> > parameter in isolation. We need to run the full inference for strict/loose
> > invocations. If it finds a solution, we're fine, if not it should
> > (hopefully) fail fast and a subsequent inference will look for a varargs
> > solution.
> 
> I think this is something like the distinction between NFA/DFA,
> Scope.findMethod
> implements a "parallel" model for 15.12.2.2, 15.12.2.3 and 15.12.2.4
> 
> I haven't yet had a chance to look through your branch or that specific
> portion
> of the spec to know if PGMB.computeCompatibleMethod could similarly
> parallelize.

If you are recommending to run applicability checks for the strict/loose context in parallel with those for the variable-arity context: I doubt that this will be an improvement, if it is possible at all - each inference produces lots of effects on the bound set(s) on which it operates. The spec explicitly states what can be shared (the result of reduction) and what cannot (the result of resolve), and this part works fine.

IOW: I'm fine with doing these checks in sequence and don't see possibility for substantial simplification.

The more fundamental issue, which I'd like to co-operate on, is the integration of PolyTypeBinding for MessageSends with type inference, possibly coordinated by throwing an exception when we detect the poly nature of the message send.

Srikanth, please let me know when you have some cycles to work on this.
Comment 48 Srikanth Sankaran CLA 2013-12-03 07:04:53 EST
(In reply to Stephan Herrmann from comment #47)

> IOW: I'm fine with doing these checks in sequence and don't see possibility
> for substantial simplification.

OK, I'll stop harping on this point :)

> The more fundamental issue, which I'd like to co-operate on, is the
> integration of PolyTypeBinding for MessageSends with type inference,
> possibly coordinated by throwing an exception when we detect the poly nature
> of the message send.

Sure, take a look at the patch I posted already; it is work in progress and
could stand quite a bit of polish. I haven't studied the problem well enough
to know whether a better model than throwing an exception is possible.

BTW, pertinent to what you noted elsewhere about weak inference due to the
absence of a target type: the patch throws an exception only when the target
type is absolutely required. This may not be correct for Java 8.

> Srikanth, please let me know when you have some cycles to work on this.

Sure: Code assist (https://bugs.eclipse.org/bugs/show_bug.cgi?id=422468) is
killing me :) In just a couple of days I should be done.
Comment 49 Stephan Herrmann CLA 2013-12-03 07:50:54 EST
(In reply to Srikanth Sankaran from comment #48)
> (In reply to Stephan Herrmann from comment #47)
> > The more fundamental issue, which I'd like to co-operate on, is the
> > integration of PolyTypeBinding for MessageSends with type inference,
> > possibly coordinated by throwing an exception when we detect the poly nature
> > of the message send.
> 
> Sure, take a look at the patch I posted already, it is work in progress and
> can stand for quite a bit of polish. I haven't studied the problem fully well
> to know if better models than throwing exception is possible.

I think I need your help to identify and polish the parts needed for inference.

> > Srikanth, please let me know when you have some cycles to work on this.
> 
> Sure: Code assist (https://bugs.eclipse.org/bugs/show_bug.cgi?id=422468) is
> killing me :)

OH NO!!!

> In just a couple of days I should be done.

So we have an appointment for Thursday, it seems :)
Comment 50 Stephan Herrmann CLA 2013-12-03 18:31:24 EST
Time to take stock of changes since comment 42 (current corresponding commit is http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=5204e01613cd10256db86b8603542424c3ad02ee)

I made a full round of all compiler.regression.TestAll, fixing various issues on the way:

- implemented reduction of LambdaExpressions

- impl. CaptureBinding18.isCompatibleWith(..): compatible to upper bound

- implement null checking of '@NonNull T' if T is inferred to 'null'.
  (we don't want to create a binding for '@NonNull null', do we? :) )

- unify handling of MessageSend, AllocationExpression, ExplicitConstructorCall
  using new abstraction "Invocation"; use only this during inference.
  Treat ctor.declaringClass as the invocation's returnType.
  => better inference for diamonds

- in javadoc, super-ctor analysis was getting too smart with this inference;
  we need to use asRawMethod to analyse parameter correspondence.

- analysis for redundant type parameters must now fake a diamond to trigger
  inference, which depends on isPolyExpression().

- when resolving a ReferenceExpression in the middle of inference, we may
  end up calling InferenceVariable.isCompatibleWith(). In such situations
  it's best to consider inference variables as compatible with everything.
  (Inference will never directly ask this, and inference variables should
  never escape inference)

- Lots of adjustments to tests, mostly because inference finds more/better
  solutions.

In all of compiler.regression.TestAll I'm down to approx. 44 failures (some are intermittent, like depending on random order in error messages). NONE of these failures are blocking IMHO. Lots of diamond, lambda and reference expression situations are already correctly handled.

Hot on my agenda: hook up inference for conditional expressions, which, too, gets a lot smarter in 1.8.
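For illustration, a made-up example of the kind of conditional that only works with the 1.8 treatment (under 1.7 rules the ternary's type is computed bottom-up and the return fails):

  import java.util.Collections;
  import java.util.List;

  class Ternary {
      List<String> pick(boolean flag, List<String> other) {
          // as a poly expression the conditional passes the target type
          // List<String> into both branches, so emptyList() is inferred
          // as List<String> instead of List<Object>
          return flag ? Collections.emptyList() : other;
      }
  }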
Comment 51 Srikanth Sankaran CLA 2013-12-03 22:59:22 EST
(In reply to Stephan Herrmann from comment #47)

> The more fundamental issue, which I'd like to co-operate on, is the
> integration of PolyTypeBinding for MessageSends with type inference,
> possibly coordinated by throwing an exception when we detect the poly nature
> of the message send.

Note that we can't throw an exception to signal the poly nature of the message send
(e.g. as done in the prototype on the other bug) for two reasons:

    - This in itself does not allow for constraints to be lifted up. Even if
      this is solved:
    - There could be more than one potentially applicable generic method at
      a nested call site. If the first one throws an exception and somehow
      manages to bubble up its constraints to the aggregated equation solving site,
      we will be missing the constraints from the second and subsequent potentially
      applicable generic methods.

Perhaps it is a bit early to be thinking about it, but it looks to me as if we need new
abstractions: an UnresolvedParameterizedGenericMethodBinding and a PolyMethodBinding.
Perhaps also a UnionTypeBinding and a beefed-up IntersectionTypeBinding.
Comment 52 Stephan Herrmann CLA 2013-12-05 11:50:55 EST
Look Ma, no exceptions!
:)

I've pushed  http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=b2095c86e12d72811582952d0e8a54029849500d

In that commit you can see a solution that allows us to do the required lifting of inference constraints from an inner invocation to its outer invocation - all without throwing any inference-specific exceptions, but essentially by performing inference by the book:

If inference happens in a context that is not VANILLA_CONTEXT and if the expected type is still unknown (null), we only perform invocation applicability inference (see the change in ParameterizedGenericMethodBinding), *not* invocation type inference.

When an invocation resolves its arguments and finds that an argument is an invocation with inferenceKind() > 0 (i.e., inference is involved), we remember polyExpressionSeen (using the existing local); see the changes in AllocationExpression, MessageSend & ExplicitConstructorCall.

In all invocations I extracted the part doing the MethodBinding lookup into findConstructorBinding() and MethodBinding.findMethodBinding(), respectively. In these twin methods I simply do:

  lookup
  if (polyExpressionSeen)
    if (resolvePolyExpressionArguments(..))
      lookup

For this to work I modified resolvePolyExpressionArguments(): if the argument is an Invocation, perform invocation type inference (the part previously skipped). If successful, update the method binding in the invocation and also update the corresponding type in argumentTypes[] for downstream soundness checks. True is returned if an update has been performed, to signal that the method binding lookup should be repeated.

Some infrastructure additions/changes to support the above:

- Store the InferenceContext18 in all Invocations to resume inference later
- Expose expressionContext and inferenceContext from all Invocations

Also contained: two corrections in CaptureBinding18:
- Enable method lookup through a capture binding according to 15.12.1 (old)
- Consider all upper bounds during compatibility checks, treat as intersection

This commit fixes bug 423070 plus a regression in GTT.test1031()
Comment 53 Stephan Herrmann CLA 2013-12-05 19:26:10 EST
*** Bug 423070 has been marked as a duplicate of this bug. ***
Comment 54 Stephan Herrmann CLA 2013-12-07 07:35:55 EST
Notes about integrating ReferenceExpression into the inference:

The body of work for resolving ReferenceExpression has already been done, including some trickery with PolyTypeBinding etc. - nice work! What's needed for inference is just flipping a few switches:

- all functional expressions are always poly as per part D intro (0.7.0)

- reference expressions need to provide an InferenceContext18 to participate in inference

- we need to differentiate two kinds of asking expectedType(): normally it's exactly the type the outer context provided as the expected type, but when a FunctionalExpression mimics an InvocationSite for inference, we pretend an actual *invocation* of the SAM. Hence in this context the target type for method lookup is the *return type* of the expected SAM. (Usage by CompletionEngine should be unaffected by this - hence the split into two methods).

- expect the situation where during inference RE.resolveType finds a SAM referencing an inference variable in its signature. Need to short-circuit isCompatibleWith() calls, since we can't yet give an answer.

The above has been implemented as per http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=7bf7a1c5b4e8c6a986d5bd01c34d7901a6b364a6


I see potential for optimization by caching the results of RE.resolveType() per expected type. I didn't implement this yet, because it wasn't immediately clear which fields need caching; simply caching only the #resolvedType didn't cut it.


A little insight from debugging:
I was surprised that inference sometimes happens at different nesting levels, instead of lifting more constraints into the outer inference. The basic trigger for starting a nested inference is the first item in 18.2.1:
  "If T is a proper type, the constraint reduces to true if the expression is compatible in a loose invocation context with T"

For a ReferenceExpression, computing compatibility indeed requires a nested inference. Lifting only happens if T mentions any inference variables. The apparent reason behind this: if T is already known, lifting can no longer influence the inner inference, hence inference can indeed happen in isolation.
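A made-up source-level illustration of the two situations (names are invented, not from the test suite):

  import java.util.Collections;
  import java.util.List;

  class Nesting {
      static void takeStrings(List<String> l) { /* ... */ }
      static <T> T pass(T t) { return t; }

      void test() {
          // the target type List<String> is a proper type: the inner inference
          // for emptyList() can run in isolation (nested inference)
          takeStrings(Collections.emptyList());

          // the argument's target type mentions the inference variable T:
          // the inner constraints are lifted into the outer inference
          List<String> l = pass(Collections.emptyList());
      }
  }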
Comment 55 Stephan Herrmann CLA 2013-12-07 10:53:39 EST
For the existing test GenericsRegressionTest.testBug415734() the new inference actually found a solution, but I was alerted by the facts that
(a) javac8 no longer accepts this, and
(b) our solution would break intermittently.

Turned out to be a positive bug (spec bug) and half a negative bug (implementation bug) to yield just half a bug.

I've filed bug 423496 to follow up, once a new spec rule becomes available.
Comment 56 Stephan Herrmann CLA 2013-12-07 15:57:20 EST
I've filed two new bugs for unimplemented parts of the spec:

Bug 423504 - [1.8] Implement "18.5.3 Functional Interface Parameterization Inference"

Bug 423505 - [1.8] Implement "18.5.4 More Specific Method Inference"


I don't see any existing test cases affected by this gap. I'd highly welcome it if someone contributes some test cases for these areas :)
Comment 57 Stephan Herrmann CLA 2013-12-07 16:23:17 EST
I made a full round of cleanup with special focus on changes made in existing classes. These I consider to be ready for review any time now - at your convenience, Srikanth.
Comment 58 Stephan Herrmann CLA 2013-12-07 18:47:16 EST
After clarification on the EG list [1] I wrapped up the task around GenericTypeTest.test0939():

Inference must generally include type parameters of enclosing types during subtype reduction (18.2.3 bullet 5.2 fails to mention this). This is fixed, and the test was improved by invoking a method on the inferred type (this technique challenges the inference result without pushing an expected type into the inference, which would skew the outcome).
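A hypothetical example in the spirit of the improved test (not the actual test0939): the member call on the inferred type only type-checks if the enclosing type's argument survives the inference.

  class Outer<T> {
      class Inner {
          T get() { return null; }
      }

      static <U> U identity(U u) { return u; }

      void test() {
          // U must be inferred as Outer<String>.Inner, including the enclosing
          // type argument; otherwise get() would not yield String
          String s = identity(new Outer<String>().new Inner()).get();
      }
  }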

This fix caused many regressions in GenericsRegressionTest_1_7, mostly related to bug 345968. The symptom was that the above fix pulled "alien" type variables into the inference, which could not be resolved, because they do not participate in substitution with inference variables. This could be fixed inside Scope.getStaticFactory(..): be careful to not lose the concrete type argument from originalEnclosingType. In this situation, allocationType is the generic type and thus using allocationType.enclosingType() pulled in the non-inferrable type variable.

[1] http://mail.openjdk.java.net/pipermail/lambda-spec-experts/2013-December/000449.html
Comment 59 Srikanth Sankaran CLA 2013-12-08 18:40:02 EST
*** Bug 401783 has been marked as a duplicate of this bug. ***
Comment 60 Stephan Herrmann CLA 2013-12-10 19:54:49 EST
Here's some significant progress driven by the example from bug 419048:

Fixed an assumed spec bug, see http://mail.openjdk.java.net/pipermail/lambda-spec-experts/2013-December/000460.html
(ask me how I found that one, phew!)

Improved Scope.substitution to handle inference variables which represent arbitrary types, not just type variables. For this purpose I invented a nested class Scope.Substitutor and created an indirection from the existing static methods, in order to enable OOP, viz. overriding one variant of substitute with our own preamble.


What's more complex in terms of code changes: how we detect how much work resolveType should perform for poly expressions at different points in time, when to report or suppress errors, and how much to clean up after tentative resolves ...

To start organizing this business I invented two methods in Expression:
- resolveTentatively(BlockScope scope, TypeBinding targetType)
- checkAgainstFinalTargetType(TypeBinding targetType)
These roughly correspond to the two main phases of inference.

For phase 1 a few more methods are introduced in FunctionalExpression:
- findCompileTimeMethodTargeting(TypeBinding targetType, Scope scope)
- internalResolveTentatively(TypeBinding targetType, Scope scope)
- cleanUpAfterTentativeResolve()
It turned out that *a lot* of cleanup is necessary to avoid influencing subsequent work. Notably, LE.cleanUpAfterTentativeResolve() even has to remove any traces of lambda arguments which may have been resolved to the wrong type, including references to these arguments from the lambda body ...

In ReferenceExpression I stumbled over an exactMethodBinding that still needed to apply explicit type arguments - not sure if this is intended??


Another dimension relates to marking inference progress into poly expressions, to avoid spurious resolving after inference has already given a definite answer. The tiny new interface PolyExpression lets clients set and query this information. Currently the following classes participate in this protocol: FunctionalExpression, AllocationExpression, ExplicitConstructorCall, MessageSend.


Much of the above is combined into more smarts in ASTNode.resolvePolyExpressionArguments(). Here we coordinate type information from an outer lookup (inference?) with re-resolving inner expressions (arguments), and finally triggering another method lookup of the outer invocation with the new argument types. See the comments in that method.


A related method is InferenceContext18.rebindInnerPolies(), which propagates type information from an outer inference into poly expressions that have been included in the inference. Also this method has been significantly improved. 


With this in place we successfully infer types across multiple levels of poly invocations / lambdas.
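A made-up example of such multi-level inference (class and method names are invented for illustration):

  import java.util.function.Function;

  class MultiLevel {
      static <A, B> Function<A, B> wrap(Function<A, B> f) { return f; }
      static <T, R> R apply(T t, Function<T, R> f) { return f.apply(t); }

      void test() {
          // inference flows from the assignment target through apply(..) and
          // wrap(..) into the lambda: T = A = String, R = B = Integer
          Integer len = apply("hello", wrap(s -> s.length()));
      }
  }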

The commit corresponding to this status is http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=d72b33df7e926bdba104646a573cf60af359abd4
Comment 61 Srikanth Sankaran CLA 2013-12-10 20:12:43 EST
(In reply to Stephan Herrmann from comment #60)
> Here's some significant progress driven by the example from bug 419048:
> 
> What's more complex in terms of code changes: how we detect how much work
> resolveType should perform for poly expressions at different points in time,
> when to report/suppress error reports, how much to clean up after tentative
> resolves ...

Stephan, did you look at my earlier patch for generic methods in message sends?
That had a prototype for resolveTypeExpecting which I think probably serves the
same purpose as the tentative resolve you have in mind. That would have taken
care of cleanups because for lambdas it operates on a copy().

> For phase 1 a few more methods are introduced in FunctionalExpression:
> - findCompileTimeMethodTargeting(TypeBinding targetType, Scope scope)
> - internalResolveTentatively(TypeBinding targetType, Scope scope)
> - cleanUpAfterTentativeResolve()


Again, there was a ReferenceExpression.resolveTypeExpecting which handled
cleanups. 

The resolveTypeExpecting and isCompatibleWith were intended to be used for
multiple lookups, with no stale data being carried over between lookups.

> In ReferenceExpression I stumbled over an exactMethodBinding that still
> needed to apply explicit type arguments - not sure if this is intended??

A method reference is exact only if explicit type arguments are provided
for a generic method.
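A made-up illustration of that rule (types chosen only to make the example self-contained):

  import java.util.Collections;
  import java.util.List;
  import java.util.function.Supplier;

  class ExactRefs {
      Supplier<List<String>> inexact = Collections::emptyList;          // generic method, no type arguments: not exact
      Supplier<List<String>> exact   = Collections::<String>emptyList;  // explicit type arguments: exact
  }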
Comment 62 Stephan Herrmann CLA 2013-12-10 20:38:22 EST
Looking at the example referenced in bug 418807:

The collect(..) call is now accepted, but now we have trouble with the constructor reference:

----------
1. ERROR in /tmp/Word.java (at line 20)
        Stream<Word> ws = names.stream().map(Word::new);
                                         ^^^
The method map(Function<? super String,? extends R>) in the type Stream<String> is not applicable for the arguments (Word::new)
----------
2. ERROR in /tmp/Word.java (at line 20)
        Stream<Word> ws = names.stream().map(Word::new);
                                             ^^^^^^^^^
The constructed object of type Word is incompatible with the descriptor's return type: R
----------

I didn't yet pay much attention to constructor references, sounds like I should look into these, next.
Comment 63 Stephan Herrmann CLA 2013-12-10 20:52:25 EST
(In reply to Srikanth Sankaran from comment #61)
> (In reply to Stephan Herrmann from comment #60)
> > Here's some significant progress driven by the example from bug 419048:
> > 
> > What's more complex in terms of code changes: how we detect how much work
> > resolveType should perform for poly expressions at different points in time,
> > when to report/suppress error reports, how much to clean up after tentative
> > resolves ...
> 
> Stephan, did you look at my earlier patch for generic methods in message
> send ?
> That had a prototype for resolveTypeExpecting which I think probably serves
> the 
> same purpose as the tentative resolve you have in mind. That would have taken
> care of clean ups because for lambda's it operates on a copy().

I had a brief look at that, yes. Maybe that would have killed my bird, too, but I didn't understand that patch well enough to adopt it.
I may have missed one or two details, but basically my latest commit seems to achieve the same without copying.

Maybe this approach is too naive, but it helped me figure out the effect on inference and fix bugs there. I'm happy to discuss those integration parts as soon as you have some cycles to review portions of my code.
 
> > For phase 1 a few more methods are introduced in FunctionalExpression:
> > - findCompileTimeMethodTargeting(TypeBinding targetType, Scope scope)
> > - internalResolveTentatively(TypeBinding targetType, Scope scope)
> > - cleanUpAfterTentativeResolve()
> 
> 
> Again, there was a ReferenceExpression.resolveTypeExpecting which handled
> cleanups. 
> The resolveTypeExpecting and isCompatibleWith were intended to be used for
> multiple look ups with no stale data being carried over between lookups.

Yes, actually I started by copying from one of these methods - and added more cleanup as it proved necessary. So the business of switching error reporting policies is directly copied from there.

 
> > In ReferenceExpression I stumbled over an exactMethodBinding that still
> > needed to apply explicit type arguments - not sure if this is intended??
> 
> A method reference is exact only if explicit type arguments are provided
> for a generic method.

Good, so shouldn't these type arguments be applied before assigning exactMethodBinding? According to my observations an env.createParameterizedMethod(..) call was still necessary before using what I found in exactMethodBinding.
Comment 64 Srikanth Sankaran CLA 2013-12-10 23:04:31 EST
(In reply to Stephan Herrmann from comment #63)

> Maybe this approach is too naive, but it helped me figure out the effect on
> inference and fix bugs there. I'm happy to discuss those integration parts
> as soon as you have some cycles to review portions of my code.

I think you did the right thing by putting together a version
that meets your needs so you are not blocked. Yes, let us discuss this later
at the time of code review to ensure that we have minimal, coherent APIs.


(The way I had envisioned it (without having understood part G fully) was that
nobody outside of LE would use anything other than isCompatibleWith,
resolveTypeExpecting and sIsMoreSpecific - i.e. result expressions,
void/value compatibility checks etc. would happen implicitly without there
being separate APIs - I don't know that this is possible; certainly for thrown
exceptions we need something, because the threesome mentioned does not cope
with exceptions.)

> Good, so shouldn't these type arguments be applied before assigning
> exactMethodBinding? According to my observations an
> env.createParameterizedMethod(..) call was still necessary before using what
> I found in exactMethodBinding.

Yes, this is an oversight that needs to be fixed.
Comment 65 Stephan Herrmann CLA 2013-12-11 18:28:41 EST
(In reply to Srikanth Sankaran from comment #64)
> (The way I had envisioned (without having understood part G fully) was that
> nobody outside of LE would use anything other than isCompatibleWith,
> resolveTypeExpecting and sIsMoreSpecific - i.e result expressions,
> void/value compatibility checks etc would happen implicitly without there
> being separate APIs - don't know that this is possible, certainly for thrown
> exceptions we need something because the threesome mentioned does not copy
> with exceptions.)

Inference needs a few kind-of unexpected queries. I wondered how to best identify all calls from a given set of classes into another set of classes. Shouldn't JDT be the right tool to thus identify all calls "from inference" "into AST"? :)

OTOH, the number of such calls isn't large; we should be able to find and examine all these locations during code review. In fact I consider the new API introduced yesterday more important than its implementation.
Comment 66 Stephan Herrmann CLA 2013-12-12 13:59:04 EST
*** Bug 423839 has been marked as a duplicate of this bug. ***
Comment 67 Stephan Herrmann CLA 2013-12-12 16:25:05 EST
The next batch of changes was driven by bug 423839, which features a diamond inside a ternary inside a generic invocation (status corresponds to http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=c03bb7b6b1cd8a6d883d640a383ab0d92ad4b43b )
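A made-up example of the shape of code involved (not the exact test from bug 423839; names are invented):

  import java.util.ArrayList;
  import java.util.List;

  class TernaryDiamond {
      static <T> T pick(T a, T b) { return a; }

      List<String> test(boolean flag, List<String> other) {
          // the diamond sits in a conditional which is itself an argument of a
          // generic invocation; its type argument must come from the outer context
          return pick(flag ? new ArrayList<>() : other, other);
      }
  }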

To cope with ternary expressions I had to overhaul ASTNode.resolvePolyExpressionArguments() (a candidate for being pushed into the AST classes). This method now
- descends into ConditionalExpression
- returns a marker POLY_ERROR TypeBinding to signal that we should update 
  nothing because some inner poly has an error
- detects poly allocations although they no longer have a PolyTypeBinding
  (see below)


To let diamond allocations fully participate in inference (similar to generic methods) I made the following changes:
- AllocationExpression.resolveType(..) no longer returns early with a
  PolyTypeBinding but does a best effort at resolving a method binding as
  a starting point for inference.
- This required using isPolyExpression() in more places rather than
  'instanceof PolyTypeBinding'
- Type parameters will be inferred only later, using invocation type inference
- To avoid type check errors in this preliminarily resolved state I derive
  the candidate method from a raw type
- Rawness warnings are suppressed during this phase by skipping
  checkInvocationArguments()
- diamonds now answer false to isPertinentToApplicability
  (*assuming* that this is what the spec intends to say)


Inside the inference engine diamonds are considered:
- Recognize diamonds
- Include class level type variables in inference (not mentioned in spec?)
- When all is said and done, InferenceContext18.rebindInnerPolies() updates
  class level type arguments of inner diamonds with inference solutions.


We are now doing even more duplicate resolves. Since resolve isn't safe against duplicate invocations, I started implementing a general Expression.unresolve(), to be overridden appropriately - currently only in NameReference. It is called for message receivers and arguments at this point, and is a candidate for replacement by a copy mechanism, whichever works out better.

Finally, I made a start at improving error reporting: by connecting ProblemMethodBindings to InferenceContext18, and inner contexts to outer contexts, ProblemReporter.invalidMethod() can now see if an outer context exists and defer reporting to that context. No big deal yet, but it may well support the idea of eventually delegating much error reporting to the InferenceContext.

We have one more regression now: OverloadResolutionTest8.test032() which doesn't seem to perform full inference against both candidate methods.
Comment 68 Stephan Herrmann CLA 2013-12-12 17:50:56 EST
While fixing bug 414631 I had to make these changes:

Before introducing a lambda body or its result expressions into inference, we need to resolve them (sigh!), because inference needs to know the type. This will require more unresolving or working with the lambda's copy...

During reduction of an expression compatibility constraint, we need to check whether the expression is a poly expression. When the expression is the body of a lambda that's both value compatible and void compatible, we don't know the expression context (vanilla or assignment), but we need to assume 'assignment' to be able to infer a corresponding solution.
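A made-up example of such a lambda body: a method invocation whose result may be used or discarded (interface names are invented for illustration):

  import java.util.Map;

  class Compat {
      interface VoidOp  { void run(Map<String, Integer> m, String key); }
      interface ValueOp { Integer run(Map<String, Integer> m, String key); }

      // the same body is void compatible (result discarded) ...
      VoidOp  discard = (m, k) -> m.remove(k);
      // ... and value compatible (result used as Integer)
      ValueOp use     = (m, k) -> m.remove(k);
  }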
Comment 69 Stephan Herrmann CLA 2013-12-12 17:52:42 EST
*** Bug 414631 has been marked as a duplicate of this bug. ***
Comment 70 Srikanth Sankaran CLA 2013-12-12 19:51:59 EST
(In reply to Stephan Herrmann from comment #67)

> We are now doing even more duplicate resolves. Since resolve isn't safe
> against duplicate invocations, I started implementing a general
> Expression.unresolve(), to be overridden appropriately - currently only in
> NameReference. Called for message receivers and arguments at this point.
> Candidate to replacement by a copy mechanism, whichever works out better.

I went down that path earlier - it is full of land mines. I think, given that there
are just a handful of poly expressions, on a case-by-case basis we should look
at (a) whether their resolveType methods could be clearly organized into
"once-only" vs. "once per candidate target type" evaluations, and then look at
the latter path to see the state involved, to decide whether we should
save/restore, or (b) opt for a copy.

For example, for ReferenceExpressions everything falls so neatly into place that
we neither copy() them nor need an unresolve() capability. For lambdas
we go for copy().

Earlier comments in this bug and elsewhere have documented the problems with
approaches to resolution clean up.
Comment 71 Stephan Herrmann CLA 2013-12-14 10:44:36 EST
Work on bug 424038 further improves the interaction between type inference and LambdaExpression & AllocationExpression.

- I was wrong in providing AllocationExpression.isPertinentToApplicability(), these are *never* pertinent ...

- LambdaExpression.isPertinentToApplicability() was missing a check for the case of single expression bodies, fixed.

- Inference now always pretends an assignment context when resolving against a known target type.
This method (ConstraintExpressionFormula.ensureResolved(..)) may, however, be obsoleted by the following change:

- Before inference reduces a compatibility constraint against a lambda expression, we now obtain a copy using a new method LambdaExpression.getResolvedCopyForInferenceTargeting(TypeBinding). This method is a modified extract from LambdaExpression.isCompatibleWith(..). Here copying seems to be exactly the right thing to do, because inference will produce some bindings into this copy, which are not final results and should be discarded after inference. In contrast to isCompatibleWith the new method allows inference to perform several checks on the copy and produce new constraints against the copy's resolved result expressions. See ConstraintExpressionFormula.reduce(..) for usage.

Status corresponds to http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=sherrmann/NewTypeInference&id=24dd8ebf7370eee61f1fead599354a5fdf56a8e1
Comment 72 Stephan Herrmann CLA 2013-12-14 10:46:05 EST
*** Bug 424038 has been marked as a duplicate of this bug. ***
Comment 73 Stephan Herrmann CLA 2013-12-14 10:52:47 EST
Moving remaining child bugs to a new home in bug 424053.
Comment 74 Stephan Herrmann CLA 2013-12-14 11:02:05 EST
(In reply to Stephan Herrmann from comment #71)
> - I was wrong in providing
> AllocationExpression.isPertinentToApplicability(), these are *never*
> pertinent ...

Oops, should say: "*always* pertinent", as answered by Expression.isPertinentToApplicability()
Comment 75 Stephan Herrmann CLA 2013-12-14 15:24:32 EST
The first phase of implementing type inference according to part G of the JSR 335 spec has been completed. I pushed all changes into BETA_JAVA8 via http://git.eclipse.org/c/jdt/eclipse.jdt.core.git/commit/?h=BETA_JAVA8&id=6940f5238f5f044dc7dc7f5472f64a2c5617a365

Follow-up work, including various levels of reviewing, will be organized via bug 424053, into which I will soon extract all those questions from this bug that are still relevant.

This terminates the development in branch sherrmann/NewTypeInference, which is, however, left on the server to preserve the (fine-grained) history.