Bug 115935 - Proposal for a new *DT "Project Model"
Summary: Proposal for a new *DT "Project Model"
Status: RESOLVED FIXED
Alias: None
Product: CDT
Classification: Tools
Component: cdt-core
Version: 3.0
Hardware: All
OS: All
Importance: P3 enhancement with 2 votes
Target Milestone: 4.0 RC0
Assignee: Mikhail Sennikovsky CLA
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2005-11-10 19:01 EST by Leo Treggiari CLA
Modified: 2007-04-27 03:44 EDT
CC: 43 users

See Also:


Attachments
Proposal for a new Project Model (350.00 KB, application/octet-stream)
2005-11-10 19:08 EST, Leo Treggiari CLA
UI Proposal (179.00 KB, application/octet-stream)
2005-11-21 23:20 EST, Leo Treggiari CLA
Common Build System Considerations (122.84 KB, application/pdf)
2005-12-01 08:00 EST, Walter Brunauer CLA
Slides for Dec. 8 discussion (PowerPoint) (390.00 KB, application/octet-stream)
2005-12-07 13:33 EST, Leo Treggiari CLA
Slides for Dec 8 Discussion (PowerPoint) (404.50 KB, application/octet-stream)
2005-12-07 13:36 EST, Leo Treggiari CLA
Photran-based patch for superficial multilanguage support in model (20.46 KB, patch)
2006-03-10 02:36 EST, Jeffrey Overbey CLA
Move from C/C++ to generic "Compiled Language" (1.26 KB, text/plain)
2006-04-11 12:55 EDT, Craig E Rasmussen CLA
Initial draft for the CDT New Project Model design (404.50 KB, application/msword)
2006-07-21 10:32 EDT, Mikhail Sennikovsky CLA
CDT New Project Model UI design (702.00 KB, application/msword)
2006-09-15 17:36 EDT, Mikhail Sennikovsky CLA
Initial draft patch for the New Project Model functionality (619.83 KB, application/zip)
2007-01-23 14:21 EST, Mikhail Sennikovsky CLA
Comments for the core part of the New Project Model patch (51.00 KB, application/msword)
2007-01-23 14:23 EST, Mikhail Sennikovsky CLA
Comments for the UI part of the New Project Model patch (28.00 KB, application/msword)
2007-01-23 14:23 EST, Mikhail Sennikovsky CLA
New Project Model patch update (634.63 KB, application/zip)
2007-01-26 13:29 EST, Mikhail Sennikovsky CLA
New Project Model update v0.3 (839.77 KB, application/zip)
2007-02-06 16:37 EST, Mikhail Sennikovsky CLA
New Project Model update v0.4 (672.46 KB, application/zip)
2007-02-09 14:22 EST, Mikhail Sennikovsky CLA
New Project Model patch against the latest HEAD (883.53 KB, application/zip)
2007-02-13 13:51 EST, Mikhail Sennikovsky CLA
gif icon files (17.87 KB, application/octet-stream)
2007-02-13 16:22 EST, Mikhail Sennikovsky CLA
New Project Model update with Core backward compatibility support implemented (908.72 KB, application/zip)
2007-02-16 17:34 EST, Mikhail Sennikovsky CLA
gif icon files to the latest patch (17.87 KB, application/zip)
2007-02-16 17:37 EST, Mikhail Sennikovsky CLA

Description Leo Treggiari CLA 2005-11-10 19:01:43 EST
Attached is a proposal for a new "project model" for CDT and a *DT 
infrastructure that could be extracted from CDT.  It is based upon discussions 
held at the fall CDT Conference.  This would be a significant change to CDT, 
but I consider it an evolution of the existing model and implementation.  
Everyone's input is welcome.  If accepted, with or without modifications, an 
implementation plan would have to be developed.
Comment 1 Leo Treggiari CLA 2005-11-10 19:08:30 EST
Created attachment 29742 [details]
Proposal for a new Project Model
Comment 2 Markus Schorn CLA 2005-11-11 08:14:38 EST
Project Natures:

I think there is a need to add both build-system-dependent project
property pages and language-dependent ones. An example of a language-dependent
property is the indexer configuration, or the decision on how to treat a header
file (C or C++) within a project.

A CDT project then will have the CDT nature, can have one (maybe also more)
build natures (managed, make, ...) and can have multiple language natures
(C/C++, Fortran, ...).

Then we can have wizards for various projects (e.g. managed make C/C++,
makefile Fortran, makefile multi-language). A wizard for a multi-language
project would let the user select from the available language natures.

At the same time we can make provision for adding/removing languages to the
project (i.e. adding the appropriate language natures).
Comment 3 Doug Schaefer CLA 2005-11-11 08:40:15 EST
Markus, that's how things work today. We have natures for language and build 
system.

The problem, as I see it, is that language is not a project concept. It is a 
file concept and the Platform has given us Content Types to deal with that. We 
need to make sure all of our functionality lines up with it, which I'm not 
sure we do today.

The only caveat I've run into is, as you mention, header files. However, our
gut feel has been that the indexer should be able to determine what language a
header file is in, given which files have included it. I'll make sure the new
PDOM indexer can do this.
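That gut feel can be pictured with a minimal sketch. All names here are illustrative, not actual CDT or PDOM API; the real indexer would work on translation units, not strings:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch (not actual CDT API): infer a header's language
// from the languages of the source files that include it.
public class HeaderLanguageGuess {
    // Map from header path to the set of source files that #include it.
    private final Map<String, Set<String>> includers = new HashMap<>();

    // Recorded while the indexer parses source files.
    public void recordInclude(String header, String source) {
        includers.computeIfAbsent(header, k -> new HashSet<>()).add(source);
    }

    // A header included by any C++ source is treated as C++; otherwise
    // it falls back to C.
    public String guessLanguage(String header) {
        for (String source : includers.getOrDefault(header, Set.of())) {
            if (source.endsWith(".cpp") || source.endsWith(".cc")) {
                return "c++";
            }
        }
        return "c";
    }
}
```

A real implementation would also have to handle headers included from both C and C++ sources, which is exactly the ambiguity discussed above.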

Looks like a very interesting and detailed proposal, Leo. I'll have to give it 
a good read by the fire this weekend (man it's cold up here right now :).
Comment 4 Markus Schorn CLA 2005-11-14 03:35:28 EST
Sure enough, language shall be determined per file via the content types. The
reason why I do consider natures for languages is a UI question:

Do we agree that every project has the same project property pages? For
example, is it OK to present the C++ Indexer page within a Fortran project? The
more languages are built on top of CDT, the more of a problem this could become.
 
Comment 5 Leo Treggiari CLA 2005-11-14 08:25:05 EST
(In reply to comment #4)
> Sure enough language shall be determined per file via the content types. The
> reason why I do consider natures for languages is a UI question:
> Do we agree that every project has the same project property pages? For
> example, is it OK to present the C++ Indexer page within a Fortran project?
> The more languages are built on top of CDT the more of a problem this could
> become.

I agree.  I think the important question becomes "does the project contain a
C/C++ source file?"  One implementation of this could just check every time the
question is asked.  But that could be too time consuming to use for "UI
filtering".  Another implementation could use project natures, and keep them up
to date as source files are added to and removed from the project.  That
requires some complex logic, but it may be the way to go.

Comment 6 Doug Schaefer CLA 2005-11-14 08:26:18 EST
Good point, Markus. I suppose, in the ideal world where our indexer is
multi-language, this wouldn't be an issue. But yes, there will be some C/C++
tooling, and Fortran as well I guess, that will need to be enabled as
necessary. But you should be able to enable both C/C++ and Fortran on a given
project since we do need to support multi-language projects. Automating this
as Leo suggests would be interesting as well.
Comment 7 Leo Treggiari CLA 2005-11-14 10:23:21 EST
Looking through the Web Tools Project (WTP) release review slides, they talk 
about "flexible projects".  The text below is from a paper describing WTP.  Is 
anyone familiar with exactly what they have done and whether it would be a good 
model for *DT?

"The JDT project model is not hierarchical, and the "exploded archive" 
structure for J2EE projects—one module per project—is not very flexible. 
Flexible Layout eliminates project migration and enables WTP to coexist with 
existing directory structures. You can create EJB projects, Web service 
projects, and additional Web projects, and add these to the same enterprise 
application and/or other enterprise applications. Enterprise application 
projects "assemble" the associated EJBs, Web services, and Web projects into an 
EAR and deploy it as a single unit. "
Comment 8 Leo Treggiari CLA 2005-11-21 23:20:38 EST
Created attachment 30359 [details]
UI Proposal

I am attaching a user interface proposal that supports the proposed "project model".  Please provide feedback on both proposals.  I am trying to get a discussion going on some of the main issues discussed at the Fall Conference.  No discussion likely means nothing will happen...
Comment 9 Doug Schaefer CLA 2005-11-24 20:11:16 EST
This is very timely. I am taking a hard look at how to give the DOM a language independent interface that will allow the other CDT features such as the editor features (content assist in particular) and source navigation features (Search, Open Decl/Def, etc) to be language independent.

My thinking, as you've expressed as well, is to base parsing on Eclipse's content types. I'd have an extension point that would register language-specific information gatherers against content types. To pick the language, all I need is an ITranslationUnit, and I can figure out everything from there.  Once I'm in this model I have access to the rest of the Project model to get the necessary build configuration using the mechanisms you've proposed. This will also deal with the different types of TUs we have to handle: IFiles, external TUs such as external headers, and the working copy, since they all implement ITranslationUnit.

In fact, we already have a stub method on ITranslationUnit to get the IASTTranslationUnit from it. All I need to do is finish it up.
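The selection scheme described above boils down to a lookup from content-type id to language handler. A minimal sketch, with illustrative names (the registry and its methods are hypothetical, not actual CDT API; only the idea of keying on Eclipse content-type ids comes from the comments above):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a registry mapping Eclipse content-type ids to
// language handlers, populated from extension points in a real system.
public class LanguageRegistry {
    // Stand-in for the language interface discussed in this thread.
    public interface ILanguage {
        String getName();
    }

    private final Map<String, ILanguage> byContentType = new HashMap<>();

    // In a real implementation this would be driven by extension-point
    // contributions rather than direct calls.
    public void register(String contentTypeId, ILanguage language) {
        byContentType.put(contentTypeId, language);
    }

    // Pick the language for a translation unit given its content type;
    // returns null when no language is registered for that type.
    public ILanguage getLanguage(String contentTypeId) {
        return byContentType.get(contentTypeId);
    }
}
```

With such a registry, any code holding an ITranslationUnit could resolve its content type and from that its language, without any per-project language nature.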

So +1 from me.
Comment 10 Walter Brunauer CLA 2005-12-01 07:56:05 EST
Some general considerations regarding build systems with respect to the proposal:

I understand this proposal should cover arbitrary compiled-language builds, which of course is an enormous requirement. I wonder if all common/typical as well as enhanced/complex setups/scenarios are considered from a user's as well as an implementor's point of view:

What about multiple applications and archives being represented within the same project? Imagine a huge, complex source tree containing all sources of a complex software system ending up in multiple executables and libraries: as the Eclipse Resource API restricts hierarchically nested project setups, it might be impossible to reflect such systems without reorganizing the whole source tree (which is definitely not something users would accept easily). I understand that the proposal would only allow one build target per project (assuming a folder being defined as a subproject in fact is another project context?).

What about supporting multiple operating systems and development (host) platforms as well as target platforms? How would different tool chains provided on different development platforms be integrated? How would heterogeneous tool chains (spanning several languages, like an Assembler/C/C++ tool chain combined with Fortran and Ada tool chains) be represented on different host platforms, or in general?

What about tool chain independence, if e.g. the user wants to use a special cross compiler not based on GNU, and does not know anything about Eclipse, plugins, or even Java, and no integration is available yet? I understand this is somehow considered, but I would like to see a lot more details on this. It should be possible to easily "feed" the managed build system with the command lines representing the tool chain calls. If the user knows the tool chain by heart, he does not necessarily need a fancy user interface to change some compiler options in the first place.

Isn't the general issue that one wants to simply define what to build, rather than what to define as (a) project(s)? In other words, isn't a project only an artificial entity with respect to a build system? Isn't it more about being able to explicitly define the contents of any build output and how to get from source files over objects to archives or executables, maybe even across project boundaries, maybe reusing intermediate build outputs like static libraries being linked to multiple executables? Of course, even more sophisticated build processes might need to be covered, maybe multiple steps including source generation, compiling, archiving and linking, extracting, relinking, mapping, deployment, etc. Unlike Java, where typically only compilation and archiving are needed, it would be really nice to be able to set up such build processes in a more agnostic/generic way.

Some months ago, I wrote a white paper about common build system considerations, which I want to share as further input to the discussion.
Comment 11 Walter Brunauer CLA 2005-12-01 08:00:33 EST
Created attachment 30934 [details]
Common Build System Considerations

White paper about common build system considerations.
Comment 12 Doug Schaefer CLA 2005-12-01 10:47:37 EST
The managed build system was designed to be able to handle arbitrary tools that deal with any content types the user may need to build their project. Anything that is there now that doesn't help achieve that goal is only a temporary solution while we put it all together.

I think we have answers to a lot of your questions and I'm sure Leo is going through them right now. I think it may be more constructive, though, to raise specific bugs against the MBS where it doesn't meet your needs. It would be easier to come up with an action plan if we have specific work items that need to be done.
Comment 13 Leo Treggiari CLA 2005-12-01 12:37:17 EST
(In reply to comment #10)
> Some general considerations regarding build systems with respect to the
> proposal:
> I understand, this proposal should cover arbitrary compiled language builds,
> which of course is an enormous requirement. I wonder, if all common/typical as
> well as enhanced/complex setups/scenarios are considered from a users as well
> as implementors point of view:

The MBS is not meant to cover ALL possible build scenarios.  It's meant to provide an easy way to build projects for a user who does not want to get into learning make or other build utilities.  If the user DOES want to use make or other build utility, then the Standard Builder provides the capability to invoke any command.  I specifically do not want to sacrifice the usability of the common case in order to support arbitrarily complex cases.

Here is a summary of the "philosophy" of the proposed "project model":

1.  Eclipse provides the overall project and resource model - work with it.
2.  The tool-chain integrator can provide a lot of information about the tool-chain which relieves each user from having to provide the information.  Give them a way to provide that information once for all of *DT.
3.  The user has to provide as little as possible.

> What about multiple applications and archives being represented within the same
> project? Imagine a huge, complex source tree containing all sources of a
> complex software system ending up in multiple executables and libraries: as the
> Eclipse Resource API restricts hierarchical nested project setups, it might be
> impossible to reflect such systems without reorganizing the whole source tree
> (which is definitely not what users would accept easily). I understand that the
> proposal would only allow one build target per project (assuming a folder being
> defined as subproject in fact is another project context?).

The proposal allows for multiple "library" sub-projects in separate folders.  If you want to propose additional functionality (e.g., multiple executables), please provide a proposal with the impact on the project model and the UI.

> What about supporting multiple operation systems, development (host) platforms
> as well as target platforms? How would different tool chains provided on
> different development platforms be integrated? How would heterogenous tool
> chains (spanning several languages, like a Assembler/C/C++ tool chain combined
> with Fortran and ADA tool chains) be represented on different host platforms,
> or in general?

I believe that the MBS can already handle this.  The current CDT UI can't handle parts of it, but I believe the proposed UI does.

> What about tool chain independence, if e.g. the user wants to use a special
> cross compiler not based on GNU, and does not know anything about Eclipse,
> plugins, or even Java, and no integration is available yet? I understand this
> is somehow considered, but I would like to see a lot more details on this. It
> should be possible to easily "feed" the managed build system with the command
> lines representing the tool chain calls. If the user knows the tool chain by
> heart, he does not necessarily need a fancy user interface to change some
> compiler options in the first place.

My suggestion to the user:  write a batch file and invoke it from the standard builder.  MBS is not providing any value in this case.

> Isn't the general issue that one wants to simply define what to build, rather
> than what to define as (a) project(s)? In other words, isn't a project only an
> aritifical entity with respect to a build system? Isn't it more about being
> able to explicitely define the contents of any build output and how to get from
> source files over objects to archives or executables, maybe even across project
> boundaries, maybe reusing intermediate build outputs like static libraries
> being linked to multiple executables? Of course, even more sophisticated build
> processes might need to be covered, maybe multiple steps including source
> generation, compiling, archiving and linking, extracting, relinking, mapping,
> deployment, etc. In difference to Java, where typically only compilation and
> archivation is needed, it would be really nice to be able to setup such build
> processes in a more agnostic/generic way.
> Some months ago, I wrote a white paper about common build system
> considerations, which I want to share as further input to the discussion.

Same answer as above.

I'll take a look at your white paper.

Thanks,
Leo
Comment 14 Walter Brunauer CLA 2005-12-05 10:53:03 EST
(In reply to comment #13)

> Here is a summary of the "philosophy" of the proposed "project model":
> 
> 1.  Eclipse provides the overall project and resource model - work with it.
> 2.  The tool-chain integrator can provide a lot of information about the
> tool-chain which relieves each user from having to provide the information. 
> Give them a way to provide that information once for all of *DT.
> 3.  The user has to provide as little as possible.

Thank you for sharing the philosophy. This helps a lot to better understand your proposal.

> The MBS is not meant to cover ALL possible build scenarios.  It's meant to
> provide an easy way to build projects for a user who does not want to get into
> learning make or other build utilities.  If the user DOES want to use make or
> other build utility, then the Standard Builder provides the capability to
> invoke any command.  I specifically do not want to sacrifice the usability of
> the common case in order to support arbitrarily complex cases.

> The proposal allows for multiple "library" sub-projects in separate folders. 
> If you want to propose additional functionality (e.g., multiple executables),
> please provide a proposal with the impact on the project model and the UI.

Supporting building multiple applications within the same project seems neither extremely complex nor unusual with respect to usability, and of course the user should not need to know anything about how to set this up manually himself - the managed build system should provide this capability.

I will try to come up with something. Should I create a new Bugzilla entry for it?

> > What about tool chain independence, if e.g. the user wants to use a special
> > cross compiler not based on GNU, and does not know anything about Eclipse,
> > plugins, or even Java, and no integration is available yet? I understand
> > this is somehow considered, but I would like to see a lot more details on
> > this. It should be possible to easily "feed" the managed build system with
> > the command lines representing the tool chain calls. If the user knows the
> > tool chain by heart, he does not necessarily need a fancy user interface to
> > change some compiler options in the first place.
> 
> My suggestion to the user:  write a batch file and invoke it from the standard
> builder.  MBS is not providing any value in this case.

I was referring to point 3 under "Binary Integrations" of your proposal: "They are loadable as an XML file found in a *DT defined directory (where?). This would allow the contribution of binary integrations (tool-chains, tools, external libraries), without having to write and install a plug-in."

I am particularly interested in this and willing to work on it. I will provide a draft here of how I envision such a tool-chain provider contribution, how an abstracted data interface could look, and a reader utility providing objects of these data structures accessible to any build system implementation.

Just let me know if you are interested.
Comment 15 Leo Treggiari CLA 2005-12-05 17:00:38 EST
> Supporting building multiple applications within the same project seems
> neither to be extreme complex nor unusual with respect to usability, and of
> course the user should not need to know anything about how to set this up
> manually himself - the managed build system should provide this capability.
> I will try to come up with something. Should I create a new Bugzilla entry
> for it?

I think it would be best to continue the discussion here.  I've thought of a way for the "model" to support this.

The root of the model in a tool integrator provided tool integration is the ProjectType.  The ProjectType has BuildConfiguration children which represent the default configurations that the user can select when creating a project.

The root of the model in a project file (i.e., currently .cdtbuild) is the ManagedProject.  The ManagedProject has BuildConfiguration children which are the actual build configurations defined in the project.

Currently, the ManagedProject object contains a reference to the ProjectType object that was selected when creating the project.  If we removed that reference from the ManagedProject and put it into the BuildConfiguration instead, then different BuildConfigurations could build different ProjectTypes.  All properties are currently set on the BuildConfiguration rather than the ManagedProject, so using a different "artifact name", and different build settings, in different BuildConfigurations is already supported - there are currently a few exceptions, e.g. binary parsers.

That would support multiple build types in the model, leaving the UI to consider.  When adding a new build configuration to a project, we could allow the user to select default configurations from other ProjectTypes (probably as an "Advanced" option).  We would also need a good way to select the source files to be used to build a particular configuration.  The current model is "exclusion", i.e., all source files in the project are used unless you explicitly exclude them.  We will be adding the ability to exclude a folder, but I'm not sure that this is the most usable UI model.
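The restructuring described above, moving the ProjectType reference from ManagedProject down to each BuildConfiguration, can be sketched as a data model. Class and field names here are illustrative, not the real MBS API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed model change: each configuration
// carries its own ProjectType, so one project can build different
// project types (e.g. an executable and a library) in different
// configurations.
class ProjectType {
    final String id;
    ProjectType(String id) { this.id = id; }
}

class BuildConfiguration {
    final String name;
    final ProjectType projectType; // per-configuration, not per-project
    final String artifactName;     // already per-configuration in MBS
    BuildConfiguration(String name, ProjectType type, String artifact) {
        this.name = name;
        this.projectType = type;
        this.artifactName = artifact;
    }
}

class ManagedProject {
    // Note: no ProjectType reference here any more.
    final List<BuildConfiguration> configurations = new ArrayList<>();
}
```

The point of the sketch is only the shape of the references; the exceptions mentioned above (e.g. binary parsers) would still need to move to the configuration level.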

> > > What about tool chain independence, if e.g. the user wants to use a special
> > > cross compiler not based on GNU, and does not know anything about Eclipse,
> > > plugins, or even Java, and no integration is available yet? I understand
> > > this is somehow considered, but I would like to see a lot more details on
> > > this. It should be possible to easily "feed" the managed build system with
> > > the command lines representing the tool chain calls. If the user knows the
> > > tool chain by heart, he does not necessarily need a fancy user interface to
> > > change some compiler options in the first place.
> > 
> > My suggestion to the user:  write a batch file and invoke it from the standard
> > builder.  MBS is not providing any value in this case.
> I was referring to point 3 under "Binary Integrations" of your proposal: "They
> are loadable as an XML file found in a *DT defined directory (where?). This
> would allow the contribution of binary integrations (tool-chains, tools,
> external libraries), without having to write and install a plug-in."
> I am particularly interested in this and willing to work on it. I will provide
> a draft here, how I envision such a tool-chain provider contribution, how an
> abstracted data interface could look like and a reader utility providing
> objects of these data structures to be accessable for any build system
> implementation.
> Just let me know if you are interested.

Yes, I'm interested.  MBS currently has a simple interface for providing definitions dynamically, org.eclipse.cdt.managedbuilder.core.IManagedConfigElementProvider.  This has the problem discussed in bugzilla 91230, and automatically loading from a directory is not supported.
Comment 16 Doug Schaefer CLA 2005-12-07 08:45:35 EST
I have started up an interface in the core called ILanguage and will create one for the UI called ILanguageUI. The language selection is driven by the content type for the ITranslationUnit. Implementations of these interfaces are registered against content types in extension points.

I also plan on starting a couple of plugins for doing Ada, my pet favorite language, to help define what these interfaces need to contain to configure the CDT features to work with other languages. I'll use the GNU Ada tools for building and debugging. I'll also work with the Photran gang to see if this mechanism will work for them as well.

I'll talk about this more at the *DT meeting but my gut is telling me that these two interfaces should suffice and they won't be very big. But I need to go through the exercise to make sure.
Comment 17 Leo Treggiari CLA 2005-12-07 13:33:33 EST
Created attachment 31324 [details]
Slides for Dec. 8 discussion (PowerPoint)
Comment 18 Leo Treggiari CLA 2005-12-07 13:36:57 EST
Created attachment 31325 [details]
Slides for Dec 8 Discussion (PowerPoint)

One minor edit...
Comment 19 Leo Treggiari CLA 2005-12-12 17:23:54 EST
Below are the notes that I took at the conference call.  Please reply with any additional comments.  The proposal was accepted with the changes specified in the notes.  I and other Intel contributors will begin the process of further design and planning.  It is not yet known if any part of the proposal could be ready in time for the 3.1 release.  However, it is highly desirable that CDT get to the point where Photran can integrate without changing CDT by the 3.1 release.

o  Instead of "Logical projects" or "sub-projects", we will use the term "build targets".

o  We need to define a mechanism for plugging a new build system into CDT.

o  We need to allow a user to change the build system of an existing project.

o  We need to support the transfer of language specific information between parts of CDT (e.g. the builder and the parser).

o  We will continue to call the multi-language infrastructure "CDT" for now (rather than "*DT").

o  We will investigate using the .settings file for appropriate project information.

o  We will use the term "External SDKs" instead of "External Libraries".

o  Craig Rasmussen (LANL) is interested in helping to develop the "Deployment" model.

o  The consensus is that the New Project wizards and other views should not be language specific.  We could use the term "CDT" in these language independent UIs rather than "Make".

o  The project creation UI needs more thought and discussion.  For example, it must be possible to support other build systems.

o  There was a discussion regarding which "advanced" project settings need to be available at project creation (vs. project properties).  Doug pointed out that it must be possible to select the Indexer at project creation.

o  Only a single "CDT" project nature may be required to identify projects that use this model.  We should try to remove the need for language specific and build system specific natures.

o  We will use non-prefixed interface names and use the package id to qualify the name if necessary.

o  With regards to the UI changes, there will be a potential big impact on existing automated tests that are driven through the UI.
Comment 20 Mikhail Sennikovsky CLA 2005-12-15 10:04:39 EST
Doug,

I have a couple of questions about the CModel/PDOM functionality with respect to the new project model.

The main question I have is how the Code Model (CModel/PDOM) will correlate with the configuration concept that will be presented to the CDT core (the new project working model).

1. Will the code model (PDOM/CModel) information be per configuration? I think it should, since the Code Model info depends on the defined preprocessor symbols, include paths, tool-chains/tools used, etc. that could be different for different configurations. I assume also that different configurations might have different tool-chains or different external SDKs (libraries) associated, that provide some different pre-calculated PDOM information (for system includes, etc.). 

Assuming that the Code Model info could be different for different configurations:

2. How should the mechanism for the Code Model information calculation/loading/persistence (parsing/indexing/etc.) work?
  a. Will we have a separate PDOM database and in-memory Code Model representations for each configuration? Do you consider the possibility that some parts of the persisted information and in-memory Code Model could be the same for several configurations (e.g. parts of the pre-calculated PDOM supplied with the tool-chain definition or with external SDK (library, etc.))?
  b. We could also try to use one database and in memory Code Model representation for the several configurations in the project, where it is appropriate. (e.g. one and the same set of include paths/preprocessor symbols is defined for the several configurations, etc.) 
  c. When the project is loaded, should the Code Model info be loaded/calculated only for the default configuration (currently active configuration)?
  d. Should it be possible to access/query the code model information for the configuration that is not currently active? (I don’t think it is necessary.)
  e. Will the Code Model information be present in memory only for the currently active configuration or could it be loaded to memory for several/all configurations in a project at once?
  f. How is the Code Model updated when some code modifications are performed:
    i. Will Code Model be updated only for the currently active configuration? 
    ii. for each configuration in the project? 
    iii. only for those configurations that have the Code Model loaded in memory?
  g. How is the Code Model updated when some build settings are changed (e.g. include paths, preprocessor symbols, etc.):
    i. if the changes are made for the currently active configuration? (I think it should be updated)
    ii. if the changes are made for the configuration which is not currently active?
  h. In case the persisted PDOM information for some configurations will be not always in synch with the current code and settings (e.g. for the non-active configurations), we might need to define some flexible mechanism for checking the PDOM database validity when loading the Code Model information for some configuration, because some source code and/or include paths/preprocessor symbols settings for that configuration might be changed since the time when the PDOM information was calculated/persisted. We might also need to define some mechanism of what should be done when the currently active configuration is changed (when should we try to use the Code Model information calculated for some other configuration, or load the persisted PDOM information for that configuration, or recalculate the information, etc.) 
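One way to picture item (b) above, sharing one Code Model database among configurations whose relevant settings match, is to key the index cache on the settings themselves rather than on the configuration name. A hypothetical sketch, not actual CDT/PDOM API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.List;
import java.util.Objects;

// Hypothetical sketch: configurations whose parsing-relevant settings
// (include paths, preprocessor macros) are identical share one index.
public class IndexCache {
    // The settings that affect parsing results.
    public static final class ScannerSettings {
        final List<String> includePaths;
        final Map<String, String> macros;
        ScannerSettings(List<String> includePaths, Map<String, String> macros) {
            this.includePaths = List.copyOf(includePaths);
            this.macros = Map.copyOf(macros);
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof ScannerSettings)) return false;
            ScannerSettings s = (ScannerSettings) o;
            return includePaths.equals(s.includePaths) && macros.equals(s.macros);
        }
        @Override public int hashCode() {
            return Objects.hash(includePaths, macros);
        }
    }

    // In a real system the value would be a PDOM database handle; an
    // opaque Object stands in for it here.
    private final Map<ScannerSettings, Object> indexBySettings = new HashMap<>();

    // Configurations with equal settings receive the same index instance;
    // differing settings get (and pay for) their own index.
    public Object indexFor(ScannerSettings settings) {
        return indexBySettings.computeIfAbsent(settings, s -> new Object());
    }
}
```

This addresses only the sharing question; the staleness question in item (h) would additionally require validating a cached index against the current settings and source timestamps before reuse.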

Thanks,
Mikhail
Comment 21 Doug Schaefer CLA 2005-12-15 11:35:48 EST
Great questions Mikhail. Until now, we've always used the active configuration when building up the CModel and index. I agree it is desirable to have configuration specifics there as well. The reason this has worked up until now is that the occurrence of configuration specific declarations and references is pretty low in real code. But you're right, we should be able to deal with it to ensure accuracy.
Comment 22 Mikhail Sennikovsky CLA 2005-12-16 08:31:28 EST
(In reply to comment #21)
Hi Doug, 

Thanks for the reply! 

I think we must make sure that each of us, working on the different parts of the new core models, stays synchronized and understands the whole picture of the new model, so that we can anticipate and avoid potential conflicts and issues in the interoperation of the different model parts.

Let me point out two potential problems from the new Project Model perspective that we might need to discuss:

1. We need to avoid/reduce the potential usability problems with changing the active build configuration and the build configuration settings. What MBS currently does is that when the include/symbol settings are changed for the active configuration, a complete project re-indexing is performed. I think that is the right behavior and we should not change it. 
The other issue is that when the active configuration is changed, and the include/symbol settings differ for the new active configuration, a complete project re-indexing is also performed. 

>The reason this has worked up until now
> is that the occurrence of configuration specific declarations and references is
> pretty low in real code. 
IMHO it is quite typical for different configurations to have different include/symbol settings. In the case of the most frequently used Debug/Release pair, the debug/release configurations typically specify/use some specific macros to include/exclude additional debugging logic (tracing, checking values, etc.), or to use different function implementations for debug and release. A less frequent case (which I think is also not very rare) is using different configurations for compiling the code against different target platforms; in this case it is also common to use different includes and preprocessor macros. Etc.
One of my guesses as to why we did not see significant configuration-related issues previously is that we had the configuration concept only in the Managed Build and not in the standard build, while (as far as I understand) large projects with big amounts of code are typically imported into CDT using the standard make. Also, users often switch off full indexing after they see how long indexing takes at project creation/loading, and so never get a chance to test how indexing works with all the other CDT functionality. So, in bringing the configuration concept to the core and to the standard make, we must make sure not to introduce any additional significant usability issues, e.g. with switching the active configuration. For example, will it be acceptable for full re-indexing to take place each time the active configuration is changed?

So we might need to think about how to utilize the configuration concept most effectively to avoid/reduce the possible usability issues.

2. We need to make sure we make all the necessary significant public API breakage at once. It is currently planned to move the path-entry and other configuration-specific stuff to the configuration level. If we decide that the Code Model will be configuration-specific, we might need to move the Code Model root to the configuration level as well. I mean that the code model part of the current CModel might move to the configuration level (I have not yet considered how we should do it, or whether we should do it at all), as well as the PDOM root (should we move IPDOM ICProject.getIndex() from the project to the configuration level, e.g. IPDOM ICBuildConfiguration.getIndex() or something like that?)
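To make the API-breakage question concrete, here is a rough sketch of what moving the index root to the configuration level might look like. IPDOM, ICBuildConfiguration, and ICProject here are simplified stand-ins, not the real CDT interfaces, and getActiveConfiguration() is a made-up accessor:

```java
// Hypothetical shapes only, not actual CDT API.
interface IPDOM { }

interface ICBuildConfiguration {
    String getName();
    IPDOM getIndex();          // per-configuration index root
}

interface ICProject {
    ICBuildConfiguration getActiveConfiguration();
    // The old project-level lookup could then simply delegate
    // to the currently active configuration:
    default IPDOM getIndex() { return getActiveConfiguration().getIndex(); }
}

class ProjectModelDemo {
    // Returns the name of the configuration whose index was served.
    static String contextOfIndexLookup() {
        ICBuildConfiguration debug = new ICBuildConfiguration() {
            public String getName() { return "Debug"; }
            public IPDOM getIndex() { return new IPDOM() { }; }
        };
        ICProject project = () -> debug;   // Debug is the active configuration
        return project.getIndex() != null ? debug.getName() : null;
    }
}
```

Delegating the project-level getIndex() to the active configuration would keep existing callers compiling while making the configuration the real owner of the index.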

What do you think?

Thanks,
Mikhail
Comment 23 Pierre-Alexandre Masse CLA 2006-01-04 21:07:00 EST
Leo,

I finally read the proposal and it looks great. I have just a couple of minor comments:

- In the builder definition, would it be possible to separate makefileGenerator into two: one for the makefile generation, and a second for the build (make) invocation? In the case of a remote build, the project would be local and the makefile generation wouldn't change; we would just need to replace the build invocation. That would make remote builds a lot easier.

- In the quick presentation of the model you mention that you would like to expose the elements of this model to scripting languages for automation purposes. Any details on that? 

- Finally, for the build configuration you add an ICFolderConfiguration. What are the reasons not to use an IResourceConfiguration element for a folder as well? Also, would there be some hierarchy between the folders? For example, if I set specific settings for src/dir1, will I have the same settings for src/dir1/subdir1? Can I just change some of them? And why does the ICFolderConfiguration element need an explicit reference to a source root element? Wouldn't that be the first parent that is a source root? Maybe I just misunderstood this whole section... :)

Thanks,

Pierre-Alexandre
Comment 24 Leo Treggiari CLA 2006-01-06 12:35:13 EST
Hi Pierre-Alexandre,

Thanks for the comments.

(In reply to comment #23)
> - In the builder definition, would it be possible to separate makefileGenerator
> in 2. One for the makefile generation, and the second for the build(make)
> invocation? In case of remote build, the project would be local, the makefile
> generation shouldn't change, but we would just need to replace the build
> invocation. That would make remote builds a lot easier.

We should be able to do that.  The current interface does not control invocation, just makefile generation.  Invocation happens in GeneratedMakefileBuilder.invokeMake.  We'll have to examine that routine to see which part(s) would be useful to replace.
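One possible shape for that separation, purely as a sketch: two independently replaceable interfaces, so a remote-build integration swaps only the invoker and reuses the local generator. All names here (IMakefileGenerator, IBuildInvoker, etc.) are illustrative, not existing CDT interfaces:

```java
// Hypothetical interfaces, not the current GeneratedMakefileBuilder API.
interface IMakefileGenerator {
    String generateMakefile(String projectPath);   // returns the makefile path
}

interface IBuildInvoker {
    int invoke(String makefilePath);               // returns the exit status
}

class LocalMakeInvoker implements IBuildInvoker {
    public int invoke(String makefilePath) {
        // a real implementation would spawn `make -f <makefilePath>` locally
        return 0;
    }
}

class RemoteMakeInvoker implements IBuildInvoker {
    public int invoke(String makefilePath) {
        // a real implementation would copy the makefile to the remote host
        // and run make there; generation stays local and unchanged
        return 0;
    }
}

class SplitBuilder {
    // Generation and invocation composed, but independently replaceable.
    static int build(IMakefileGenerator gen, IBuildInvoker invoker, String projectPath) {
        return invoker.invoke(gen.generateMakefile(projectPath));
    }
}
```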

> - You mention quick presentation of the model that you would like to expose the
> elements of this model to scripting languages for the purpose of automating.
> Any details on that?

No details, and I suspect it would not be done in an initial implementation.  The "model"s seem like the right set of objects to expose via an object oriented scripting language, but I haven't thought at all about what language (Java?) and how it would be implemented.
 
> - Finally for the build configuration you add an ICFolderConfiguration. What
> are the reasons to not use an IResourceConfiguration element also for a folder.

I don't think that I considered that.  We'll have to think about it some more to determine whether there is sufficient commonality between the two - there certainly may be.

> Also would there be some hierarchy between the folders? For example if I set
> specific settings for src/dir1, will I have the same settings for
> src/dir1/subdir1? Can I just change some of them? 

Yes, there will be a hierarchy of folders and a sub-folder can override whatever set of attributes that it wants to - it won't need to redefine all attributes.
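A minimal sketch of that override behavior, assuming a simple key/value settings model (FolderSettings is a made-up class, not the planned ICFolderConfiguration):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a sub-folder redefines some attributes and
// inherits the rest from its parent folders.
class FolderSettings {
    private final FolderSettings parent;
    private final Map<String, String> overrides = new HashMap<>();

    FolderSettings(FolderSettings parent) { this.parent = parent; }

    void set(String key, String value) { overrides.put(key, value); }

    // Walk up the folder hierarchy until an override is found.
    String get(String key) {
        if (overrides.containsKey(key)) return overrides.get(key);
        return parent == null ? null : parent.get(key);
    }
}
```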

> And why does the
> ICFolderConfiguration element needs an explicit reference to source root
> element? Wouldn't that be the first parent that is a source root? Maybe I just
> misunderstood this whole sections... :)

This area of the design could change as it gets more detailed.  My thought was that the source-root object might contain more generic information and the ICFolderConfiguration would contain more build-specific information.  They would refer to the same actual file system folder.

Leo

Comment 25 Leo Treggiari CLA 2006-01-09 08:59:14 EST
There was some discussion in cdt-dev regarding adding a Debug object to the model.  I'm capturing some of that discussion here.

--------------------------------------------

Hi Mikhail, Pierre-Alexandre,

I could see adding the Debugger to the "model".  It doesn't have to be
part of the "tool-chain".  For example, we talked about external
libraries/SDKs being part of the model.  It seems to me that a
tool-chain integrator would normally want to provide/select the default
debugger for their tool-chain, so it would make sense for information
about a debugger to be provided in the same manner, and available to all
of CDT.  I would need some help in figuring out where to put the
debugger in the model, and defining the set of attributes and callbacks
that would fully define a debugger to CDT.

Regards,
Leo

-------------------------------------------

> ....
> I would need some help in figuring out where to put the
> debugger in the model, and defining the set of attributes and 
> callbacks
> that would fully define a debugger to CDT.
> ....

Probably a little premature for suggestions, but the actual "name" of
gdb for us would be dynamically calculated based on the current value of
one of the options in the project.  This actually applies for gcc, g++,
as, etc. too, which we currently handle with a custom makefile
generator, scanner info provider and collector.  We'd need to be able to
take a similar approach with debugger.  Just a nasty situation we should
consider. :)

  Jeremiah Lott
  TimeSys Corporation
Comment 26 Leo Treggiari CLA 2006-01-24 13:23:35 EST
FYI:  The new Eclipse Mobile Tools for Java (MTJ) project has a need for "build configurations".  It needs to support Java preprocessing and other build attributes that differ for different target devices.  I attended their creation review and they took an action item to talk to us when they design their build configuration support.
Comment 27 Jeffrey Overbey CLA 2006-03-10 02:36:20 EST
Created attachment 36042 [details]
Photran-based patch for superficial multilanguage support in model

I attached a patch which is basically equivalent to what I proposed at the CDT Developers' Summit last fall.  Some identifiers have changed--it extends Doug's ILanguage and LanguageManager now--but is otherwise the same.

Essentially, this allows us to supply our own model builder and have its elements show up in the Outline and Make Projects views.

Photran's corresponding ILanguage will just return null for all of the getTranslationUnit methods in ILanguage; so far that doesn't seem to be a problem.  Maybe later we'll have time to *actually* integrate those kinds of things...

Looking forward to discussing this and hopefully getting it integrated in the near future.
Comment 28 Craig E Rasmussen CLA 2006-04-11 12:55:00 EDT
Created attachment 38303 [details]
Move from C/C++ to generic "Compiled Language"

I've submitted a patch that takes baby steps toward a more language-neutral CDT.  Currently the hover text on new project, folder, and file reports "New C/C++ ...".  The proposed change uses "New Compiled Language ..." instead.

This change more accurately represents CDT when new languages are added to the new project, new folder, and new file menus.  I think the icons are fine as they are.

In discussions with Leo, he indicated that he was no longer pushing his earlier patch (36042).  This patch would replace his and is a much more modest step.
Comment 29 Craig E Rasmussen CLA 2006-04-20 18:11:59 EDT
Comment on attachment 38303 [details]
Move from C/C++ to generic "Compiled Language"

I've been able to add Fortran branding to the taskbar by adding a new action set and a little bit of setup work in the Fortran perspective.  This works fine (so far) and no changes are needed to CDT.
Comment 30 Mark Melvin CLA 2006-06-22 17:11:13 EDT
We are very interested in a "compiled languages" set of tooling that we can extend as well.  We evaluated the CDT a few years ago and decided to not use it as the basis for our tools.  We are currently evaluating it again, and I am still leaning towards not using it for a few fundamental reasons (however the decision is much harder this time!).  I think maybe this is a good place to summarize my thoughts so far and get feedback.

First of all, I assume this new "project model" is still moving forward and active?  I am curious as to what people's feelings are as to when this will show up in CDT (or the new *DT...).  Is this realistically a CDT 4.0 thing, or is it farther in the future?  I'm reminded of the first EclipseCON and the languages BoF where we talked about a lot of similar things.

So, currently here is the start of a list I see as the major roadblocks for extending and using the CDT for non-ISO/gcc-based-C stuff.  I realize some (or most) of this has come up before but I'll just write it all down as I think of it...

1) Everything says C/C++ on it.  I realize this is the CDT, but if you are opening your MBS system up to the world (and it looks a *lot* more open than it used to be!) for things like assembly, Fortran, well...any compiled language really...this is a pain.  I prefer the concept of a more neutral, "Compiled Languages Core" that the CDT would be a consumer of.  I think any proper *DT plans moving forward would be a feature containing plugins providing a core framework of functionality - parsers, indexers, managed build system, toolchain specification, build configurations, include resolution, dependency calculators, etc. - and the CDT would be **an installable feature on top of that which contained a set of tools for ISO/gcc-based C/C++ development**.

Another key point here is "ISO/gcc-based C/C++".  This is a crappy name, but I think you know what I mean.  There are a great many of us who all have special, embedded devices that have some form of C-support that contains unique customizations or is a partial implementation of standard C.  And 99% of us embedded wackos don't care about C++. ;O)  So, right away there is a fairly large barrier to entry in terms of extra cruft we just don't want or need showing up in the IDE.  In terms of a shipping product, it is very confusing to users when they want to develop in assembly code, and they have to create a C/C++ project.  I think if you are going to go through the effort of factoring out C/C++ as a language, you need to also make the distinction that not all implementations of C are created equal.  Ideally, I would like to use the generic, compiled language support and not include any specific C-implementation support (and definitely not C++) except my own in my product (let's call it "My-C").  If my users want to install the gcc-based C/C++ tools that CDT offers then they would be free to do so.  However, then we'll get into issues with file extensions, and what editor to use, etc.  But that is another, separate hurdle.

2) The project wizards are based on C/C++ and make.  This has been discussed and you are talking about creating a "compiled language" project wizard that is more intuitive for other languages.  I'm all for this.  The problem I have now (correct me if I am wrong) is that while you can contribute all sorts of nice toolchains and your own wizard pages, and even your own build file generators - you still need to write your own wizard anyway because you don't want to start a new "Managed Make C/C++ Project" for an assembly code project that uses Ant to build itself.

3) You are pretty hard-wired to make or some other *external* tool driver.  This is still true in the UI document attached to this bug.  What if you don't want to use make?  OK, well right now you can specify your own builder in place of it, and you can even specify your own build file generator - but why do we have to be so tied to the name of an "external tool" that must magically be on your PATH?  What if we wanted to write our own build tool in Java, or perhaps hook into something through JNI?  Heck, you could even use the built-in Ant facilities in Eclipse to build your stuff with Ant - which uses an internal AntRunner class rather than calling Ant from the command line.  I think you need more flexibility here.  I think you should be able to specify through extension points either a "spawned process" type of model like you have now, or an API-based model where you could provide something in Java (ideally through the traditional Eclipse incremental project builder infrastructure), or even a scripting language for that matter.  
I think some of you are way ahead of me here anyway because you have started on an internal builder.  I'll tell you - if it works everyone will use it.  When is the last time you ran javac?  ;o)  99% of the people using managed make are doing it because writing build files and running compilers sucks.  Just build my files using the settings I have specified!  ;o)  And as soon as you get into spawning an external process, you have a large disconnect in terms of IDE integration.  Not only is error detection and reporting a pain (parsing console output), there are platform independence issues you need to worry about, incremental builds are more difficult, etc.
Anyway, this is not new to anyone reading this and it is not an easy problem to solve.  Personally I am very curious about your internal builder (I haven't been able to get it to work yet...).  Hopefully you're keeping languages other than C/C++ in mind when you are writing this thing?

Anyway - I think that is it for now in terms of roadblocks.  I'll move on to suggestions for the uber-future now. ;o)

a) Is there any reason why we couldn't have a database of installed "tools" and allow the user to mix and match to create their own toolchain at runtime, and then somehow export/persist this to the "binary integrations" format, and use them as future defaults in a project wizard?  Allow me to explain.

Let's say you provide a standalone tool that processes an .exe into something else - a post-processing application if you will that has a few command line args, takes an input file and generates an output file.  You also have a standard compiler and linker.  Well, maybe a user would like to set up a project template for future use in the new project wizard that allowed him to compile -> link -> postprocess, without having to manually set up an external tool builder to do the post-processing.
Here is another use-case.  What if - instead of requiring that a project always produce an .exe a user could set up a project with a toolchain that only compiled to object code (no linking).  Then that user could set up many projects that created relocatable object files, and a final project that referenced these projects and linked all of the modules together into a final executable.  A little obscure maybe - but are there any other use-cases for this sort of thing?  Basically what I am envisioning is setting up a custom, ordered list of tools into your own custom toolchain - but at runtime rather than defining this through extension points.  You could then use it as the basis for a new project in the future.  This could become useful for hybrid projects in the future as multi-core development becomes the standard.  Basically you can make your own project template with a custom toolchain made by chaining a few existing toolchains together.  I guess you could always ultimately do this through extension points...

b) Any implementation you come up with should be callable in headless mode so that you can build your projects without the UI in *exactly* the same way you can from within the UI, using the same builder(s).

c) Similar to the point I made previously - why focus on all of your tools spawning external tools?  What if you wanted to write your compiler or linker, or pre/post-processor in Java?  The MBS right now is pretty much built around the model of spawning some external executable.  I think the choice of running an API-based tool that could be written in Java (or another language callable from Java) would be a very useful option as well.

d) When thinking about sub-projects within a project (library-like sub-folders), would it be easier to just support splitting this on project boundaries instead, and having referenced projects combined with Team Project Sets?  I mean, it is nice that you could be able to have a compiler/librarian create a .lib in a subfolder of your project and then link it into an executable as part of your compiler/linker toolchain - but is it so bad to just use two projects in this case?  If it is a CVS-fetching issue maybe some more support in the area of Team Project Sets would be enough and more beneficial to others as well?

e) We find it useful to be able to build more than one build configuration at once as part of a regular build.  I think you'd still need the concept of a single build configuration to be used for include resolution, content assist, etc. - but would others find it useful to rebuild all configurations when a resource changes?  We do that now to compile multiple versions of code that produce slightly different behaviour and our developers seem to use it a lot.
This implies you could have multiple "active" build configurations (where active means it will be built), with a single build configuration used for "context" purposes.
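To sketch that model (names are made up, not proposed API): several configurations can be marked active, meaning they get built on every change, while exactly one is the context configuration used for include resolution and content assist:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative sketch of "multiple active, single context" configurations.
class ConfigurationSet {
    private final Set<String> active = new LinkedHashSet<>();
    private String context;

    // Mark a configuration as active: it will be built on every change.
    void activate(String name) { active.add(name); }

    // The context configuration drives include resolution and content
    // assist; it is always among the configurations that get built.
    void setContext(String name) {
        activate(name);
        context = name;
    }

    Set<String> toBuild() { return active; }
    String contextConfig() { return context; }
}
```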


I guess the big question for us is what is your target date for the new "project model"?  It is definitely something we would be interested in and would help out with but currently we are looking at avoiding the CDT once again because it lacks exactly what this new "project model" is trying to achieve.  It is kind of a catch-22 for us.
Comment 31 Mikhail Sennikovsky CLA 2006-06-23 08:23:26 EDT
(In reply to comment #30)
Hi Mark,

Thanks for a good set of suggestions and comments.
Yes, we are planning to implement the New Project Model for 4.0, or at least its main parts, to enable more reliable multi-language support and increase the usability, configurability, and consistency of integration into CDT.

I'm currently summarizing all my thoughts regarding this in the design document that I'm going to post to this bugzilla soon.

> 1) Everything says C/C++ on it.  I realize this is the CDT, but if you are
> opening your MBS system up to the world (and it looks a *lot* more open than it
> used to be!) for things like assembly, Fortran, well...any compiled language
> really...this is a pain.  I prefer the concept of a more neutral, "Compiled
> Languages Core" that the CDT would be a consumer of.  I think any proper *DT
> plans moving forward would be a feature containing plugins providing a core
> framework of functionality - parsers, indexers, managed build system, toolchain
> specification, build configurations, include resolution, dependency
> calculators, etc. - and the CDT would be **an installable feature on top of
> that which contained a set of tools for ISO/gcc-based C/C++ development**.
CDT is not intended to support only gcc-based C/C++; it is intended to be a common framework for integrating different languages, build systems, etc., as well as providing the (default) support for gcc-based C/C++ (gcc-specific parsers, gcc tool definitions, etc.)
I generally agree that the CDT functionality might be split into separate features, including a *DT feature, a gcc C/C++ feature, etc., but it seems more reasonable for the *DT feature to be just a common framework that allows integrating different languages, build systems, etc., and not to include the MBS schema. I don't think we should make the MBS schema inseparable from the core, since different ISVs might want to integrate their custom Build Systems into CDT, and AFAIK there are some custom Build Systems integrated into CDT now. We do need, though, a common consistent interface for integrating a build system into CDT that would allow us to query from/pass to the build systems the set of common Build System settings, such as Includes/Macros/Libraries settings, Language Settings, Build Settings (Environment, Build output dirs, Current Builder dir, etc.), error/binary parser settings, etc. This might be a sub-set of the current MBS schema or something similar.

> Another key point here is "ISO/gcc-based C/C++".  This is a crappy name, but I
> think you know what I mean.  There are a great many of us who all have special,
> embedded devices that have some form of C-support that contains unique
> customizations or is a partial implementation of standard C.  And 99% of us
> embedded wackos don't care about C++. ;O)  So, right away there is a fairly
> large barrier to entry in terms of extra cruft we just don't want or need
> showing up in the IDE.  In terms of a shipping product, it is very confusing to
> users when they want to develop in assembly code, and they have to create a
> C/C++ project.  I think if you are going to go through the effort of factoring
> out C/C++ as a language, you need to also make the distinction that not all
> implementations of C are created equal.  Ideally, I would like to use the
> generic, compiled language support and not include any specific
> C-implementation support (and definitely not C++) except my own in my product
> (let's call it "My-C").  If my users want to install the gcc-based C/C++ tools
> that CDT offers then they would be free to do so.  However, then we'll get into
> issues with file extensions, and what editor to use, etc.  But that is another,
> separate hurdle.
We are actually working in this direction. There is the org.eclipse.cdt.core.language extension point that is intended to allow custom language integration.
There will be a mechanism for selecting which language variant is to be used for particular resource types; e.g. it will be possible to associate a language with your tool definition, so there should not be any file extension problems, since the language will be chosen automatically based upon the tool-chain/tools being used.
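To illustrate the idea, here is a made-up resolver that prefers the tool association and falls back to the file extension (LanguageResolver and its methods are not actual CDT API; "My-C" is borrowed from your example above):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: the language for a resource is chosen from the tool
// that processes it in the active tool-chain, with a file-extension fallback.
class LanguageResolver {
    private final Map<String, String> toolToLanguage = new HashMap<>();
    private final Map<String, String> extensionToLanguage = new HashMap<>();

    void associateTool(String toolId, String languageId) {
        toolToLanguage.put(toolId, languageId);
    }

    void associateExtension(String ext, String languageId) {
        extensionToLanguage.put(ext, languageId);
    }

    // toolId may be null when no tool in the tool-chain handles the file.
    String languageFor(String toolId, String fileName) {
        if (toolId != null && toolToLanguage.containsKey(toolId))
            return toolToLanguage.get(toolId);
        String ext = fileName.substring(fileName.lastIndexOf('.') + 1);
        return extensionToLanguage.get(ext);
    }
}
```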

> 2) The project wizards are based on C/C++ and make.  This has been discussed
> and you are talking about creating a "compiled language" project wizard that is
> more intuitve for other languages.  I'm all for this.  The problem I have now
> (correct me if I am wrong) is that while you can contribute all sorts of nice
> toolchains and your own wizard pages, and even your own build file generators -
> you still need to write your own wizard anyway because you don't want to start
> a new "Managed Make C/C++ Project" for an assembly code project that uses Ant
> to build itself.
We are going to introduce a common general framework that allows easier and more consistent integration into the project creation UI,
e.g. general wizard templates that can be sub-classed, custom wizard pages/tabs, etc.

> 3) You are pretty hard-wired to make or some other *external* tool driver. 
> This is still true in the UI document attached to this bug.  What if you don't
> want to use make?  OK, well right now you can specify your own builder in place
> of it, and you can even specify your own build file generator - but why do we
> have to be so tied to the name of an "external tool" that must magically be on
> your PATH?  What if we wanted to write our own build tool in Java, or perhaps
> hook into something through JNI?  Heck, you could even use the built-in Ant
> facilities in Eclipse to build your stuff with Ant - which uses an internal
> AntRunner class rather than calling Ant from the command line.  I think you
> need more flexibility here.  I think you should be able to specify through
> extension points either a "spawned process" type of model like you have now, or
> an API-based model where you could provide something in Java (ideally through
> the traditional Eclipse incremental project builder infrastructure), or even a
> scripting language for that matter.  
> I think some of you are way ahead of me here anyway because you have started on
> an internal builder.  I'll tell you - if it works everyone will use it.  When
> is the last time you ran javac?  ;o)  99% of the people using managed make are
> doing it because writing build files and running compilers sucks.  Just build
> my files using the settings I have specified!  ;o)  And as soon as you get into
> spawning an external process, you have a large disconnect in terms of IDE
> integration.  Not only is error detection and reporting a pain (parsing console
> output), there are platform independence issues you need to worry about,
> incremental builds are more difficult, etc.
> Anyway, this is not new to anyone reading this and it is not an easy problem to
> solve.  Personally I am very curious about your internal builder (I haven't
> been able to get it to work yet...).  Hopefully you're keeping languages other
> than C/C++ in mind when you are writing this thing?
Yes, the builder concept should be abstract enough not to be hard-wired to an external tool, so any kind of builder should be supported. E.g. it is planned that the Internal Builder will be defined as an MBS Builder object, and Internal Builder enabling/disabling will be implemented as Builder substitution functionality.
For now, the Internal Builder is available in “Experimental” mode and you can enable/disable it via the “Enable Internal Builder” check-box on the “Build Settings” tab of the “C/C++ Build” project property page.
In case it does not work for you, I would be glad if you could comment on this. There is currently bug#135241 opened to monitor and discuss the Internal Builder problems and requirements. Feel free to comment on that bugzilla or create a new one.

> Anyway - I think that is it for now in terms of roadblocks.  I'll move on to
> suggestions for the uber-future now. ;o)
> 
> a) Is there any reason why we couldn't have a database of installed "tools" and
> allow the user to mix and match to create their own toolchain at runtime, and
> then somehow export/persist this to this "binary integrations" format, and use
> them as future defaults in a project wizard?  Allow me to explain.
> 
> Let's say you provide a standalone tool that processes an .exe into something
> else - a post-processing application if you will that has a few command line
> args, takes an input file and generates an output file.  You also have a
> standard compiler and linker.  Well, maybe a user would like to set up a
> project template for future use in the new project wizard that allowed him to
> compile -> link -> postprocess, without having to manually set up an external
> tool builder to do the post-processing.
> Here is another use-case.  What if - instead of requiring that a project always
> produce an .exe a user could set up a project with a toolchain that only
> compiled to object code (no linking).  Then that user could set up many
> projects that created relocatable object files, and a final project that
> referenced these projects and linked all of the modules together into a final
> executable.  A little obscure maybe - but are there any other use-cases for
> this sort of thing?  Basically what I am envisioning is setting up a custom,
> ordered list of tools into your own custom toolchain - but at runtime rather
> than defining this through extension points.  You could then use it as the
> basis for a new project in the future.  This could become useful for hybrid
> projects in the future as multi-core development becomes the standard. 
> Basically you can make your own project template with a custom toolchain made
> by chaining a few existing toolchains together.  I guess you could always
> ultimately do this through extension points...
It is actually already possible to implement things like this programmatically in MBS, i.e. creating a new tool-chain, adding/removing tools, etc.
We are going to introduce a UI to allow users to do tool-chain modification (i.e. tool-chain substitution, adding/removing/substituting tools/builder, etc.). 
There are some problems to be solved on the mbs.core side as well. Currently, MBS tool definitions of one and the same tool used in different configuration/project types are treated as different tools; e.g. there is a gcc compiler tool defined in each Debug/Release configuration of each project type. All these MBS tools should be treated as one and the same “real” tool, and we should define logic for calculating the set of “real” tools based upon the set of MBS tool definitions.
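A sketch of that calculation, assuming each MBS tool definition records a super-class id (as in the MBS schema); the “real” tool is then the root of the super-class chain. RealToolCalculator and its methods are illustrative names only:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: collapse per-configuration tool definitions that
// share a common super-class into one logical "real" tool.
class RealToolCalculator {
    // Maps each tool-definition id to its super-class id (null for roots).
    private final Map<String, String> superClass = new HashMap<>();

    void define(String toolId, String superClassId) {
        superClass.put(toolId, superClassId);
    }

    // The "real" tool is the root of the super-class chain.
    String realToolOf(String toolId) {
        String current = toolId;
        while (superClass.get(current) != null)
            current = superClass.get(current);
        return current;
    }
}
```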

> b) Any implementation you come up with should be callable in headless mode so
> that you can build your projects without the UI in *exactly* the same way you
> can from within the UI, using the same builder(s).
You can currently build projects that use an external builder from the command line by invoking that builder and passing it the proper makefile.
As for the Internal Builder and other possible non-external-tool builders, I agree there should be a way to launch them from the command line.
The most general approach to command-line builds I can think of is to start eclipse with command-line options specifying which projects are to be built, have eclipse build those projects, and then shut down.
I don’t know whether or not eclipse currently provides such a capability.

> c) Similar to the point I made previously - why focus on all of your tools
> spawning external tools?  What if you wanted to write your compiler or linker,
> or pre/post-processor in Java?  The MBS right now is pretty much built around
> the model of spawning some external executable.  I think the choice of running
> an API-based tool that could be written in Java (or another language callable
> from Java) would be a very useful option as well.
The external tool concept is one of the key points in MBS, since an MBS build is essentially constructed by generating a set of command lines from the definitions of external tool commands and options.
Allowing API-based tools to be integrated seems to require a separate Build System. This is actually one additional argument for not combining the MBS schema with the core.

> d) When thinking about sub-projects within a project (library-like
> sub-folders), would it be easier to just support splitting this on project
> boundaries instead, and having referenced projects combined with Team Project
> Sets?  I mean, it is nice that you could be able to have a compiler/librarian
> create a .lib in a subfolder of your project and then link it into an
> executable as part of your compiler/linker toolchain - but is it so bad to just
> use two projects in this case?  If it is a CVS-fetching issue maybe some more
> support in the area of Team Project Sets would be enough and more beneficial to
> others as well?
That’s a good question. I think the mechanism of automatic project build settings adjustment we’re going to have should be sufficient for adjusting the proper build settings when one project references another. For example, if a project references another one that builds a library, the library path and library reference for that library should be added to the dependent project and propagated to the build system.
And yes, I agree that we should use eclipse-general functionality where possible.

> e) We find it useful to be able to build more than one build configuration at
> once as part of a regular build.  I think you'd still need the concept of a
> single build configuration to be used for include resolution, content assist,
> etc. - but would others find it useful to rebuild all configurations when a
> resource changes?  We do that now to compile multiple versions of code that
> produce slightly different behaviour and our developers seem to use it a lot.
> This implies you could have multiple "active" build configurations (where
> active means it will be built), with a single build configuration used for
> "context" purposes.
There is already a bugzilla entry regarding this, and it should not be difficult to implement this feature.
We may consider this in the design as well.

> I guess the big question for us is what is your target date for the new
> "project model"?  It is definitely something we would be interested in and
> would help out with but currently we are looking at avoiding the CDT once again
> because it lacks exactly what this new "project model" is trying to achieve. 
> It is kind of a catch-22 for us.
As I said, we are planning to introduce the new project model in 4.0, and I’m planning to post the design at the end of June or the beginning of July.
I would welcome any participation in the design review and implementation, since the amount of enhancements and new functionality we can deliver for 4.0 depends on the number of resources involved.

Regards,
Mikhail
Comment 32 Mark Melvin CLA 2006-06-23 11:07:59 EDT
Thanks for the detailed response, Mikhail.  I guess my only comment until your design doc comes out (and I am definitely interested in reviewing it and contributing feedback) is about making the MBS not part of a compiled-languages core.  I *think* I can agree with you on this, but I would qualify that by saying - I'm sure you're going to come up with some kick-butt managed build system.  I'm also quite certain that most people with other compiled languages are going to want a similar system, and if it is entirely bundled with the CDT and tailored for C/C++ support, people will either end up cannibalizing CDT like they are now, or rolling their own (like we are now).  I hope we can find some common bits that we could all leverage (and contribute to).

I personally think it may be of interest to look at leveraging bits of Ant here simply because they have already dealt with launching external processes from Java with command line arguments, and it lends itself to API-based stuff as well (being Java-based).  Additionally, you already have Ant support (non-UI anyway) in the core platform runtime.
Comment 33 Doug Schaefer CLA 2006-06-23 11:42:56 EDT
The good news is that I am working on C# support building on top of the CDT as it is in 3.1. It will use MBS for its build, and I plan on using the CDT's MI support for debug, plus the DOM and the editor. This will help us confirm, and fix where needed, that multi-language support is easy.
Comment 34 Mikhail Sennikovsky CLA 2006-06-26 09:46:25 EDT
(In reply to comment #32)
Hi Mark,

I think we should keep the CDT extensible. I’ll be really glad if the CDT Build System suits everyone’s needs, but I’m not sure we will be able to achieve that, at least because there are already several different custom Build Systems using CDT, and we cannot simply ignore them or force them to switch to the CDT build system. On the other hand, if you have any proposals or enhancements that would make the CDT Build System fulfill your needs, we welcome any proposals and contributions.
Speaking of the API-based tools and Ant-based builder facilities you are mentioning, I agree that the MBS Internal Builder or some other builders could be made capable of launching API-based tools. I also think that bringing some Ant support to the CDT (e.g. via an Ant Builder definition) is a good idea.
So we’ll be happy to review and discuss any proposals for enhancing the CDT Build System and include them in the design if needed.

Mikhail
Comment 35 Mikhail Sennikovsky CLA 2006-07-21 10:32:28 EDT
Created attachment 46637 [details]
Initial draft for the CDT New Project Model design

I'm attaching the initial draft of the CDT New Project Model design.
I hope the design can serve as the base and synchronization point for the different enhancements planned for CDT 4.0 in the areas of better multi-language support, better usability and integrity, etc.

Your comments are highly appreciated.

Thanks,
Mikhail
Comment 36 Markus Schorn CLA 2006-07-31 10:11:35 EDT
Trying to fully understand the proposal I have the following questions:

* Why is there a need for separating the settings into 'Build System Settings' and 'Project-specific Settings'? How are they different? Are 'Build System Settings' not project-specific?

* It is unclear to me who is allowed/expected to make changes to the various settings. Candidates are the Common UI, the Build System or any other 3rd party code. Who is allowed to override whose settings? For instance, if you change the build system (a 'Project-specific Setting'), do you expect the chosen Build System to overwrite all, some or none of the 'Build System Settings'?
Comment 37 Mikhail Sennikovsky CLA 2006-07-31 13:30:28 EDT
(In reply to comment #36)
Hi Markus,

Thanks for your questions. My answers are embedded below.

Thanks,
Mikhail

> Trying to fully understand the proposal I have the following questions:
> 
> * Why is there a need for separating the settings into 'Build System Settings'
> and 'Project-specific Settings'? How are they different? Are 'Build System
> Settings' not project-specific?
This is actually a logical distinction; both models can be treated as one model. The distinction is made to separate the settings handled/held by the Build System from the settings handled/held by the Core Project Settings Model functionality. The design assumes that the Build System would operate on the “pure” build settings (i.e. includes, libraries, macros), using them for the build and providing/consuming them to/from the core, and would not need the “project-specific” information such as reference information, export/import settings information, external SDK information, etc.
On the other hand, the Core would maintain the export/import information and the external SDK information, and would calculate and add, e.g., the includes contributed by imported projects or external SDKs to the Build System settings. See example #3 in my comment below.


> * It is unclear to me who is allowed/expected to make changes to the various
> settings. Candidates are the Common UI, the Build System or any other 3rd party
> code. Who is allowed to override whose settings? For instance, if you change
> the build system (a 'Project-specific setting), do you expect the Build System
> chosen to overwrite all, some or none of the 'Build System Settings'?
The Model Settings can be accessed/changed by various types of clients:

1. UI can access/modify almost any kind of build settings, e.g.
  a. language settings (“content type<->language” associations)
  b. Includes/macros/libraries settings
  c. Build settings (builder directory, Environment settings, etc.)
The idea is to have the common UI settings that could be used for any type of build system and tool chain integration that would allow a common user experience in adjusting those settings.
Note that as opposed to the current Path Entries framework, the settings added/removed/modified to the Project Settings Model will be propagated to the Build System and thus will be always in synch with the actual Build System settings, e.g. in case of using the Managed Build, modifying includes settings will actually modify the tool INCLUDE option value.

2. Settings can be accessed/modified programmatically. There were several requests raised on the cdt-dev list recently regarding the ability to easily modify project includes, macros, and libraries settings without having to go into the Build System details, and to have those settings propagated and properly applied to the Build System being used.

3. Core Project Settings Model functionality may also use the Build System Settings API for automatic adjusting the project Build System settings, e.g. in case some project is set as a prerequisite of another project, the Core Project Settings Model framework would automatically add the settings exported by the dependency project to the Build Settings of the dependent project.

As for changing the Build System, I think this would typically be a read-only setting that would maintain the information on which CConfigurationDataProvider (Build System) extension is to be used for the given configuration. It is not planned to provide any UI for this; I expect it would be up to the Build System integrator to decide how to handle the Build System change and to implement UI for performing it (e.g. by implementing some Wizard).
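The propagation described in points 1-3 above can be illustrated with a minimal listener sketch. This is not CDT API; the class and method names are hypothetical, and it only models the idea that a change made through the common settings model is immediately pushed to the build system (e.g. updating a tool's INCLUDE option) so the two stay in synch.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch (not CDT API): the settings model notifies the
 * build system of every change, keeping both sides in synch.
 */
public class SettingsModelSketch {
    /** A build system reacts to changes, e.g. by updating its INCLUDE option. */
    public interface BuildSystemListener {
        void includePathsChanged(List<String> includes);
    }

    private final List<String> includes = new ArrayList<>();
    private final List<BuildSystemListener> listeners = new ArrayList<>();

    public void addListener(BuildSystemListener l) {
        listeners.add(l);
    }

    /** Adds an include path and immediately propagates the change. */
    public void addIncludePath(String path) {
        includes.add(path);
        for (BuildSystemListener l : listeners) {
            l.includePathsChanged(new ArrayList<>(includes));
        }
    }
}
```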

Mikhail
Comment 38 Markus Schorn CLA 2006-08-01 10:46:45 EDT
Thanks for your explanations. Still I am a little worried about situations where multiple sources try to modify the very same build setting and thus will have problems respecting what was changed by other sources.
One example for this is the propagation of settings for referenced projects. I think we have to handle changes to the exported settings in the referencing project and also the removal of such a reference. In both cases it will not be obvious how the affected settings have to be modified because they are composed of stuff that was contributed via the propagation and other stuff that came from the project itself. How can you distinguish those?

Another question: The proposal mentions referenced configurations, what is their semantics? When do I have to reference a configuration?

Persisting the settings: I think it is not possible to store all project-settings in the .cproject file. If you are working in a team, this file will be put in a repository and shall only contain the settings that do not have to be changed by team members. The active configuration may frequently be changed by a user and thus should be stored somewhere else.

Also for the reason of sharing the configuration in a team, it is necessary to allow variables in some (which?) settings. At least this will be the case for include paths. The mechanism for resolving variables should be discussed, and a method that performs the expansion should probably be provided by the CDT-Core rather than having all clients trying to expand settings with different algorithms. When allowing the use of environment variables it has to be clear whether this is the environment of the IDE or the one specified in the CBuilderSetting. In the first case changing the value of the variable is not convenient, because you are required to take action outside of the IDE + restart the IDE. In the latter case, the environment is stored in the .cproject file, thus the variables will be the same for all users again :-(. 
So I think it is not sufficient to use environment variables for this purpose, the concept of path variables used for linked resources should probably be incorporated. 


Comment 39 Mikhail Sennikovsky CLA 2006-08-02 07:34:11 EDT
(In reply to comment #38)
Hi Markus,

My answers are embedded below.

Thanks,
Mikhail

> Thanks for your explanations. Still I am a little worried about situations
> where multiple sources try to modify the very same build setting and thus will
> have problems respecting what was changed by other sources.
> One example for this is the propagation of settings for referenced projects. I
> think we have to handle changes to the exported settings in the referencing
> project and also the removal of such a reference. In both cases it will not be
> obvious how the affected settings have to be modified because they are composed
> of stuff that was contributed via the propagation and other stuff that came
> from the project itself. How can you distinguish those?
That’s a good question.
I think we should stick to the approach of having the referenced project information (i.e. imported settings) handled by the core, not the Build System, because the mechanism of handling those settings (e.g. updating build settings once the referenced project info changes) seems to be Build System-neutral. I also think we should not overcomplicate Build System integration by requiring it to implement referenced-project handling.
The following logic of handling the referenced project settings could be used:
1. The project reference mechanism implemented in the Core would collect and operate on the complete set of imported settings.
2. Once the set of imported settings changes, the change will be propagated to the Build System. Once propagated, the settings are treated as pure Build System settings, i.e. the Build System is responsible for holding/storing them.
3. The “import” mechanism will internally implement the logic deciding whether a setting is to be added or removed when the imported settings set changes:
 - If a newly imported setting (e.g. a library path) is already contained in the Build System settings, it will not be removed from the Build System settings when it is later removed from the imported settings set.
 - If a newly imported setting is not already contained in the Build System settings, it will be removed from the Build System settings when it is removed from the imported settings set.
 - Settings that were added to the Build System settings via the import mechanism can still be explicitly removed from the Build System settings.
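The import rules above boil down to one invariant: the effective build settings are the project's own settings plus whatever is currently imported, so withdrawing an import removes a setting only if the project did not hold it independently. A minimal sketch (hypothetical names, not CDT API):

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Hypothetical sketch (not CDT API) of the import logic described above.
 * A setting survives the removal of an import only if the project holds
 * it in its own right.
 */
public class ImportedSettingsMerge {

    /** Effective build settings = own settings plus currently imported settings. */
    public static Set<String> effectiveSettings(Set<String> ownSettings,
                                                Set<String> importedSettings) {
        Set<String> result = new HashSet<>(ownSettings);
        result.addAll(importedSettings);
        return result;
    }
}
```

For example, if the project itself already carries the library path "-L/own" and a referenced project exports "-L/own" and "-L/dep", removing the reference leaves "-L/own" in place and removes only "-L/dep".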

> Another question: The proposal mentions referenced configurations, what is
> their semantics? When do I have to reference a configuration?
Since we are now going to have per-configuration settings rather than project-wide settings, I think it makes sense to allow configuration references in addition to project references.
This may be needed, e.g., for cross-platform development: if projects “a” and “b” each contain configuration1 used to build for x86 and configuration2 used to build for ia64, the user might want configuration1 of project “a” to reference exactly configuration1 of project “b”, and configuration2 of project “a” to reference configuration2 of project “b”.
That is why I think we should allow referencing a particular configuration of a given project. It will also be possible to reference a project; a project reference means that the active configuration of that project is being referenced.
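The two reference flavors from the cross-platform example above can be sketched as a tiny data model. All names here are illustrative, not CDT API: a reference carries a project name and an optional configuration name, and a project-only reference falls back to the referenced project's active configuration.

```java
import java.util.Map;

/**
 * Hypothetical sketch (not CDT API) of a configuration reference: either
 * a specific configuration of a project, or the project itself, in which
 * case the reference resolves to the project's active configuration.
 */
public class ConfigReference {
    private final String projectName;
    private final String configName; // null means "reference the active configuration"

    public ConfigReference(String projectName, String configName) {
        this.projectName = projectName;
        this.configName = configName;
    }

    /** Resolves to "project/configuration", falling back to the active configuration. */
    public String resolve(Map<String, String> activeConfigByProject) {
        String cfg = (configName != null) ? configName : activeConfigByProject.get(projectName);
        return projectName + "/" + cfg;
    }
}
```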

> Persisting the settings: I think it is not possible to store all
> project-settings in the .cproject file. If you are working in a team, this file
> will be put in a repository and shall only contain the settings that do not
> have to be changed by team members. The active configuration may frequently be
> changed by a user and thus should be stored somewhere else.
I agree. What I actually meant is that all project-global settings (i.e. settings that are not to be changed by team members) should be stored in a single .cproject file, instead of the separate .cdtproject and .cdtbuild files we have now.
As for the active configuration setting, I think it should be persisted in the same way as done in MBS now, i.e. using a project persistent property.
 
> Also for the reason of sharing the configuration in a team, it is necessary to
> allow variables in some (which?) settings. At least this will be the case for
> include paths. The mechanism for resolving variables should be discussed and a
> method that performs the expansion should probably be provided by the CDT-Core
> rather than having all clients trying to expand settings with different
> algorithms. When allowing the use of environment variables it has to be clear
> whether this is the environment of the IDE or the one specified in the
> CBuilderSetting. In the first case changing the value of the variable is not
> convenient, because you are required to take action outside of the IDE +
> restart the IDE. In the later case, the environment is stored in the .cproject
> file thus the variables will be the same for all users again :-(. 
> So I think it is not sufficient to use environment variables for this purpose,
> the concept of path variables used for linked resources should probably be
> incorporated. 
The design proposes to introduce the notion of the Build Variables (Macros) to the core.
The Build Variables (Macros) mechanism is currently implemented in MBS and allows defining and using already-defined build variables (macros) in all options/settings that accept strings. The variables (macros) can be referenced as “${Macro_Name}”.
The Managed Build System currently pre-defines a set of useful macros representing, e.g. configuration name, input/output paths, etc. and also allows tool-integrators and users contribute to the set of Build Variables (macros).
The logic of resolving the Build Variables (Macros) is implemented in MBS.
The design proposes to move the Build Variables (Macros) functionality to the core and allow Build Variables to be used CDT-wide.
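The “${Macro_Name}” expansion described above can be sketched in a few lines. This is only an illustration of the substitution idea, not the MBS implementation (which also handles macro contributions, contexts, etc.); unknown macros are left unexpanded here.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Minimal sketch of ${Macro_Name} expansion; not the MBS implementation.
 * Unknown macros are left in place unchanged.
 */
public class BuildMacroResolver {
    private static final Pattern MACRO = Pattern.compile("\\$\\{([^}]+)\\}");

    /** Replaces each ${name} occurrence with its value from the macros map. */
    public static String resolve(String input, Map<String, String> macros) {
        Matcher m = MACRO.matcher(input);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = macros.get(m.group(1));
            // Keep the reference as-is when the macro is undefined.
            m.appendReplacement(sb,
                Matcher.quoteReplacement(value != null ? value : m.group()));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```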

Mikhail
Comment 40 Andrew Ferguson CLA 2006-08-08 11:23:12 EDT
hi,

 I have a couple of questions about the new model I was hoping someone could help with

* I'm not sure I'm clear on the relation between IConfiguration and ICConfiguration. Will IConfiguration objects still exist at run-time, but only as a feature of the CDT Build System and have pair-wise equivalent ICConfiguration objects?

i.e. would you ever write code something along the lines of..

CDTBuildSystem bs = ...;
ICConfiguration coreConfig = getProject().getCConfiguration("org.foo.bar.config1");
IConfiguration bsConfig = bs.getConfiguration(coreConfig);

* Will ICConfiguration elements have persistable unique ID's like MBS model elements?

* I'm confused by the passage:

"NOTE: Generally, the CLanguageSetting element presented above is the same as MBS Tool element, and we can actually name it as CToolDescription or CTool. The reason why it is named as CLanguageSetting is because the Project Settings Model represents the build system settings with no assumption regarding how the project built, so generally no tool can stand behind the language description. This is why this element is called CLanguageSetting since it contains the language-specific information applicable to some file type(s)."

does this mean that the CDT Build System will surface information from the ITool model element at this point in the Common Build System settings? i.e. would CLanguageSetting/CToolDescription/CTool approximate information in ITool or literally be the same? I had understood that the buildDefinitions model itself would be only part of the CDT Build System (i.e. not available with other vendor's build systems), but that a subset of information in it would be accessible via the core models?

any help appreciated,
thanks,
Andrew
Comment 41 Mikhail Sennikovsky CLA 2006-08-08 14:43:13 EDT
(In reply to comment #40)
Hi Andrew,

Please see my answers embedded below.

Thanks,
Mikhail

> * I'm not sure I'm clear on the relation between IConfiguration and
> ICConfiguration. Will IConfiguration objects still exist at run-time, but only
> as a feature of the CDT Build System and have pair-wise equivalent
> ICConfiguration objects?
Yes, the IConfiguration elements will exist within the CDT Build System and will maintain the CDT Build System settings for the associated Core ICConfigurationDescription.
Note: according to the design, the ICConfiguration element is a new element of the CModel. We haven’t yet come to a final agreement on how the Configuration concept will be presented in the CModel, so I’m not sure whether or not we will have the ICConfiguration element in the CModel.
Generally, the Project Settings Model API (CProjectDescription, CConfigurationDescription, etc.) should be used for accessing the configuration settings from the core.

> i.e. would you ever write code something along the lines of..
> 
> CDTBuildSystem bs = ...;
> ICConfiguration coreConfig =
> getProject().getCConfiguration("org.foo.bar.config1");
> IConfiguration bsConfig = bs.getConfiguration(coreConfig);
As mentioned in the design, there would be two possible ways of accessing/modifying the project Build System settings:
From the Core perspective, the Project Settings Model (CProjectDescription) mechanism would be used.
From the CDT Build System perspective, the Build System API would be used. The CDT Build System will have a Build System Manager (the analog of the ManagedBuildManager currently present in MBS) that would be used for accessing the Build System information, including the IConfiguration information.
The Manager will provide API for querying the IConfiguration given the Core ICConfigurationDescription object, as well as for accessing an IConfiguration given a configuration name or configuration ID.

> * Will ICConfiguration elements have persistable unique ID's like MBS model
> elements?
I agree that we may need to maintain the configuration ID in the Core. I will add the ID info to the ICConfigurationDescription.

> * I'm confused by the passage:
> 
> "NOTE: Generally, the CLanguageSetting element presented above is the same as
> MBS Tool element, and we can actually name it as CToolDescription or CTool. The
> reason why it is named as CLanguageSetting is because the Project Settings
> Model represents the build system settings with no assumption regarding how the
> project built, so generally no tool can stand behind the language description.
> This is why this element is called CLanguageSetting since it contains the
> language-specific information applicable to some file type(s)."
> 
> does this mean that the CDT Build System will surface information from the
> ITool model element at this point in the Common Build System settings? i.e.
> would CLanguageSetting/CToolDescription/CTool approximate information in ITool
> or literally be the same?
The design note quoted above seems to be deprecated.
Actually, each ITool element of the CDT Build System may be associated with more than one CLanguageSetting element in the Core Project Settings Model.
E.g. a tool may accept several types of files of different languages, i.e. the tool definition may contain several InputTypes representing different languages.
In this case there will be one CLanguageSetting for each of the tool’s InputTypes representing a specific language.
I will correct that note in the design.

> I had understood that the buildDefinitions model
> itself would be only part of the CDT Build System (i.e. not available with
> other vendor's build systems), but that a subset of information in it would be
> accessible via the core models?
Yes, you are right.

Thanks,
Mikhail
Comment 42 Mikhail Sennikovsky CLA 2006-09-15 17:36:53 EDT
Created attachment 50311 [details]
CDT New Project Model UI design

Please find the CDT New Project Model UI design attached.

Your comments are highly appreciated.

I’m going to give a presentation on the New Project Model UI at the summit, so we should be able to discuss it there.

Mikhail
Comment 43 Dobrin Alexiev CLA 2006-09-19 16:19:50 EDT
The tool-chain selection should be hierarchical (a tree view) instead of a flat list.  For example, I would like to have a TI tool chain which expands to show all the processors that we support (C6000, C5000, C2000, ARM7, etc...).  Each one of these could be expanded to show the different versions of the tools that the user has installed on his machine.

To be clear, instead of 
GNU Compiler
Intel Compiler
Intel Compiler 0.9.1
TI C6000 ver 4.5
TI C6000 ver 5.0

you would have 
+ GNU Compiler
+ Intel Compiler
  + 1.0
  + 0.9.1
+ TI
  + C6000
    + 5.0
    + 4.5
etc...
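The hierarchy sketched above maps naturally onto a simple tree model that a tree viewer could consume. The following is a hypothetical illustration (not CDT or JFace API) that renders one '+' level per depth, matching the example:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch (not CDT API) of a hierarchical tool-chain
 * presentation: vendor -> family -> version nodes.
 */
public class ToolChainNode {
    private final String label;
    private final List<ToolChainNode> children = new ArrayList<>();

    public ToolChainNode(String label) {
        this.label = label;
    }

    /** Adds a child node and returns it, so levels can be chained. */
    public ToolChainNode add(String childLabel) {
        ToolChainNode child = new ToolChainNode(childLabel);
        children.add(child);
        return child;
    }

    /** Renders the subtree, indenting two spaces per depth level. */
    public String render(int depth) {
        StringBuilder sb = new StringBuilder();
        sb.append("  ".repeat(depth)).append("+ ").append(label).append('\n');
        for (ToolChainNode c : children) {
            sb.append(c.render(depth + 1));
        }
        return sb.toString();
    }
}
```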

Comment 44 Mikhail Sennikovsky CLA 2007-01-23 14:21:57 EST
Created attachment 57369 [details]
Initial draft patch for the New Project Model functionality

Hi all,

Please find the initial draft patch for the New Project Model functionality.

I'm going to post short descriptions for the New UI and Core functionality in the next comments.

Regards,
Mikhail
Comment 45 Mikhail Sennikovsky CLA 2007-01-23 14:23:01 EST
Created attachment 57370 [details]
Comments for the core part of the New Project Model patch
Comment 46 Mikhail Sennikovsky CLA 2007-01-23 14:23:38 EST
Created attachment 57371 [details]
Comments for the UI part of the New Project Model patch
Comment 47 David Inglis CLA 2007-01-25 12:30:32 EST
(In reply to comment #44)
> Created an attachment (id=57369) [details]
> Initial draft patch for the New Project Model functionality
> 
> Hi all,
> 
> Please find the initial draft patch for the New Project Model functionality.
> 
> I'm going to post short descriptions for the New UI and Core functionality in
> the next comments.
> 
> Regards,
> Mikhail
> 

Could you attach a new draft patch? It appears cdt.ui has since changed, preventing the patch from applying cleanly.

Thanks
Comment 48 Anton Leherbauer CLA 2007-01-26 02:33:52 EST
I could apply it with a fuzz factor of 76.
Comment 49 David Inglis CLA 2007-01-26 08:31:54 EST
(In reply to comment #48)
> I could apply with a fuzz factor of 76.
> 

That did the trick, Thanks
Comment 50 David Inglis CLA 2007-01-26 12:06:08 EST
Ok, I applied the patch, and right off (since it broke some API we use) I noticed that the IMakeTarget API and its friends (a manager and build-info related APIs) have moved into the managed builder plug-in.  So I'm now a little confused as to what the make plug-ins are for, since functionality that was purely Make-builder related is now in multiple plug-ins. Is it the intent to move all the code in the make plug-in to the managed builder?  If so, why? I thought the separation of build systems at a plug-in level was a good thing, so that ISVs can easily build custom CDT-based IDEs, i.e. Make-style builders and the internal builder.
Comment 51 Mikhail Sennikovsky CLA 2007-01-26 13:02:30 EST
(In reply to comment #50)
Hi Dave,

Thanks for the comments. See my response embedded below.

Thanks,
Mikhail

> Ok, I applied the patch and right off (since it broke some API we use) I
> noticed that the IMakeTarget API and its friends (a manager and build info
> related APIs) have moved into the manage builder plug-in.  So I'm now a little
> confused as to what the make plug-ins are for since functionality of what was
> purely Make builder related is now in multiple plug-ins. Is it the intent to
> move all the code in the make plug-in to the managed builder?  If so, why?
Yes, the intention was to create one Build System that would incorporate the features of the current Standard and Managed make builders.
Here are the main advantages of this approach:
1. Having one consistent way of tool/tool-chain integration instead of separate ones for Standard and Managed make
2. Having one common programming and UI interface
3. Avoiding the maintenance of two separate “branches” of build logic that can in most cases be generalized
4. Extending the current standard make with the tool-chain and configuration concepts that allow grouping and automatic adjustment of the project settings
5. Extending the current managed make with the IMakeBuilderInfo mechanism, which could be used, e.g., for performing “customized” builds (i.e. builds that use build settings other than those defined in the Active configuration), etc.

The intention was to create this build system based upon the current managedbuilder plug-in and then restructure the sources as needed, e.g. rename “*.managedbuilder.*” to something more neutral such as “*.build.*”, restructure the sources across several plug-ins, etc. Please see my comment below regarding how I think we could restructure the sources across plug-ins.

> As I
> though the separation of build system  at a plug-in level was a good thing so
> that ISV's can build custom CDT based IDEs easily ie. Make style builders and
> the internal builder.
I agree. I think we could use the following structure of build system plug-ins:
1. The Build System core (org.eclipse.cdt.build.core and ui plug-ins). These plug-ins contain the “core” of the build system, i.e. the Build Settings Model (the current MBS model: I[Managed]BuildInfo, IConfiguration, IToolChain, ITool and friends), the scanner discovery profile mechanism, and the UI for this functionality. They would depend on/use the org.eclipse.cdt.core and ui plug-ins.
2. Gnu Make functionality (org.eclipse.cdt.make.core and ui) plug-ins. These would include all Gnu make-related functionality (makefile parser, makefile editor, IMakeTarget API and UI, Gnu Makefile generator, etc.) and would depend on/use the build.core and build.ui plug-ins.
3. Gnu tool-chain plug-in (the current org.eclipse.cdt.managedbuilder.gnu.ui plug-in), which would contribute the gnu tool-chain-based project types and information and would depend on/use build.core/ui and make.core/ui.
What do you think?

Thanks,
Mikhail 
Comment 52 Mikhail Sennikovsky CLA 2007-01-26 13:29:38 EST
Created attachment 57613 [details]
New Project Model patch update

(In reply to comment #47)
> Could you attach a new draft patch, it appears cdt.ui has since changed
> preventing the patch from applying cleanly.
> 
> Thanks
I'm attaching the patch against the latest HEAD.
The patch also contains some updates and bug-fixes.

Mikhail
Comment 53 Mikhail Sennikovsky CLA 2007-01-26 13:48:35 EST
(In reply to comment #52)
> Created an attachment (id=57613) [details]
> New Project Model patch update
> 
> I'm attaching the patch against the latest HEAD.
> The patch also contains some updates and bug-fixes.
> 
> Mikhail
The patch was made against CVS HEAD on 26 Jan 2007 18:00:18 CVS server time (if I understand the CVS log correctly).
If there are conflicts when applying the patch, you can try getting the sources from CVS by date tag and applying the patch to those sources.
To get the sources by time tag:
1. select "Check Out As.." in the context menu for the modules/projects in the "CVS Repositories" view
2. Click "Next"
3. Click "Add Date" on the "Select Tag" page, select date 1/26/2007 and time 6:00:18 PM and press "Ok"
4. Click "Finish"

Mikhail
Comment 54 David Inglis CLA 2007-01-26 15:09:58 EST
(In reply to comment #51)
> I agree. I think we could make the following structure of build system
> plug-ins:
> 1. The Build System core (org.eclipse.cdt.build.core and ui plug-ins) the
> plug-ins contain the “core” of the build system, i.e.Build Settings Model
> (current MBS model, i.e. I[Managed]BuildInfo, IConfiguration, IToolChain, ITool
> and friends), scanner discovery profile mechanism, and UI interface for this
> functionality. The plug-ins would depend on/use the org.eclipse.cdt.core and ui
> plug-ins
> 2. Gnu Make functionality (org.eclipse.cdt.make.core and ui) plug-ins. That
> would include all Gnu make-related functionality (makefile parser, makefile
> editor, IMakeTarget API anf UI, Gnu Makfile generator, etc.). The plug-ins
> would depend on/use the build.core and build.ui plug-ins.
> 3. Gnu tool-chain plug-in (the current org.eclipse.cdt.managedbuilder.gnu.ui
> plug-in) that would contribute the gnu tool-chain-based project types and
> information and would depend on/use build.core/ui and make.core/ui
> What do you think?
> 
This sounds good, but with the current patch that does not look like what's happening (i.e. IMakeTarget was moved out of make.core). Also, if we are moving APIs, I think it's best to highlight these and make sure they are really what we want. For example, IMakeCommonBuildInfo was an API for driving the make builder, but it now has methods for controlling parallel builds, which is a property of the internal builder, not necessarily the external make builder (though it could be controlled via command-line options if the user knows their make supports it).

Anyway, I'm saying this because I (and I'm sure others) would like to see our APIs settle down and become more future-proof. The APIs that are moving have already gone through one iteration: they used to have a bunch of get/set methods for controlling the build, and we moved away from that to a pair of get/set attribute methods with constants, so that implementers don't break when a new constant is added and can fail gracefully at runtime.
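The attribute-style pattern described here could look roughly like this (a sketch with illustrative names only, not the actual CDT interfaces): a single get/set pair keyed by string constants, so adding a new constant does not break existing implementers.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the attribute-based build-info pattern: one get/set pair keyed
// by constants instead of a dedicated method per setting. Hypothetical names.
class BuildInfo {
    public static final String ATTR_BUILD_COMMAND = "build.command";
    public static final String ATTR_STOP_ON_ERROR = "stop.on.error";

    private final Map<String, String> attributes = new HashMap<>();

    public void setBuildAttribute(String name, String value) {
        attributes.put(name, value);
    }

    public String getBuildAttribute(String name, String defaultValue) {
        // Unknown constants simply fall back to the default at runtime.
        return attributes.getOrDefault(name, defaultValue);
    }
}
```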

Sorry if I'm focusing only on the make.core APIs, but since this breaks our plug-ins I want to make sure I understand the breaking changes and what I have to do to move forward when the new project model changes start coming in. I will be going over the other changes as time goes on and will hopefully provide you with some feedback, but since there are lots of changes it may take a while...

Thanks
Comment 55 Mikhail Sennikovsky CLA 2007-01-29 06:03:10 EST
(In reply to comment #54)
Hi Dave,

Thanks for your feedback. Please see my answers embedded below.

Thanks,
Mikhail

> This sounds good but with the current patch it does not look like thats whats
> happening (ie. IMakeTarget was moved from make.core).
I agree with moving IMakeTarget back to make.core. I’ll do that right after removing the dependency of managedbuilder.core (build.core) on make.core (i.e. migrating the scanner discovery mechanism to managedbuilder.core (build.core), which is almost done in my workspace currently).
Initially I moved the IMakeTarget API to managedbuilder.core (build.core) because this API depends on the build.core API, whereas currently managedbuilder.core (build.core) depends on make.core, which seems incorrect to me.

>  Also if we are moving
> APIs I think its best to highlight these and make sure they are really what we
> want. For example IMakeCommonBuildInfo was an API for driving the makebuilder
> but now has methods for controlling parallel builds which is a property of the
> internal builder, not necessarily the external make builder (but could be
> controlled via command line options if the user knows their make supports it).
The intention behind moving the IMakeCommonBuildInfo API to managedbuilder.core (build.core) is that we want the CDT Build System builder to be driven by an IMakeCommonBuildInfo-like mechanism. The mechanism is suitable for both “standard” and “managed” builds and could be used, e.g., for:
1. Building non-active configurations
2. Building a set of configurations
3. Building specified make targets
4. Performing any other “customized” builds (i.e. builds that use settings that differ from those of the currently active configuration)
5. Invoking the current MBS “Build selected files” functionality as a customized build, instead of performing the “selected files” build outside of the build context (where it cannot be used, e.g., with autobuild enabled)

The idea was to generalize the IMakeCommonBuildInfo mechanism to be builder-neutral. For example, since the tool-chain/tool/builder concept will now be CDT Build System-wide, we could have the “ignore errors” switch customized/specified with the builder definition (the Builder element of the buildDefinitions extension point) instead of being hard-coded to “-k” in the MakeBuilder as is done in make.core now. The Builder may or may not support the “ignore errors” functionality (depending on whether the “ignore errors” command is specified with the Builder definition), and thus IMakeCommonBuildInfo has the “supportsStopOnError(boolean)” method to report that. In case the builder does not support the “ignore errors” functionality, the UI for this setting will be disabled and calling setStopOnError() will do nothing.
The way the CDT Build System builder mechanism works now is that the IBuilder instance can be encoded and passed to the CDT Build System builder via its Map arguments. The CDT Build System builder in turn “decodes” the Builder and uses its settings for building. In case no builder is specified, the builder of the currently active configuration is used.
The IBuilder interface is currently sub-classed from IMakeBuilderInfo because this seemed to provide the best backward compatibility with the IMakeBuilderInfo clients. We may consider restructuring the interface hierarchy if you think it's necessary.
The “Parallel Build” setting actually uses the same mechanism as the “ignore errors” one described above. Since the set of builder options is usually similar and quite restricted across different builders, we thought we could just add the several most common of them to IBuilder, instead of having some generalized, builder-neutral mechanism like tool options. This currently allows having a common Builder Settings UI (see the Builder Settings tab of the “Build Settings” property page).
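The encode-into-Map-arguments mechanism described above can be sketched like this (a simplified model with hypothetical names and keys, not the real IBuilder API): builder settings are flattened into the builder's String map and decoded again before the build runs.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of passing builder settings via Map arguments, as
// described above. Field and key names are invented for illustration.
class BuilderArgs {
    boolean stopOnError = true;
    int parallelJobs = 1;

    // Encode the settings into the String map handed to the builder.
    Map<String, String> encode() {
        Map<String, String> args = new HashMap<>();
        args.put("stopOnError", Boolean.toString(stopOnError));
        args.put("parallelJobs", Integer.toString(parallelJobs));
        return args;
    }

    // Decode on the builder side; missing keys fall back to defaults
    // (standing in for "use the active configuration's builder").
    static BuilderArgs decode(Map<String, String> args) {
        BuilderArgs b = new BuilderArgs();
        b.stopOnError = Boolean.parseBoolean(args.getOrDefault("stopOnError", "true"));
        b.parallelJobs = Integer.parseInt(args.getOrDefault("parallelJobs", "1"));
        return b;
    }
}
```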

> 
> Anyways I'm say this cause I (and I'm sure others) would like to see our APIs
> settle down and be more future proof, these APIs that are moving have already
> gone though a iteration once where it used to have a bunch of get/set methods
> for controlling the build and we moved away from that to a pair of get/set
> attribute methods with constants, that way implementers don't break when a new
> constant is added and can fail gracefully at runtime.
We are not going to break the get/setBuildAttribute API and it is supported now, although most new UI and Core code currently uses direct get/set methods rather than build attributes.
Actually, I didn’t think IMakeCommonBuildInfo was intended to have custom implementations. Do you think that will be necessary once the IMakeCommonBuildInfo is created based on the Builder definition (of the buildDefinitions extension point)? When/how is it going to be used?
If we decide that supporting custom implementations it would be a hard change to enable this.

Thanks,
Mikhail
Comment 56 Mikhail Sennikovsky CLA 2007-01-29 09:23:41 EST
(In reply to comment #55)
Hi Dave,

> IMakeCommonBuilderInfo be created based on the Builder definition (of the
> buildDefinitions extension point)? When/how it is going to be used?
> If we decide that supporting custom implementations it would be a hard change
> to enable this.
Sorry, what I meant to say is that in case we decide that supporting custom builder info implementations is necessary, it would NOT be a hard change to enable this.

Once we come to the final agreement about the make builder info and IMakeTarget details I'll provide a new patch with the updated functionality.

Regards,
Mikhail
Comment 57 Doug Schaefer CLA 2007-01-29 09:52:17 EST
I guess one point I want to make, and I'm not sure whether you already handle this or not, is that there will be many builders besides standard and managed. At QNX we have our own builder. Other vendors have also recently mentioned to me that they have, or are even creating, their own builders.

Unfortunately, we haven't won the world over to managed make so we need to continue to support these other builders and maintain the functionality of the CDT core. At the moment, it appears the new model has created a lot of work for our builder, and my fear is that this will disrupt the others as well.
Comment 58 Mikhail Sennikovsky CLA 2007-01-29 10:11:44 EST
(In reply to comment #57)
Hi Doug,

Thanks for your comments.

> I guess one point I want to make, and I'm not sure whether you already handle
> this or not, is that there will be many builders besides standard and managed.
> At QNX we have our own builder. Other vendors have also recently mentioned to
> me they have, or are even creating their own builder.
I guess this is the answer to the question I raised some time ago regarding old API support (the PathEntry and COwner mechanisms).
The current patch does not contain this support, but it appears that we will have to keep the old API and extension mechanism for backward compatibility.
The current plan is to have the old mechanisms as wrappers around the new ones.
Clients/users of the old API will not, however, be able to take advantage of the New Project Model features, e.g. multi-language support (e.g. language-specific includes/macros settings) and configurations.

Regards,
Mikhail
Comment 59 Doug Schaefer CLA 2007-01-29 10:32:58 EST
> The current plan is to have old mechanisms as a wrappers around the new ones.
> Although clients/users of the old API will not be able to use advantages of the
> New Project Model, e.g. multi-language support (e.g. language specific
> includes/macros settings) and configurations. 

Excellent! Thanks Mikhail.
Comment 60 Andrew Ferguson CLA 2007-02-05 13:13:27 EST
hi,

 I've just been taking a look at the references between ICConfigurationDescriptions. I'm wondering how this interacts with the platform's idea of IProject-level references - is it a replacement? If so, would we want to (and be able to) disable the corresponding (Platform-contributed) Project References pane in the project properties UI?

thanks,
Andrew
Comment 61 Mikhail Sennikovsky CLA 2007-02-06 15:08:40 EST
(In reply to comment #60)
Hi Andrew,

>  I've just been taking a look at references between
> ICConfigurationDescription's. I'm wondering how this interacts with the
> platform's idea of IProject level references - is it a replacement? 
The configuration reference mechanism is not currently linked to the platform's project references mechanism.
The mechanism is not a complete replacement for project references. As I understand it, project references are used primarily to specify/define the project build order, while the configuration reference mechanism is intended to be used by the Core settings exporting/importing mechanism on the one hand and by the Build System during the build process on the other (e.g. the Managed Build System uses this information to refer to the configuration dependencies in the generated makefile).
It might be reasonable to implement “synchronization” of the CDT configuration references and the platform project references, e.g. updating the project references when the configuration reference settings for the currently active configuration change, so that projects are built in the order dictated by their inter-dependencies; I will add this functionality to the config reference mechanism. Note that if we have a config-settings-to-project-settings synchronization mechanism, we may have a problem synchronizing UI settings between the Project and Configuration pages, since they are completely independent (e.g. the user goes to the “Configuration References” page, changes settings there, then goes to the “Project References” page and sees no changes to the project references in it).
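The derivation step in such a synchronization could be sketched as follows (an illustrative model only, not the real CDT API; the "project/config" reference encoding is an assumption): collapse the active configuration's configuration references to the set of referenced projects.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: derive platform-level project references from the
// active configuration's configuration references, where each reference
// names a project and a configuration in it as "project/config".
class ReferenceSync {
    static Set<String> referencedProjects(Map<String, List<String>> configRefs,
                                          String activeConfig) {
        Set<String> projects = new LinkedHashSet<>();
        for (String ref : configRefs.getOrDefault(activeConfig, List.of())) {
            projects.add(ref.split("/")[0]); // keep only the project part
        }
        return projects;
    }
}
```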

> if so would
> we want/be-able to disable the corresponding (Platform contributed) Project
> References pane in the project properties ui?
It seems that we cannot disable the project references page, since it is contributed by the platform to all projects. I think we may keep the Project References page even if the configuration reference mechanism becomes a complete replacement for the project reference mechanism someday (e.g. as PDE does).

Thanks,
Mikhail
Comment 62 Mikhail Sennikovsky CLA 2007-02-06 16:37:34 EST
Created attachment 58383 [details]
New Project Model update v0.3

Hi all, 

I'm attaching the updated patch against the latest HEAD.

The patch contains updates based upon feedback from Dave Inglis, plus bug fixes and enhancements.
Note: I have not renamed the managedbuilder.core/ui plug-ins to build.core/ui, since at this stage this should provide better backward compatibility with the current managedbuilder integrations (see the comment on backward compatibility below).

I’ve also updated the buildDefinitions schema to reflect new elements added to the model.
Both standard and managed make projects of old style are supported currently and automatically converted to the new format.

I’m going to provide another updated patch by the end of this week.

Also here is some update on backward compatibility:

CDT core: we are going to make cdt.core fully backward compatible, i.e. the old API and old extensions will be supported and usable, so custom build system integrations should have no problems using the new core via the old interfaces.

CDT build: we can make the new build system fully backward compatible as well. The question is how deep do we want to go with this?
1. We can keep all extension points, i.e. although the scanner discovery mechanism (along with the discovery profile extension point) has now moved from make.core to managedbuilder.core (build.core), we can still have the scanner discovery extension point defined in make.core and make the scanner discovery mechanism work with extensions contributed to both extension points.
In case we are going to rename managedbuilder.core to build.core, we can still have a managedbuilder.core backward compatibility plug-in that would represent/mirror the managedbuilder.* extension points moved to the build.core/ui plug-ins.
2. We can also keep all the current public API (of both “managed” and “standard” builders) and implement it as wrappers around the new functionality, e.g. MakeCorePlugin can still have an API for creating the IMakeBuilderInfos that would actually wrap the corresponding functionality now presented in managedbuilder.core (build.core).
Also, in case we move the managedbuilder.core API to build.core, we can still have the managedbuilder.core public API presented in the managedbuilder.core backward compatibility plug-in and implement it as wrappers around the build.core API.
Thoughts?

Thanks,
Mikhail
Comment 63 Andrew Ferguson CLA 2007-02-07 10:02:17 EST
(In reply to comment #61)

hi Mikhail,

 thanks for the quick reply, I think I get it now.

I'd like to propose
	(a) attempt to remove the project references panel for CDT projects. I've tested out a patch to the org.eclipse.ui.ide plugin that enables this via setting a project persistent property.
	(b) unset all CDT project references. On active configuration change, or on configuration reference change via the properties UI, call IProjectDescription.setDynamicReferences* to set up the project references. That is, the project-level references would always represent the correct project build ordering with respect to the active configuration's configuration dependencies.

The reason I'm interested in making sure we're on the same page here is that the current IndexFactory implementation uses IProject-level references to build an index spanning several projects. If we can agree that IProject-level dependencies are information derived from the current active configuration, then I'd like to also propose (and implement..) moving the inter-project index model to use IConfigurationDescription dependencies rather than IProject dependencies - this would result in the same behaviour right now, but the API would be better suited should the index contain configuration-specific data in the future.

If you're happy, I'll raise a bug on the platform ui for the enhancement to conditionally disable the project references ui page.

thanks,
Andrew
*dynamic here means not written to project meta data as far as I can tell - i.e. it will not go into source control.
Comment 64 Mikhail Sennikovsky CLA 2007-02-07 11:59:49 EST
(In reply to comment #63)
Hi Andrew,

Please see my answers embedded below.

Thanks, 
Mikhail

> I'd like to propose
>         (a) attempt to remove the project references panel for CDT projects.
> I've tested out a patch to the org.eclipse.ide.ui plugin that enables this via
> setting a project persistent property.
I’m OK with this. The only thing we should consider is referencing non-CDT projects: currently the configuration reference UI displays/operates on only CDT projects, which have the notion of a configuration, etc.
But in general users might want to set up references to some non-CDT projects. Do you think this could be needed for some reason?
We could allow the configuration referencing mechanism to refer to all projects, including non-CDT projects. The “configuration references” UI would then display non-CDT projects as well, and non-CDT project entries would not contain configuration children.
What do you think?

>         (b) unset all CDT project references. On active configuration change,
> or on configuration reference change via the properties UI, call
> IProjectDescription.setDynamicReferences* to setup project references. That is,
> the project level references would always represent the correct project build
> ordering with respect to the active configuration's configuration dependencies.
I’m OK with this.

> If you're happy, I'll raise a bug on the platform ui for the enhancement to
> conditionally disable the project references ui page.
Yes, please go ahead and raise a bug for this.

Thanks,
Mikhail
Comment 65 Andrew Ferguson CLA 2007-02-07 12:40:25 EST
(In reply to comment #64)

hi Mikhail,

> But generally users might want to setup references for some non-CDT projects.
> Do you think this could be needed for some reason?

It sounds like something someone might want to do (I can't think of a specific use right now though)

> We could allow the configuration referencing mechanism to refer to all projects
> including non-CDT projects as well. The “configuration references” UI will
> display non-CDT projects as well and non-CDT project entries will not contain
> configuration children.
> What do you think?

This makes sense to me, if it confuses users we could have an option in the ui to filter non-CDT projects.

In the opposite direction, I could see people having a Java project that depends on a CDT project to generate a native library used via JNI. As JDT is not IConfigurationDescription-aware, I guess that would have to mean it's dependent on the active configuration.

> Yes, please go ahead and raise a bug for this.

I've raised:
   https://bugs.eclipse.org/bugs/show_bug.cgi?id=173302

thanks,
Andrew
Comment 66 Doug Schaefer CLA 2007-02-07 14:56:14 EST
(In reply to comment #65)
> > But generally users might want to setup references for some non-CDT projects.
> > Do you think this could be needed for some reason?
> It sounds like something someone might want to do (I can't think of a specific
> use right now though)

Project references control build order. One example: a CDT project that uses javah to generate header files from Java class files. The JDT project needs to be built before the CDT project, and the project references help control that.
Comment 67 Walter Brunauer CLA 2007-02-08 03:45:30 EST
(In reply to comment #66)

> The project references controls build order. One example, a CDT project that
> uses javah to generate header files from Java class files. The JDT project
> needs to be built before the CDT project and the project references helps
> control that.

I could add other examples, but more generally speaking, I don't think hiding away or removing platform functionality would be a valid approach in general. And I guess others are depending on this (or other platform functionality) too, including me. Of course, if something is specific to or originates from CDT projects or the CDT build, that's a different story, but IMO it's not good practice to take control over things which CDT does not own, especially if it is known that clients may not use everything provided by CDT, but rely on the platform as well.
Comment 68 Andrew Ferguson CLA 2007-02-08 06:46:31 EST
I now see a problem in the logic of comment #63 (b)

The idea is that configuration-level references would be used to set project-level references, using the active configuration as a point to start calculating the order. This fails when you have active configurations set on different projects that would produce different orders. For example,

Project1            Project2
 ConfA   ---------->  ConfC
 ConfB   <----------  ConfD

where both ConfA, and ConfD are active configurations in their respective projects.

Although in this situation the user can't manually set an order that won't potentially fail either - it doesn't help choose which of ConfA or ConfD should be used to automatically set up the project-level references.

I can't see a way around this problem
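The problem above can be made concrete with a small sketch (illustrative code, not CDT API): the configuration-level graph in the example is acyclic, but collapsing it to the project level (Project1 <-> Project2) produces a cycle, so no single project build order exists.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Depth-first cycle detection over a reference graph. Used below to show
// that the ConfA->ConfC / ConfD->ConfB graph is acyclic while its
// project-level collapse is cyclic. Names follow the example above.
class DependencyGraph {
    static boolean hasCycle(Map<String, Set<String>> graph) {
        Set<String> done = new HashSet<>(), inProgress = new HashSet<>();
        for (String node : graph.keySet())
            if (visit(node, graph, done, inProgress)) return true;
        return false;
    }

    private static boolean visit(String node, Map<String, Set<String>> graph,
                                 Set<String> done, Set<String> inProgress) {
        if (done.contains(node)) return false;
        if (!inProgress.add(node)) return true; // back edge found: a cycle
        for (String next : graph.getOrDefault(node, Set.of()))
            if (visit(next, graph, done, inProgress)) return true;
        inProgress.remove(node);
        done.add(node);
        return false;
    }
}
```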
Comment 69 Mikhail Sennikovsky CLA 2007-02-08 08:29:01 EST
(In reply to comment #67)
Hi Walter and all,

> I could add other examples, but more generally spoken, I don't think hiding
> away or removing platform functionality in general would be a valid approach.
> And I guess others are depending on this (or other platform functionality), too
> (including me). Of course, if something is specific to or origins from CDT
> projects or CDT build, that's a different story, but IMO its not good practice
> to take control over things which CDT does not own, especially if it is known
> that clients may not use everything provided by CDT, but rely on the platform
> as well.
The interaction of the CDT and Platform project references settings is actually a very interesting question.
If you take a look at how this problem is handled in CDT now, you may find inconsistencies in the way it works.
Currently the Standard Make projects have two “Project references” pages: one contributed by the platform and another contributed by CDT (C/C++ Project Paths->Projects).
The contents of those pages are not synchronized with each other, although CDT updates the platform project references settings according to the CDT references settings, i.e. it adds referenced projects to the platform settings if they exist in the CDT settings. On the other hand, if you remove a project reference from the platform page, the change won’t be propagated to the CDT reference settings; more than that, the platform setting will be restored, since it remains in the CDT reference settings.
So in case we want to keep the link between the CDT and platform references settings, it might be good to have one UI element for viewing/editing those settings.
Another alternative that could be considered is not to link the CDT and platform project reference settings at all.

Mikhail 
Comment 70 Mikhail Sennikovsky CLA 2007-02-08 08:35:51 EST
(In reply to comment #68)
Hi Andrew,

> I now see a problem in the logic of comment #63 (b)
> 
> The idea is that configuration level references would be used to set project
> level references, using the active configuration as a point to start
> calculating the order. This fails when you have active configurations set on
> different projects, that would produce different orders. For example,
> 
> Project1            Project2
>  ConfA   ---------->  ConfC
>  ConfB   <----------  ConfD
> 
> where both ConfA, and ConfD are active configurations in their respective
> projects.
Do you think “cyclic” configuration/project dependencies could be useful/needed for some reason?
If we decide they are not necessary, we could simply disallow such configuration dependencies.

Mikhail
Comment 71 Walter Brunauer CLA 2007-02-08 08:43:43 EST
(In reply to comment #69 and comment #70)

Mikhail,

this is what I think would be ideal:

- project references should either be used from the platform or at least kept fully synchronized in both directions
- cyclic project references should not be allowed, because it would be much harder to manage these in different areas (e.g. to visualize logical project structures, on build runs, etc.)

my 2 € cents only,

Walter
Comment 72 Andrew Ferguson CLA 2007-02-08 09:16:18 EST
(In reply to comment #70 and #71)

[Walter]
> - cyclic project references should not be allowed, because it would be much
> harder to manage these in different areas (e.g. to visualize logical project
> structures, on build runs, etc.)

[Mikhail]
> Do you think the “cyclic” configuration/project dependencies could be
> useful/needed for some reason?

not at all :) - neither IProject- nor IConfigurationDescription-level logic currently prevents them from being created, so I was looking to preserve this.

If we can do without this, my instinct is still towards the single UI (for CDT projects) which controls both levels.
Comment 73 Mikhail Sennikovsky CLA 2007-02-09 04:42:45 EST
(In reply to comment #65)
Hi Andrew,

I took a look at your patch for disabling the platform references page in bug# 173302, and I'm not sure that this approach is suitable for our needs; in particular, when would we set the persistent property if we want to disable the page? As far as I know, project persistent property information is held outside the project directory, so this info will be lost if you import the project into a new workspace, e.g. via CVS. So we cannot simply set this property on project creation. We could implement a property update mechanism, although this still won’t handle all the cases, e.g. when the project is imported into the workspace while the cdt.core plug-in is not loaded. In this case the platform page will be visible until the core plug-in is started.

Mikhail
Comment 74 Andrew Ferguson CLA 2007-02-09 06:12:05 EST
(In reply to comment #73)
hi Mikhail,

 sorry, yes you are right. I'll take a look at this (I'll comment in 173302). I need to get patches in for the index-related work, so this might be delayed.

I've one more build scenario I'd like to check

Project1     Project2       Project3
  ConfA -----> ConfB
               ConfC ------> ConfD

where ConfA and ConfC are active. Would both ConfB and ConfC be built? (If ConfA depends on artifacts built in ConfB, then it would fail if only the active configuration is built).

thanks,
Andrew
Comment 75 Mikhail Sennikovsky CLA 2007-02-09 14:22:27 EST
Created attachment 58685 [details]
New Project Model update v0.4

Hi all,

I'm attaching the updated patch against the latest HEAD.
All major functionality is implemented and working.

We are working on the backward compatibility support and bug-fixing now. 

Mikhail
Comment 76 Anton Leherbauer CLA 2007-02-13 09:27:09 EST
Seems like the patch is not complete. I get build errors in the packages
org.eclipse.cdt.managedbuilder.internal.ui
org.eclipse.cdt.managedbuilder.ui.properties

The patch does not contain any diff for those packages, therefore I am assuming it is incomplete.
Comment 77 Mikhail Sennikovsky CLA 2007-02-13 11:26:38 EST
(In reply to comment #76)
> Seems like the patch is not complete. I get build errors in the packages
> org.eclipse.cdt.managedbuilder.internal.ui
> org.eclipse.cdt.managedbuilder.ui.properties
> 
> The patch does not contain any diff for those packages, therefore I am assuming
> it is incomplete.
Ok, I will update the patch later today or tomorrow morning.

Mikhail

Comment 78 Mikhail Sennikovsky CLA 2007-02-13 13:51:51 EST
Created attachment 58876 [details]
New Project Model patch against the latest HEAD

Hi Anton and all,

I'm attaching the patch against the latest HEAD. The patch seems to work fine for me. Please let me know in case there are still any problems with applying it.

Thanks,
Mikhail
Comment 79 Mikhail Sennikovsky CLA 2007-02-13 16:22:34 EST
Created attachment 58897 [details]
gif icon files

Since diff/patch do not seem to always handle binaries correctly, I'm attaching the zipped .gif files. The attached zip file contains gif files that are to be put in the managedbuilder.ui and cdt.ui projects in the same way as they are presented in the zip.

Mikhail
Comment 80 Mikhail Sennikovsky CLA 2007-02-13 16:36:44 EST
(In reply to comment #78)
> Created an attachment (id=58876) [details]
> New Project Model patch agains the latest HEAD
> 
> Hi Anton and all,
> 
> I'm attaching the patch against the latest HEAD. The patch seems to work fine
> for me. Please let me know in case there are still any problems with applying
> it.
Note: as a part of backward compatibility support implementation, the latest patch contains the ICDescriptor functionality updated to use the new ICProjectDescription framework. Since the updated functionality is not yet thoroughly tested, the patch may not be stable enough. We are planning to complete the main part of backward compatibility support implementation by the end of this week and provide an update with backward compatibility enabled.

Mikhail
Comment 81 Mikhail Sennikovsky CLA 2007-02-16 17:34:55 EST
Created attachment 59205 [details]
New Project Model update with Core backward compatibility support implemented

Hi All,

I'm attaching the New Project Model update with basic core backward compatibility support implemented. The main idea of core backward compatibility is that the PathEntry/COwner and new CProjectDescription extensions and API can be used on equal terms in CDT.
There is no API breakage, so all existing custom Build System integrations should keep working (of course, some more debugging of the backward compatibility functionality is needed to make it handle all the possible cases :). I did some testing using the old *.make.* and *.managedbuilder.* plug-ins (the old “standard” and “managed” make builders) with the new cdt.core and cdt.ui, and they seem to work with the new core.

Mikhail
Comment 82 Mikhail Sennikovsky CLA 2007-02-16 17:37:28 EST
Created attachment 59206 [details]
gif icon files to the latest patch

I'm also attaching the gifs.

The attached zip file contains gif files that are to be
put in the managedbuilder.ui and cdt.ui projects in the same way as they are
presented in the zip.

Mikhail
Comment 83 Mikhail Sennikovsky CLA 2007-02-19 08:01:02 EST
(In reply to comment #74)
Hi Andrew,

> I've one more build scenario I'd like to check
> 
> Project1     Project2       Project3
>   ConfA -----> ConfB
>                ConfC ------> ConfD
> 
> where ConfA and ConfC are active. Would both ConfB and ConfC be built? (If
> ConfA depends on artifacts built in ConfB, then it would fail if only the
> active configuration is built).
The mechanism/logic of handling such cases during the build is currently left entirely to the Build System, since the Eclipse builder framework knows nothing about configurations.
Within the CDT Build System it is currently assumed that the reference logic is expressed in the makefiles used: when makefile generation is on ("Managed" build), the Gnu makefile generator inserts the dependency information into the generated makefiles, i.e. it makes the build target of the current project/configuration depend on the build target of the referenced configuration. When makefile generation is off ("Standard" build), the dependency information is assumed to already be present in the makefiles being used.
We can easily enhance the build functionality and make the logic of handling such cases reside within the Build System builder.
I can see two possible approaches to handling builds of referenced non-active configurations:
1. When a project build is requested, the Build System builder detects the list of configurations of the current project that are referenced by other projects and builds all those configurations along with the active configuration. The non-active configuration builds are performed in "background" mode, i.e. the build console output is not displayed/parsed during the build. E.g. in your example above, when the Project2 build is requested, both ConfB and ConfC are built, with the ConfB build performed in "background" mode.
2. When a project build is requested, the Build System builder detects the list of configurations of other projects that are referenced by the configuration being built and builds all those configurations, in case they were not already built during the current build operation. The non-active configuration builds are performed in "background" mode, i.e. the build console output is not displayed/parsed during the build. E.g. in your example above, ConfB is built in "background" mode during the Project1 build.

Since the current CDT Build System builder is configuration-based internally and supports multi-configuration builds, it won't be hard to implement either of the above approaches.
I tend to think that approach #2 is better. With the first approach there might be cases when the non-active configuration build is not needed (e.g. when an Incremental build is performed on the project, we may not want to build its non-active configurations referenced by other projects, since those projects are not going to be built), and it might be difficult to distinguish which configurations to build in this case. E.g. in your example above, when an incremental build is requested for Project2, the ConfB build is not needed since Project1 is not going to be built; but when the Project2 build is caused by an incremental build request for Project1, ConfB of Project2 should be built.
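For illustration, approach #2 could be sketched roughly as follows in plain Java (the Configuration and ConfigurationBuilder classes are made up for this sketch; they are not CDT API): before a configuration is built, every configuration it references is built first, and a set of already-built configurations ensures each one is built at most once per build operation.

```java
import java.util.*;

// Rough sketch of approach #2 (illustrative names only, not CDT API):
// referenced configurations are built before the referencing one, and
// each configuration is built at most once per build operation.
class Configuration {
    final String name;
    final List<Configuration> references = new ArrayList<>();
    Configuration(String name) { this.name = name; }
}

class ConfigurationBuilder {
    final List<String> buildOrder = new ArrayList<>();
    private final Set<Configuration> built = new HashSet<>();

    void build(Configuration cfg) {
        if (!built.add(cfg))
            return; // already built during this build operation
        for (Configuration ref : cfg.references)
            build(ref); // "background" build of the referenced configuration
        buildOrder.add(cfg.name);
    }
}
```

With the example above, building Project1's ConfA would pull in ConfB, while an incremental build of Project2's active ConfC would only pull in ConfD, leaving ConfB untouched.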
What do you think?

Mikhail
Comment 84 Mikhail Sennikovsky CLA 2007-02-20 12:51:58 EST
The New Project Model functionality is now committed to the CVS repository.

Thanks,
Mikhail

Comment 85 David Inglis CLA 2007-02-20 14:26:05 EST
(In reply to comment #81)
> There is no API breakage so all existing custom Build System integrations
> should remain working (of course some more debugging of backward compatibility
> functionality is needed to make it handle all the possible cases :). I made
> some testing of using the old *.make.* and *.managedbuilder.* plug-ins (old
> “standard” and “managed” make builders) with the new cdt.core and cdt.ui and
> they seem to work with the new core.
> 
After adding new dependencies to the managedbuilder plugin and re-organizing imports (both of which I consider API breakage), I'm still missing the createBuildInfo API which used to be in MakeCorePlugin, MakeBuilder, and SettingsBlock.


Comment 86 Mikhail Sennikovsky CLA 2007-02-21 08:59:20 EST
(In reply to comment #85)
Hi Dave,

Currently we have backward compatibility implemented for the CDT core only (i.e. the cdt.core and cdt.ui plug-ins).
I raised the question of build system API backward compatibility some time ago (see Comment #62) but got no answer. That is why we have not concentrated on Build System API/extension backward compatibility so far.
The main breakage we currently have is with the make.core API, since some of its functionality has been moved to managedbuilder.core.
I'm going to work on build system backward compatibility next week. Please take a look at Comment #62 and provide your feedback on how deep the build system backward compatibility needs to be for you.

Mikhail
Comment 87 David Inglis CLA 2007-02-21 10:08:01 EST
Sorry, I read "CDT build: we can make the new build system fully backward compatible as well." but did not read into the detail of how that would be accomplished. Anyway, here are my comments on how standard make compatibility could be handled:

Leave the make builder (core and ui) as it is in 3.x and deprecate all the corresponding classes, which would indicate no future change to them. Then implement the new standard make build system parallel to it, with all new projects created using it. Then create a migration API (and UI actions) to take existing standard make projects and translate them to the new style. This removes the need for any "wrapper" classes and preserves compatibility. (Technically, make.core/ui could be used from 3.x as-is if core API compatibility has been preserved, and this would be a good test for that.)

This now allows existing integrations to continue to work without change (binary compatible), and with some kind of migration guide it's clear how one would integrate with the new build system; the standard make project can serve as an example of doing that migration (I always find code examples the best form of documentation).
Comment 88 Andrew Ferguson CLA 2007-02-21 13:04:34 EST
hi Mikhail,

(In reply to comment #83)
> What do you think?

I much prefer option 2 also. I think option 1 would probably often involve doing more work than the user is interested in (?)

thanks,
Andrew
Comment 89 Mikhail Sennikovsky CLA 2007-02-23 17:59:40 EST
(In reply to comment #87)
Hi Dave,

Sorry for not responding sooner.

That is an interesting approach. 
Here is what I think we can do with the current functionality:
 
> Leave the make builder (core and ui) as it is in 3.x and deprecate all the
> corresponding classes which would indicate no future change with them,
Revert the make.core and make.ui plug-ins to the state before the New Project Model commit, but disable the old New Standard Make project wizards.

> then
> implement the new standard make build systems parallel to it with all new
> projects created using it, 
There is no need to create a "new standard make" build system, since the new CDT Build System, implemented on top of the managedbuilder plug-ins, covers/groups both "Standard" and "Managed" functionality and can be used for the standard make functionality.

>then create a migration API (and UI actions) to take
> existing standard make projects and translate then to the new style.
We actually already have a migration API. The way it works now is that all old-style standard make projects are automatically converted to new-style CDT Build System projects. What we need to do is disable the automatic conversion "on load". Instead, we could have two options for performing the conversion of old standard make projects:
1. create UI actions (as mentioned in your comment)
2. ask the user whether he/she wants to convert the project when it is first loaded/imported into the workspace

Also, from the MBS backward compatibility point of view, to ensure maximum backward compatibility of the managedbuilder plug-ins with existing MBS integrations, we might need to do the following:
1. Since most of the current MBS integrations use the scanner discovery profile mechanism provided with the make.core plug-in, we might still allow using the (reverted) make.core discovery profile mechanism. At the same time, managedbuilder.core will maintain its own scanner discovery profile mechanism implementation (i.e. a separate extension point, API and implementation) that is configuration/tool-chain-aware (as is actually done now).
The CDT Build System will detect which profile mechanism to use (i.e. either the make.core or the managedbuilder.core one) and use the appropriate discovery API.

So the following changes to the current managedbuilder.core plug-in will be needed:
1. add a dependency on the make.core plug-in (to ensure the make.core discovery profile mechanism works for the CDT Build System)
2. rename the current *.make.*.scannerconfig* packages present in managedbuilder.core to e.g. *.build.*.scannerconfig* to avoid namespace conflicts with the same packages/classes of the make.core plug-ins
3. add a mechanism for detecting and using the appropriate discovery API (i.e. either the make.core or the managedbuilder.core one)
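A minimal sketch of the detection logic in item 3 might look like this (the class name and profile ids are hypothetical, not the actual CDT extension-point machinery): the builder asks which mechanism a given discovery profile id was registered with and routes discovery through that API.

```java
import java.util.*;

// Hypothetical sketch of the discovery-API detection in item 3 above
// (not actual CDT code): route a scanner discovery profile id to the
// mechanism that registered it, preferring the configuration-aware
// managedbuilder.core one.
class DiscoveryProfileRegistry {
    private final Set<String> makeCoreProfiles = new HashSet<>();
    private final Set<String> managedBuilderProfiles = new HashSet<>();

    void registerMakeCoreProfile(String id) { makeCoreProfiles.add(id); }
    void registerManagedBuilderProfile(String id) { managedBuilderProfiles.add(id); }

    /** Returns the plug-in whose discovery API should handle the profile. */
    String mechanismFor(String profileId) {
        if (managedBuilderProfiles.contains(profileId))
            return "managedbuilder.core";
        if (makeCoreProfiles.contains(profileId))
            return "make.core";
        throw new IllegalArgumentException("unknown profile: " + profileId);
    }
}
```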

What do you think?

Mikhail
Comment 90 Mikhail Sennikovsky CLA 2007-02-23 18:20:35 EST
(In reply to comment #89)
..with the above approach, we will also need to make the IMakeTarget API and UI work with the new CDT Build System (i.e. the way it works now).
I need to investigate this a bit more. The possible approaches that come to my mind now are:
1. Use the IMakeTarget API/UI of the make.core/ui plug-ins: contribute the CDT Build System builder (org.eclipse.cdt.managedbuilder.internal.core.CommonBuilder) as a Make Target Builder, and let the IMakeTarget management UI of the make.ui plug-in detect the make target builder to be used (i.e. MakeBuilder for old-style "Standard" make projects, CommonBuilder for new-style projects)
2. "clone" the IMakeTarget API and UI into the CDT Build System

Any comments/suggestions are welcome.

Mikhail
Comment 91 Mikhail Sennikovsky CLA 2007-02-26 05:28:01 EST
(In reply to comment #90)
> (In reply to comment #89)
> ..having the above approach we will also need to make the IMakeTarget API and
> UI work for the New CDT Build System (i.e. in the way it works now).
I thought about it more deeply, and it seems we will be able to use the (reverted) IMakeTarget API/UI from the cdt.core with the new-style projects without any changes, since the new CDT Build System builder (org.eclipse.cdt.managedbuilder.internal.core.CommonBuilder) supports the old-style format of builder arguments.
Also, since the CommonBuilder is contributed as a MakeTargetBuilder, the Make Target management UI will be enabled for new-style projects, and the CommonBuilder will be automatically selected as the make target builder for Make Targets being created.

Mikhail
Comment 92 Gerhard Schaber CLA 2007-02-26 09:59:21 EST
Hi!

I am not quite sure whether the backward compatibility interfaces and classes are meant to exist for the next release only, or as a long-term solution.

In any case, we need a scanner discovery that works completely without CDT build, a CDT project, and the CDT make and managed make natures--basically like it was in 3.3M5. We can only provide an IProject.

We do have our own project and build system, but we use everything else from CDT. We integrated the scanner discovery into our build--something similar to what CDT did in MakeBuilder.invokeMake and ScannerConfigNature.initializeDiscoveryOptions. Something similar should be there, and it should be a long-term solution (no deprecated interfaces).

Best regards,

    Gerhard
Comment 93 Mikhail Sennikovsky CLA 2007-02-26 10:17:38 EST
(In reply to comment #92)
Hi Gerhard,

My view on the backward compatibility API concept (for both core and build) is that it will remain in the future for as long as we need it. I also think we do not need to deprecate the old API, since it will remain functional in CDT on equal terms with the new functionality.

Mikhail
Comment 94 David Inglis CLA 2007-02-27 12:13:25 EST
comments inline.

(In reply to comment #89)
> > Leave the make builder (core and ui) as it is in 3.x and deprecate all the
> > corresponding classes which would indicate no future change with them,
> Reverse back the make.core and make.ui plug-ins to the state before the New
> Project Model commit, but disable old New Standard Make project wizards

This sounds good.

> 
> > then
> > implement the new standard make build systems parallel to it with all new
> > projects created using it, 
> There is no need in creating the “new standard make” build system since the new
> CDT Build System implemented based upon the managedbuilder plug-ins
> covers/groups both “Standard” and “Managed” functionality and can be used for
> the standard make functionality.

Ok

> 
> >then create a migration API (and UI actions) to take
> > existing standard make projects and translate then to the new style.
> We actually already have a migration API. The way it works now is that all
> old-style standard make projects are automatically converted to the new style
> CDT Build System projects. What we need to do is disable the automatic
> conversion “on load”. Instead we could have two options for performing old
> standard make projects conversion:
> 1. create UI actins (as mentioned  in your comment)
> 2. ask user whether he/she wants to convert the project once the project is
> first loaded/imported in the workspace

I think being a little less in the user's face is desirable, at least for me. What could be done is: all old projects could be marked with a warning problem marker stating that the project is out of date and should be converted. Then have a wizard-style conversion which allows batch conversion of the selected projects within the workspace; there is currently a wizard in standard make which does this for some older make projects from back in the CDT 2.0 days (I think it was back then, I don't remember exactly). This allows warnings to be presented to the user so that they are aware that this makes the project not compatible with previous versions of the CDT, which is important since projects can be shared in a Team environment with different versions of CDT. This wizard could be invoked via a quick fix action on the problem or from the project menu. Just my 2 cents.
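The flag-then-batch-convert idea could be modeled roughly like this (plain Java with invented Project/WorkspaceMigrator names, not the Eclipse resource/marker API): old-style projects only get a warning on load, and conversion happens later for exactly the projects the user selects in the wizard.

```java
import java.util.*;

// Sketch of the proposal above (illustrative names, not Eclipse API):
// old-style projects get a warning flag instead of being converted on
// load; a batch step then converts only the projects the user selects.
class Project {
    final String name;
    boolean oldStyle;
    boolean hasWarningMarker;
    Project(String name, boolean oldStyle) { this.name = name; this.oldStyle = oldStyle; }
}

class WorkspaceMigrator {
    /** On load: mark every old-style project instead of converting it. */
    void flagOutdated(List<Project> workspace) {
        for (Project p : workspace)
            if (p.oldStyle)
                p.hasWarningMarker = true;
    }

    /** Batch-convert only the projects the user selected in the wizard. */
    void convert(Collection<Project> selected) {
        for (Project p : selected) {
            p.oldStyle = false;
            p.hasWarningMarker = false;
        }
    }
}
```

Projects that are never selected stay untouched, which keeps them shareable with teammates on older CDT versions.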

> 
> Also from the MBS backward compatibility point of view to ensure maximum
> backward compatibility of the managedbuilder plug-ins with existing MBS
> integrations we might need to do the following:
> 1. since most of the current MBS integrations use the scanner discovery profile
> mechanism provided with the make.core plug-in, we might still allow using the
> (reversed back) make.core discovery profile mechanism. At the same time the
> managedbuilder.core will maintain its own scanner discovery profile mechanism
> implementation (i.e. separate extension point, API and implementation) that
> would be configuration/tool-chain concept-aware (that is actually as done now).
> The CDT Build system will detect what profile mechanism to use (i.e. either the
> make.core or the managedbuilder.core one) and use the appropriate discovery
> API.
> 
> So the following changes to the current managedbuilder.core plug-in will be
> needed:
> 1. add dependency to the make.core plug-in (to ensure the make.core discovery
> profile mechanism works for CDT Build System)
> 2. rename the current *.make.*.scannerconfig* packages presented in the
> managedbuilder.core to e.g. *.build.*.scannerconfig* to avoid namespace
> conflicts with the the same packages/classes of the make.core plug-ins.
> 3. add the mechanism for detecting and using the appropriate discovery API
> (i.e. either the make.core or the managedbuilder.core one).
> 
> What do you think?
> 
> Mikhail
> 

Comment 95 Mikhail Sennikovsky CLA 2007-02-28 05:48:55 EST
(In reply to comment #94)
Hi Dave,

Thanks for the reply.

> I think being a little less in the user's face is desirable, at least for me.
> What could be done is: all old projects could be marked with a warning problem
> marker stating that the project is out of date and should be converted. Then
> have a wizard-style conversion which allows batch conversion of the selected
> projects within the workspace; there is currently a wizard in standard make
> which does this for some older make projects from back in the CDT 2.0 days (I
> think it was back then, I don't remember exactly). This allows warnings to be
> presented to the user so that they are aware that this makes the project not
> compatible with previous versions of the CDT, which is important since projects
> can be shared in a Team environment with different versions of CDT. This wizard
> could be invoked via a quick fix action on the problem or from the project
> menu. Just my 2 cents.
I agree. I'm going to start working on the proposed backward compatibility support now. I'll post a message to this bugzilla and to the dev list once I'm done.

Thanks,
Mikhail
Comment 96 Mikhail Sennikovsky CLA 2007-03-06 14:12:01 EST
Hi All,

The Build System backward compatibility implementation is now committed to CVS.
Old-style make projects are currently loaded without conversion (unless they were previously converted).
The "Convert to a C/C++ Make Project" wizard is back, and it can currently be used both for converting old-style make projects to "new style" ones and for converting non-CDT projects to "new style" CDT Make projects.
The "quick fix" wizard invocation is not implemented yet, so the only way to perform the conversion for now is to call the "Convert to a C/C++ Make Project" wizard directly, i.e. "New.." -> "Other.." -> C/C++ -> "Convert to a C/C++ Make Project".

Feedback/comments are welcome.

Mikhail



Comment 97 Doug Schaefer CLA 2007-04-26 22:22:59 EDT
You could probably mark this fixed now, no? We can raise new bugs as we find things that don't work.
Comment 98 Mikhail Sennikovsky CLA 2007-04-27 03:44:48 EDT
Yes. Although there are some small parts of the functionality proposed in the design document that will not be included in 4.0, I think this should be marked as FIXED.

I'll prepare a list of the unimplemented features (which should really be very small) and post it to a separate bugzilla.

Marking as FIXED..

Mikhail