
RE: [cdt-dev] CDT Conference call - Build Model Docs

Just to toss in some of our thinking on the subject.

Builders
I have to agree with Sam on this point that we should stay as flexible as
possible by dividing up the responsibility for maintaining information and
doing something with it. The build model "knows" about build order
dependencies between resources in the configuration, keeps track of build
flags/environment/macros/etc, knows about any special build steps the user
has specified, and knows what toolchain is associated with the project. 

The toolchain understands the mapping between resource elements and what
tools get invoked to build them. The build model asks the toolchain how to
perform the build steps and how to construct the final build artifact. 

The builder is the front end for making all this happen. It relies on the
information in the build model. We could probably expose this through a
default builder and allow other people to specialize builders to do
different things. For example, we could construct a builder that generates
makefiles to create the project or a builder that does everything
internally. In both cases, the builder is determining what to do by querying
the build model about what to build and how.
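
To make that concrete, here is a very rough Java sketch of how the
pieces might relate. All of the type and method names below are
invented for illustration; this is not proposed API:

    // Invented names, just to illustrate the division of responsibility.
    interface IResourceElement {}
    interface ITool { void invoke(IResourceElement resource, String[] flags); }
    interface IToolChain { ITool getToolFor(IResourceElement resource); }

    interface IBuildModel {
        IResourceElement[] getBuildOrder();               // dependency order
        IToolChain getToolChain();                        // project's toolchain
        String[] getBuildFlags(IResourceElement resource);
    }

    abstract class AbstractCBuilder {
        protected final IBuildModel model;

        protected AbstractCBuilder(IBuildModel model) { this.model = model; }

        // The builder only queries the model; it never owns the data.
        public void build() {
            IToolChain chain = model.getToolChain();
            for (IResourceElement resource : model.getBuildOrder()) {
                buildStep(chain.getToolFor(resource), resource,
                          model.getBuildFlags(resource));
            }
        }

        // A makefile generator would emit a rule here; an internal
        // builder would invoke the tool directly.
        protected abstract void buildStep(ITool tool, IResourceElement resource,
                                          String[] flags);
    }

The makefile-generating builder and the all-internal builder would then
just be different implementations of buildStep().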

Existing Projects
Gotta be able to support this large group and the current builder is
sufficient for them. We could add some value by creating a nice makefile
editor and/or supporting an "outline" view for the makefile. Minimally, we
would want to allow a user to manually enter compiler defines and include
path information in the configuration, even if they did not use a
fancy-pants builder. This information would make our parsing more accurate,
thus enabling better searching and refactoring. We might even want to
"parse" the makefile for them by calling 'make -p' and pulling out relevant
information (yuck!). Granted, these values are relatively static, but this
could be a pain-point if a user had to do this on an ongoing basis. Given
that, we have also discussed whether we could migrate an existing project
to a "managed" makefile. If we did allow CDT to take over managing the
makefile, we thought it would be a one-way migration (i.e. it implied the
user wanted CDT to generate the makefile for them).
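
For what the 'make -p' scraping might look like, here is a naive Java
sketch. It assumes GNU make and that the makefile uses conventional
variable names like CFLAGS; neither is guaranteed, which is part of the
"yuck":

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.List;

    class MakeDatabaseScraper {
        // Run 'make -p -n' (print the database without running commands)
        // and pull -D and -I tokens out of flag-style variable definitions.
        static List<String> scrapeDefinesAndIncludes(File projectDir)
                throws Exception {
            Process p = new ProcessBuilder("make", "-p", "-n")
                    .directory(projectDir)
                    .redirectErrorStream(true)
                    .start();
            List<String> result = new ArrayList<>();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    if (!line.matches("\\s*(CFLAGS|CPPFLAGS|INCLUDES)\\s*[+:?]?=.*"))
                        continue;
                    for (String tok : line.substring(line.indexOf('=') + 1)
                                          .trim().split("\\s+")) {
                        if (tok.startsWith("-D") || tok.startsWith("-I"))
                            result.add(tok);   // candidate config entries
                    }
                }
            }
            p.waitFor();
            return result;
        }
    }

The scraped -D/-I values would be offered to the user as defaults for
the configuration, not silently applied.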

Replacing Entire Model
I can see how an ISV might need to extend the model somehow to support their
own special environment, so perhaps we could use the extension point concept
here. The problem is that there will be a dependency between the builders
and the build model, as well as the parser and the build model (include
path info springs immediately to mind). If we really believe that people
would completely reimplement the build model, then we could make the build
model an interface and supply one possible implementation. The interface is
probably a good idea anyway, from a design perspective.
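
If we went that way, the interface could start quite small; the shape
below is hypothetical, but it shows where the parser dependency would
attach:

    import org.eclipse.core.resources.IProject;

    interface ICBuildConfig {}   // stand-in for the real configuration type

    // Hypothetical contract.  Builders and the parser would depend only
    // on this interface, never on our concrete class, so an ISV could
    // supply a complete replacement (via an extension point, say).
    interface ICBuildModel {
        ICBuildConfig[] getConfigurations(IProject project);
        String[] getIncludePaths(IProject project);    // what the parser needs
        String[] getDefinedSymbols(IProject project);  // -D style macros
    }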

Error Parsers
In my initial design work, I thought that error parsers could be attached to
the specific tools in the toolchain. So, a toolchain supplier would not only
return a wrapper to invoke a particular tool, they would also attach an
error parser that would return correctly formatted errors for the task list.
Of course, this gets dicey when you throw a generated makefile builder into
the mix, since make returns errors that are far less detailed. We will
have to think a bit more on the subject.
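
As a strawman, the attachment could be as simple as this (names
invented):

    // Invented sketch: the toolchain supplier pairs each tool with a
    // parser that knows that tool's error message format.
    interface IToolErrorParser {
        // Returns a structured error for the task list, or null if the
        // output line is not an error.
        ToolError parse(String outputLine);
    }

    record ToolError(String file, int line, String severity, String message) {}

    interface ITool {
        String getCommand();
        IToolErrorParser getErrorParser();
    }

A generated makefile builder would still see each compiler's own output,
so the per-tool parsers might survive there too; the real trouble is the
errors that come from make itself.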

Sean Evoy
Senior Software Engineer
Rational Software


-----Original Message-----
From: Robb, Sam [mailto:sam.robb@xxxxxxxxxxx]
Sent: Wednesday, February 12, 2003 12:58 PM
To: cdt-dev@xxxxxxxxxxx
Subject: RE: [cdt-dev] CDT Conference call - Build Model Docs 


> After a bird's-eye view, I got some questions to help me 
> get a better understanding.
> Feel free to answer them in different emails.  Thanks again 
> for the API templates.

I'll try to answer all here.  If anything seems unclear, well,
blame it on the cold medicine :-/
 
> == Scenario 1: Existing proprietary builders.
> In a previous email from Sky M. @ rational, he outlined some 
> of the things the build model
> should accommodate, and one of them was cooperation with others.
> For example, for the QNX IDE, qnx C/C++ project,  QNX 
> provides a builder
> for the project that has very good knowledge of the way 
> things should be compiled
> (Drivers, resource managers etc ..) and even on where things 
> are in the filesystem.
> 
> Sky's vision below (he could correct me if I missed the point 8-)

<snip>
 
> It was not clear to me how those other builders can plug in 
> within the CDT.
> Unless the answer is: if you use the IDE, you've got to use the 
> builder of the CDT.

It seems like there are two ways to do this:

- Provide an abstract builder that is used to implement the default
  CDT builder.  ISVs could use the capabilities of the abstract
  builder to implement a custom builder as desired.

- Provide a default builder that is driven in part or in whole by the
  toolchain.  The default builder would order resource builds to
  fulfill dependencies reported by the toolchain.

A combination of the two would probably work best - develop a
flexible builder that relies on the toolchain to encapsulate
knowledge about how to build things properly, but make it a
straightforward task to derive a completely different builder.
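
For the second option, the heart of the default builder is really just
a topological sort over the dependencies the toolchain reports.  A
minimal sketch, with resources as plain strings:

    import java.util.*;

    class DependencyOrderer {
        // Order resources so each one is built after everything it
        // depends on.  'deps' maps a resource to its dependencies.
        static List<String> buildOrder(Map<String, List<String>> deps) {
            List<String> order = new ArrayList<>();
            Set<String> done = new HashSet<>();
            Set<String> inProgress = new HashSet<>();
            for (String r : deps.keySet())
                visit(r, deps, done, inProgress, order);
            return order;
        }

        private static void visit(String r, Map<String, List<String>> deps,
                                  Set<String> done, Set<String> inProgress,
                                  List<String> order) {
            if (done.contains(r)) return;
            if (!inProgress.add(r))
                throw new IllegalStateException("dependency cycle at " + r);
            for (String d : deps.getOrDefault(r, List.of()))
                visit(d, deps, done, inProgress, order);
            inProgress.remove(r);
            done.add(r);
            order.add(r);   // all of r's dependencies were added first
        }
    }

Everything toolchain-specific stays behind whatever interface supplies
'deps'.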

> == scenario 2: Existing projects
> Lots of talk about usability, where new users hit a brick wall
> i.e. they have to put up with arcane Makefiles, etc.  All true.
> However, what about the flip side of the coin?  For example, we 
> have customers with
> very complex build procedures, some that simply defy sanity (for
> some hysterical raisins).
> Not surprisingly, those customers were quite happy with the 
> default CDT builder 8-)
>
> It would be nice for them to be able to use the IDE even if they 
> do not have
> the bells and whistles of the super-duper IDE builder.
> It will probably be an incentive to migrate in the long term.
> 
> So we can say to clients: "Oh! You have a monstrous project, 
> a build developed by
> summer students, and you are in a time crunch and cannot do 
> the conversion? OK,
> we can give you an adapter for the IDE that will spawn your 
> shell scripts; it's minimal
> and you may lose some features, but you can get to work right away."

I don't see any reason at all to drop the current builder - if
nothing else, there are probably hundreds of thousands of projects
(open source and corporate) that have hand-made Makefiles and
build scripts.  I've actually got a handful of these that I have
used the current version of CDT with.

I don't know what the best method of handling this is, though.
In TS2, we use a project property to indicate if the Makefile
is to be maintained automatically or simply used as-is.  Perhaps
a similar flag on projects is called for (use internal builder
vs. use external build command).
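
Something like this, perhaps, using a persistent project property (the
key name here is made up):

    import org.eclipse.core.resources.IProject;
    import org.eclipse.core.runtime.CoreException;
    import org.eclipse.core.runtime.QualifiedName;

    class BuilderMode {
        // Made-up key: "managed" means CDT maintains the Makefile;
        // anything else means use it as-is.
        static final QualifiedName MAKEFILE_MODE =
                new QualifiedName("org.eclipse.cdt.core", "makefileMode");

        static boolean isManaged(IProject project) throws CoreException {
            return "managed".equals(
                    project.getPersistentProperty(MAKEFILE_MODE));
        }
    }

The builder would regenerate the Makefile only when isManaged() returns
true, and invoke make either way.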
 
> == scenario 3: Information for other modules.
> For example, the debugger needs to know the src paths for the 
> sourceLocator to find
> files when stepping.  The parser may need some macro 
> definitions to do parsing
> correctly.  The indexer, search, and refactoring may also need the 
> include paths; in C++
> the entire definition of a class can be inline in a header, and 
> the class definition
> is needed to do correct code completion/assist.  Some work was 
> started in the 
> ICBuilder but we did not have a chance to refine.

All these pieces of information (source file locations, macros,
include paths, etc.) should be exposed by the build model in a
generic way.  One way to do this is to add convenience methods
to the ICBuildConfig interface to retrieve standard information
(getSourcePaths(), getIncludePaths(), getCppMacros(), etc.)
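
Roughly like this (the return types are guesses):

    import java.util.Map;

    interface ICBuildConfig {
        String[] getSourcePaths();    // for the debugger's source locator
        String[] getIncludePaths();   // for the parser/indexer/search
        Map<String, String> getCppMacros();   // -D style definitions
    }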

> == scenario 4: Extension points.
> Can it(should it) be possible to overload the entire Build Model?

I don't know.  My inclination is to say "no", though others may
disagree.

> Where are the other extension points?  For example, for the error parsers?

Not present yet.  Still a work in progress.
 
> == scenario 5: Integration with the current CDT
> In the CDT core there is a framework for extension points; it 
> is saved in the
> project, in a ".cdtproject" file.  The ".cdtproject" file is 
> somewhat equivalent
> to the ".project", but for the CDT, so it is possible to 
> export/share information.
> For example, when checking out a C/C++ Project, the binary 
> parser type, build model type
> etc .. can be shared.  Here is an example of what is generated now:
> 
> <?xml version="1.0" encoding="UTF-8"?>
> <cdtproject id="org.eclipse.cdt.core.make">
>     <extension id="org.eclipse.cdt.core.makeBuilder" 
> point="org.eclipse.cdt.core.CBuildModel">
>         <attribute key="command" value="make"/>
>     </extension>
> </cdtproject>
> 
> Sharing properties/settings (via CVS or other means) seems to 
> be important for some of our
> customers ... they even ask to share breakpoints ... sigh ..

The current implementation is set up to save build config settings
to one or more files (one per build configuration).  The files are
plain text, modeled after launcher config files, and exist explicitly
because of the need to share build configs.

Having one file (.cdtconfig) or merging the configuration information
into the .cdtproject file is possible.  Is there a way to inject (and
later extract) arbitrary information into the .cdtproject file?

Haven't implemented this yet, but the concept of "currently active
build configuration" for a project probably should be a local,
persistent project property.
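
That could be as small as one persistent property per project; a sketch,
with a made-up key name:

    import org.eclipse.core.resources.IProject;
    import org.eclipse.core.runtime.CoreException;
    import org.eclipse.core.runtime.QualifiedName;

    class ActiveConfig {
        static final QualifiedName ACTIVE_CONFIG =
                new QualifiedName("org.eclipse.cdt.core", "activeBuildConfig");

        static void setActive(IProject project, String name)
                throws CoreException {
            project.setPersistentProperty(ACTIVE_CONFIG, name);
        }

        static String getActive(IProject project) throws CoreException {
            return project.getPersistentProperty(ACTIVE_CONFIG); // null if unset
        }
    }

Persistent properties live in the workspace metadata, so they stay local
to the user and are not shared through CVS - which is what you would
want for "currently active".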

-Samrobb
_______________________________________________
cdt-dev mailing list
cdt-dev@xxxxxxxxxxx
http://dev.eclipse.org/mailman/listinfo/cdt-dev

