Re: [cdt-dev] Build model proposal

>>>>> "Sam" == Robb, Sam <sam.robb@xxxxxxxxxxx> writes:

Sam> Following is a proposal for dealing with the build model in the
Sam> next major CDT release.

Thanks.  I have some questions and comments about this.

Sam> 1) XP that declares a known file type mapping
Sam>    (ex, "*.c" --> "C Source File")

In the distant past I worked on an internal IDE (the build part was
called "vmake") that worked along these lines.  One important point
here is that the mappings must be host-independent -- for instance,
they can't be case-sensitive.  If the mappings are host-dependent,
then the build description can't be shared on a multi-host project.
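
To make this concrete, here is a rough sketch of the kind of lookup I
mean -- every name below is invented -- where the extension is
normalized before matching so the mapping behaves the same on any
host:

    import java.util.HashMap;
    import java.util.Locale;
    import java.util.Map;

    // Hypothetical file type manager; all names are invented.
    public class FileTypeMappings {
        // Extensions are stored lower-cased, so "FOO.C" and "foo.c"
        // resolve to the same type regardless of the host file system.
        private final Map<String, String> extensionToTypeId =
            new HashMap<String, String>();

        public void addMapping(String extension, String typeId) {
            extensionToTypeId.put(extension.toLowerCase(Locale.US), typeId);
        }

        public String getTypeIdFor(String fileName) {
            int dot = fileName.lastIndexOf('.');
            String ext = (dot < 0) ? "" : fileName.substring(dot + 1);
            return extensionToTypeId.get(ext.toLowerCase(Locale.US));
        }
    }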

Sam> Having the FMM return a unique identifier instead of an
Sam> executable name made it possible to introduce and support the
Sam> concept of multiple toolchains.  A toolchain consists of a group
Sam> of related tools (ex, GNU tools for x86, GNU tools for PowerPC,
Sam> etc.)

I'm curious to know how this will interact with defining the notion of
a target -- another to-do item that came out of last week's CDT
meeting.

For instance, it would be weird for the user to choose a toolchain and
then elsewhere have to choose the debugger for that toolchain.  It
would be more convenient to make a single target selection.

Also, this assumes that a project can only have one target.
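
Purely as an illustration (all names invented), a target could bundle
those choices so the user makes them in one place, and a project could
own more than one of them:

    // Invented sketch of a "target" that groups the pieces a user
    // would otherwise have to pick separately.
    public interface IBuildTarget {
        String getToolChainId();  // e.g. a GNU-for-x86 or GNU-for-PPC chain
        String getDebuggerId();   // the debugger that goes with that chain
    }

    // A project would then hold a list of targets, not exactly one.
    public interface ITargetedProject {
        IBuildTarget[] getTargets();
    }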

There's a tension between building something achievable (meaning:
relatively simple) and something that can scale.  I'd like us to at
the very least be explicit about our limitations, and in some cases
design the interfaces with the idea that the initial implementation
will be limited and can be expanded later.

Sam> So, given a source file, the steps for determining which tool to
Sam> use while compiling it are roughly:
Sam> 1) Obtain a reference to the file resource.
Sam> 2) Pass the file reference off to the file type manager.
Sam> 3) The file type manager returns the identifier for the tool that
Sam>    should be used to compile the file.

Since you're only passing in the type of the input file, I think this
presumes that only a single sort of action can be performed on files
of a given type.

But in general this isn't so.  For instance, it isn't uncommon to both
compile a C++ file and also run doxygen on it.
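
One way around that -- again, names invented -- is for the manager to
hand back every tool registered against the type and let the builder
decide which actions to run:

    import org.eclipse.core.resources.IFile;

    // Invented sketch: the answer is a set of tool ids rather than a
    // single compiler, so a C++ file can map to both the compiler and
    // a documentation tool such as doxygen.
    public interface IFileTypeManager {
        String[] getToolIdsFor(IFile file);
    }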

Sam> Projects are a special case, in that there are certain options
Sam> (linker settings, for one) that apply only to a project.

The assumption here is that a project has a single linked object as
its result.  However, many projects have multiple outputs.  In fact,
in the GNU world this is probably the most common sort of project.

Also, sometimes people want .o files as the final build product.  I
don't recall why offhand (kernel modules maybe?), but the automake
mailing list archives have a bunch of examples.
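
If the outputs were modelled explicitly (sketch below, names
invented), a project could declare several of them, including a bare
object file as the final product:

    import org.eclipse.core.resources.IFile;

    // Invented sketch: one project, many build artifacts.
    public interface IBuildArtifact {
        int EXECUTABLE = 0;
        int SHARED_LIBRARY = 1;
        int STATIC_LIBRARY = 2;
        int OBJECT_FILE = 3;      // a bare .o as the final product

        String getName();         // e.g. "libfoo.so", "foo", "bar.o"
        int getKind();
        IFile[] getSources();     // the sources that feed this artifact
    }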

Sam> Shared libraries, static libraries, and plain executables all
Sam> have very different link stage requirements.

Note that building shared libraries can also affect the ordinary
compilation phase.  On many platforms you need to generate PIC code.
It would be nice if the builder handled this automatically -- not
requiring the user to enter an explicit `-fPIC'.
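
Something along these lines (invented names; the flag itself assumes a
GNU-style compiler) is what I would hope the tool could do on its own:

    import java.util.ArrayList;
    import java.util.List;

    // Invented sketch: the compiler tool adds the position-independent
    // code flag itself when the artifact being built is a shared
    // library, instead of the user typing it.
    public class GnuCompileFlags {
        public List<String> getExtraFlags(boolean buildingSharedLibrary) {
            List<String> flags = new ArrayList<String>();
            if (buildingSharedLibrary) {
                flags.add("-fPIC");
            }
            return flags;
        }
    }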


One thing I don't understand from this model is: where does the build
actually occur?

Is there code in CDT core that uses the toolchain and FMM interfaces
to get information, but then does the build internally?  Or is the
builder itself hidden behind an interface, which is then invoked by
the core?

If the CDT core has the build logic inside it, then we also need a way
to get information for dependency tracking.  This could be added to
the tool interface.  After invoking the tool, we would query it to see
if there are any discovered dependencies.  The builder would keep this
information internally and use it to drive future build decisions
(there are various reasons to do this as a side effect of the build).
In vmake we automatically discovered dependencies not only for
compilations but also for links.
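
As a sketch (invented names again), the tool interface could look
something like this, with the core recording whatever the tool
reports:

    import org.eclipse.core.resources.IFile;
    import org.eclipse.core.runtime.CoreException;

    // Invented sketch: the builder runs a tool through this interface
    // and afterwards asks what dependencies turned up as a side
    // effect of the run.
    public interface IToolInvocation {
        // Run the tool on one input; returns the tool's exit status.
        int run(IFile input) throws CoreException;

        // Headers discovered during a compile, or libraries noticed
        // during a link.
        IFile[] getDiscoveredDependencies();
    }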


Many people these days use distcc and ccache to speed up their builds.
I suppose this could be done via a new toolchain that somehow wrapped
another toolchain?  Or perhaps some more direct integration?  A
similar issue comes up when you want to run something like purify,
where you prefix the link step with the `purify' command.  Food for
thought.
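
A lighter-weight alternative to wrapping a whole toolchain might be a
per-tool command prefix (sketch, invented names):

    // Invented sketch: a command prefix covers ccache/distcc for the
    // compile steps and purify for the link step without defining a
    // second toolchain.
    public interface ICommandLauncher {
        // e.g. {"ccache"} or {"distcc"} for a compile, {"purify"} for
        // the link; an empty array means run the tool directly.
        String[] getCommandPrefix(String toolId);
    }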


When generating RPMs we want to know not only how to build things, but
where they should be installed.  Traditionally, in the free software
world, installation is part of the build process.  Will it be in
Eclipse?  Or will it be an orthogonal issue?

Tom

