RE: [cdt-dev] Build model proposal

> Sam> 1) XP that declares a known file type mapping
> Sam>    (ex, "*.c" --> "C Source File")
> 
> In the distant past I worked on an internal IDE (the build part was
> called "vmake") that worked along these lines.  One important point
> here is that the mappings must be host-independent -- for instance,
> they can't be case-sensitive.  If the mappings are host-dependent,
> then the build description can't be shared on a multi-host project.

I've run into this issue, particularly when cross-compiling.  How
severe is the issue?  If it's just an annoyance ("*.c" and "*.C"
are both C source files), then being case-insensitive is fine.  If
there are systems where "*.c" means C source and "*.C" means
something entirely different, that's a different can of worms.
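
For what it's worth, keying the mappings on lower-cased extensions
would handle the simple case.  A minimal sketch, assuming a
hypothetical FileTypeMap class (not actual CDT API):

    import java.util.HashMap;
    import java.util.Map;

    // Keyed on lower-cased extensions, so "*.c" and "*.C" resolve
    // to the same file type on every host.
    class FileTypeMap {
        private final Map<String, String> typesByExtension = new HashMap<>();

        void addMapping(String extension, String typeId) {
            typesByExtension.put(extension.toLowerCase(), typeId);
        }

        String lookup(String fileName) {
            int dot = fileName.lastIndexOf('.');
            String ext = (dot < 0) ? "" : fileName.substring(dot + 1);
            return typesByExtension.get(ext.toLowerCase());
        }
    }

With that, addMapping("c", "C Source File") covers both "foo.c" and
"FOO.C" -- which is exactly the behavior that breaks down on systems
in the second category.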
 
> Sam> Having the FMM return a unique identifier instead of an
> Sam> executable name made it possible to introduce and support the
> Sam> concept of multiple toolchains.  A toolchain consists of a group
> Sam> of related tools (ex, GNU tools for x86, GNU tools for PowerPC,
> Sam> etc.)
> 
> I'm curious to know how this will interact with defining the notion of
> a target -- another to-do item that came out of last week's CDT
> meeting.
> 
> For instance, it would be weird for the user to choose a toolchain and
> then elsewhere have to choose the debugger for that toolchain.  It
> would be more convenient to make a single target selection.
>
> Also, this assumes that a project can only have one target.

One of our use cases is "board evaluation" - running the same code
against two or more boards, potentially with different processors.
In this case, you'd want multiple targets for the project.

I think the relationship is more along the lines of:

- A project has one or more build configurations.
- A build configuration is associated with a toolchain.
- A build configuration is associated with a target.
 
Or even better:

- A project has one or more build configurations.
- A build configuration is associated with a target.
- A target is associated with a toolchain.

The latter seems like the more reasonable depiction of the
relationships.
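
To make that concrete, a minimal sketch of the second model (none of
these are real CDT interfaces):

    import java.util.List;

    interface IToolChain {
        String getId();
    }

    interface ITarget {
        // The target determines the toolchain... and could also name
        // the debugger, so a single target selection covers both.
        IToolChain getToolChain();
    }

    interface IBuildConfiguration {
        // Each configuration builds for exactly one target.
        ITarget getTarget();
    }

    interface IBuildProject {
        // A project has one or more build configurations.
        List<IBuildConfiguration> getConfigurations();
    }

The board evaluation case above then falls out naturally: one project
with two configurations pointing at different targets.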

> There's a tension between building something achievable (meaning:
> relatively simple) and something that can scale.  I'd like us to at
> the very least be explicit about our limitations, and in some cases
> design the interfaces with the idea that the initial implementation
> will be limited and then later expand.

Agreed.

> Sam> So, given a source file, the steps for determining which tool to
> Sam> use while compiling it are roughly:
> Sam> 1) Obtain a reference to the file resource.
> Sam> 2) Pass the file reference off to the file type manager.
> Sam> 3) The file type manager returns the identifier for the tool that
> Sam>    should be used to compile the file.
> 
> Since you're only passing in the type of the input file, I think this
> presumes that only a single sort of action can be performed on files
> of a given type.
>
> But in general this isn't so.  For instance, it isn't uncommon to both
> compile a C++ file and also run doxygen on it.

Good point - Martin Lescuyer pointed out something similar in a previous
message.  At least for the moment, the only solution I can see to this
sort of problem is to provide "internal builders" for files, the same
way that Eclipse allows "external builders" for projects.  The problem
is that while I think this is the most flexible approach, it's also
potentially confusing.
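
One possible shape for this, assuming the file type manager hands back
a list of tool identifiers rather than a single one (a sketch, not a
real API proposal):

    import java.util.List;

    interface IFileTypeManager {
        // Every tool that can act on the file: the compiler first,
        // then any "internal builders" (doxygen, lint, ...).
        List<String> getToolIds(String fileName);
    }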

> Sam> Projects are a special case, in that there are certain options
> Sam> (linker settings, for one) that apply only to a project.
> 
> The assumption here is that a project has a single linked object as
> its result.  However, many projects have multiple outputs.  In fact,
> in the GNU world this is probably the most common sort of project.
> 
> Also, sometimes people want .o files as the final build product.  I
> don't recall why offhand (kernel modules maybe?), but the automake
> mailing list archives have a bunch of examples.

I was using "link" as a shorthand for "whatever it is that you actually
do to produce whatever it is that this project is supposed to be
building."  (You can see why I wanted to use another term...)  Depending
on what the project is doing, "link" could mean:

1) Combining files into an executable
2) Combining files into a shared library
3) Combining files into a kernel module
4) Packaging files as a static library
5) Something else entirely 
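
Put another way, "link" is really a build-goal kind attached to the
project.  A hypothetical sketch:

    // The "link" stage generalized to a build goal.
    enum BuildGoalKind {
        EXECUTABLE,       // combine files into an executable
        SHARED_LIBRARY,   // combine files into a shared library
        KERNEL_MODULE,    // combine files into a kernel module
        STATIC_LIBRARY,   // package files as a static library
        CUSTOM            // something else entirely
    }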

As for "single project, multiple outputs"... I know that's common in GNU
projects, but I wonder if the differences between how a GNU project is
set up and how a new Eclipse/CDT project would be set up are large
enough that it's not necessarily worth worrying about.

> Sam> Shared libraries, static libraries, and plain executables all
> Sam> have very different link stage requirements.
> 
> Note that building shared libraries can also affect the ordinary
> compilation phase.  On many platforms you need to generate PIC code.
> It would be nice if the builder handled this automatically -- not
> requiring the user to enter an explicit `-fPIC'.

This gets into the idea that the type of project influences how you
build it.  While you'd like to confine this to the "link" stage,
shared libs are one place where this idea breaks down... and I'm
sure there are others.
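
One way to keep it manageable is to let the build goal contribute
flags to the compile stage, so the user never has to type -fPIC by
hand.  A sketch, reusing the hypothetical BuildGoalKind from above:

    import java.util.Collections;
    import java.util.List;

    class CompileFlagContributor {
        // Extra compile-stage flags implied by the project's goal.
        static List<String> flagsFor(BuildGoalKind goal) {
            if (goal == BuildGoalKind.SHARED_LIBRARY) {
                // PIC is required for shared libs on many platforms.
                return List.of("-fPIC");
            }
            return Collections.emptyList();
        }
    }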
 
> One thing I don't understand from this model is: where does the build
> actually occur?
>
> Is there code in CDT core that uses the toolchain and FMM interfaces
> to get information, but then does the build internally?  Or is the
> builder itself hidden behind an interface, which is then invoked by
> the core?

Ideally, we'll have a system that operates much like the JDT model -
building a file is handled internally within the IDE, but there's a
way to generate a build script that produces the same output.
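
In other words, something like this hypothetical builder interface
(reusing IBuildConfiguration from the earlier sketch), where the
script is just the internal build made visible:

    interface ICBuilder {
        // Build inside the IDE, the way JDT compiles Java.
        void build(IBuildConfiguration config) throws Exception;

        // Emit a standalone script (e.g. a makefile) that produces
        // the same output outside the IDE.
        String generateBuildScript(IBuildConfiguration config);
    }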

> If the CDT core has the build logic inside it, then we also need a way
> to get information for dependency tracking.  This could be added to
> the tool interface.  After invoking the tool, we would query it to see
> if there are any discovered dependencies.  The builder would keep this
> information internally and use it to drive future build decisions
> (there are various reasons to do this as a side effect of the build).
> In vmake we automatically discovered dependencies not only for
> compilations but also links.

Depending on how well the new parser work goes, we may be able to rely
on pulling information from the CDOM.  I would much rather do this than
rely on "gcc -MM" or the equivilient... still, perhaps the best course
of action would be to let the tool return the information; whether it
comes from the CDOM or some other source would then be irrelevant.
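
So the tool interface might grow something along these lines (again,
hypothetical):

    import java.util.List;

    interface ITool {
        // Run the tool; returns its exit status.
        int invoke(List<String> arguments) throws Exception;

        // Dependencies discovered during the last invocation, whether
        // they came from the CDOM, "gcc -MM", or some other source.
        List<String> getDiscoveredDependencies();
    }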

> Many people these days use distcc and ccache to speed up their builds.
> I suppose this could be done via a new toolchain that somehow wrapped
> another toolchain?  Or perhaps some more direct integration?  A
> similar issue comes up when you want to run something like purify,
> where you prefix the link step with the `purify' command.  Food for
> thought.
 
I think that some sort of direct integration would be nice; for the
initial design, perhaps some concept of toolchain adaptors would
be enough.
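
By "toolchain adaptor" I'm picturing something like a decorator that
prefixes another tool's command line -- a sketch, with all names
hypothetical:

    import java.util.ArrayList;
    import java.util.List;

    class PrefixingToolAdaptor {
        private final String prefix;  // e.g. "ccache" or "purify"

        PrefixingToolAdaptor(String prefix) {
            this.prefix = prefix;
        }

        // ["gcc", "-c", "foo.c"] becomes ["ccache", "gcc", "-c", "foo.c"].
        List<String> adapt(List<String> commandLine) {
            List<String> wrapped = new ArrayList<>(commandLine);
            wrapped.add(0, prefix);
            return wrapped;
        }
    }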

-Samrobb

