
Re: [cdt-dev] Build model proposal

Tom> When generating RPMs we want to know not only how to build things,
Tom> but where they should be installed.  Traditionally, in the free
Tom> software world, installation is part of the build process.  Will it
Tom> be in Eclipse?  Or will it be an orthogonal issue?

I have been thinking about this quite a bit while looking at the rpm
plug-in. As the list of platforms that Eclipse currently runs on is so
diverse, and likely to expand, I'm not sure I can see it as anything but
an orthogonal issue. While installation on a Linux system might be
similar to, say, a QNX platform, it would be quite different from
installation on a Windows system, which may require registry entries,
etc. We know this, but how do you approach it? How do we ascertain the
resultant build output (executables, libraries) and where it should be
installed, even on a single system, i.e. Linux? /usr/sbin or /usr/bin or
/bin and so on.

I suppose we could build the tool to the lowest common denominator, but
on Windows there are already several professional programs that
concentrate solely on these installation issues. InstallShield is a
hugely complicated tool, and for good reason; it has to handle a number
of issues just on the Windows platform.

I suspect we are going to have to have some form of deployment mechanism
beyond the notion of dumping the output from a compiler in a particular
directory. In that respect, it opens up a whole other avenue of
discussion. Certainly something to ponder over the holiday period ;)

phil 

On Wed, 2002-12-18 at 14:09, Tom Tromey wrote:
> >>>>> "Sam" == Robb, Sam <sam.robb@xxxxxxxxxxx> writes:
> 
> Sam> Following is a proposal for dealing with the build model in the
> Sam> next major CDT release.
> 
> Thanks.  I have some questions and comments about this.
> 
> Sam> 1) XP that declares a known file type mapping
> Sam>    (ex, "*.c" --> "C Source File")
> 
> In the distant past I worked on an internal IDE (the build part was
> called "vmake") that worked along these lines.  One important point
> here is that the mappings must be host-independent -- for instance,
> they can't be case-sensitive.  If the mappings are host-dependent,
> then the build description can't be shared on a multi-host project.
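
To make that concrete, here is a rough sketch in Java of such a
host-independent mapping (the class and method names below are invented
for illustration, not actual CDT API):

    import java.util.HashMap;
    import java.util.Map;

    // Host-independent file type mapping: extensions are normalized to
    // lower case, so the same project description resolves "FOO.C" and
    // "foo.c" to the same type on every host.
    public class FileTypeMap {
        private final Map<String, String> typesByExtension = new HashMap<>();

        public void addMapping(String extension, String typeName) {
            typesByExtension.put(extension.toLowerCase(), typeName);
        }

        public String typeOf(String fileName) {
            int dot = fileName.lastIndexOf('.');
            String ext = (dot < 0) ? "" : fileName.substring(dot + 1);
            return typesByExtension.get(ext.toLowerCase());
        }

        public static void main(String[] args) {
            FileTypeMap map = new FileTypeMap();
            map.addMapping("c", "C Source File");
            map.addMapping("cpp", "C++ Source File");
            System.out.println(map.typeOf("main.c"));    // C Source File
            System.out.println(map.typeOf("UTIL.CPP"));  // C++ Source File
        }
    }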
> 
> Sam> Having the FMM return a unique identifier instead of an
> Sam> executable name made it possible to introduce and support the
> Sam> concept of multiple toolchains.  A toolchain consists of a group
> Sam> of related tools (ex, GNU tools for x86, GNU tools for PowerPC,
> Sam> etc.)
> 
> I'm curious to know how this will interact with defining the notion of
> a target -- another to-do item that came out of last week's CDT
> meeting.
> 
> For instance, it would be weird for the user to choose a toolchain and
> then elsewhere have to choose the debugger for that toolchain.  It
> would be more convenient to make a single target selection.
> 
> Also, this assumes that a project can only have one target.
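
For what it's worth, a target could be a single selection that carries
both pieces of information, so the user never picks a toolchain and a
debugger separately. A purely illustrative sketch (none of these
interface names exist):

    // Illustrative only: a "target" bundles the toolchain choice and
    // the matching debugger, so one selection answers both questions.
    public interface IBuildTarget {
        String getName();          // e.g. "GNU tools for PowerPC"
        String getToolChainId();   // toolchain used to build
        String getDebuggerId();    // debugger that goes with it
    }

A project could then hold a list of such targets rather than exactly
one, which leaves room to grow past the single-target limitation later.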
> 
> There's a tension between building something achievable (meaning:
> relatively simple) and something that can scale.  I'd like us to at
> the very least be explicit about our limitations, and in some cases
> design the interfaces with the idea that the initial implementation
> will be limited and then later expand.
> 
> Sam> So, given a source file, the steps for determining which tool to
> Sam> use while compiling it are roughly:
> Sam> 1) Obtain a reference to the file resource.
> Sam> 2) Pass the file reference off to the file type manager.
> Sam> 3) The file type manager returns the identifier for the tool that
> Sam>    should be used to compile the file.
> 
> Since you're only passing in the type of the input file, I think this
> presumes that only a single sort of action can be performed on files
> of a given type.
> 
> But in general this isn't so.  For instance, it isn't uncommon to both
> compile a C++ file and also run doxygen on it.
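
One way to keep that open would be for the file type manager to return
every tool registered against a type rather than a single one. A sketch,
with invented names and ids:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch: a file type may map to any number of tool ids, so
    // "C++ Source File" can drive both the compiler and doxygen.
    public class FileTypeManager {
        private final Map<String, List<String>> toolsByType = new HashMap<>();

        public void registerTool(String fileType, String toolId) {
            toolsByType.computeIfAbsent(fileType, k -> new ArrayList<>())
                       .add(toolId);
        }

        public List<String> toolsFor(String fileType) {
            return toolsByType.getOrDefault(fileType, new ArrayList<>());
        }

        public static void main(String[] args) {
            FileTypeManager ftm = new FileTypeManager();
            ftm.registerTool("C++ Source File", "example.toolchain.gpp");
            ftm.registerTool("C++ Source File", "example.doxygen");
            System.out.println(ftm.toolsFor("C++ Source File"));
        }
    }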
> 
> Sam> Projects are a special case, in that there are certain options
> Sam> (linker settings, for one) that apply only to a project.
> 
> The assumption here is that a project has a single linked object as
> its result.  However, many projects have multiple outputs.  In fact,
> in the GNU world this is probably the most common sort of project.
> 
> Also, sometimes people want .o files as the final build product.  I
> don't recall why offhand (kernel modules maybe?), but the automake
> mailing list archives have a bunch of examples.
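
That suggests a project owning a list of build artifacts, each with its
own kind and settings, rather than one implied executable. Roughly, with
made-up names:

    import java.util.List;

    // Rough sketch: several artifacts per project, so "two shared
    // libraries plus a test executable", or even a bare .o as the
    // final product, both fit the model.
    public class BuildArtifact {
        public enum Kind { EXECUTABLE, SHARED_LIBRARY, STATIC_LIBRARY, OBJECT_FILE }

        private final String name;
        private final Kind kind;
        private final List<String> sources;

        public BuildArtifact(String name, Kind kind, List<String> sources) {
            this.name = name;
            this.kind = kind;
            this.sources = sources;
        }

        public String getName()          { return name; }
        public Kind getKind()            { return kind; }
        public List<String> getSources() { return sources; }
    }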
> 
> Sam> Shared libraries, static libraries, and plain executables all
> Sam> have very different link stage requirements.
> 
> Note that building shared libraries can also affect the ordinary
> compilation phase.  On many platforms you need to generate PIC code.
> It would be nice if the builder handled this automatically -- not
> requiring the user to enter an explicit `-fPIC'.
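
In other words, if the builder knows what kind of artifact is being
produced, it can fold the flag in itself. A toy illustration (the
platform test here is deliberately simplistic):

    import java.util.ArrayList;
    import java.util.List;

    // Toy illustration: the builder, not the user, decides when PIC is
    // needed, based on the artifact kind and the platform.
    public class PicPolicy {
        public static List<String> extraCompileFlags(boolean forSharedLibrary,
                                                     String platform) {
            List<String> flags = new ArrayList<>();
            boolean elfLike = platform.equals("linux")
                           || platform.equals("solaris");
            if (forSharedLibrary && elfLike) {
                flags.add("-fPIC");
            }
            return flags;
        }
    }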
> 
> 
> One thing I don't understand from this model is: where does the build
> actually occur?
> 
> Is there code in CDT core that uses the toolchain and FMM interfaces
> to get information, but then does the build internally?  Or is the
> builder itself hidden behind an interface, which is then invoked by
> the core?
> 
> If the CDT core has the build logic inside it, then we also need a way
> to get information for dependency tracking.  This could be added to
> the tool interface.  After invoking the tool, we would query it to see
> if there are any discovered dependencies.  The builder would keep this
> information internally and use it to drive future build decisions
> (there are various reasons to do this as a side effect of the build).
> In vmake we automatically discovered dependencies not only for
> compilations but also links.
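
If the tool interface grew a hook along these lines (sketch only,
invented names), the core could collect header and library dependencies
as a by-product of each invocation and use them on the next build:

    import java.util.List;

    // Sketch: after the core invokes a tool on an input, it asks which
    // other files the tool read (headers, linked libraries, ...) and
    // records them to drive future incremental builds.
    public interface IBuildTool {
        String getId();

        // Run the tool on one input; returns true on success.
        boolean invoke(String inputPath, List<String> options);

        // Dependencies discovered during the last invocation.
        List<String> getDiscoveredDependencies();
    }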
> 
> 
> Many people these days use distcc and ccache to speed up their builds.
> I suppose this could be done via a new toolchain that somehow wrapped
> another toolchain?  Or perhaps some more direct integration?  A
> similar issue comes up when you want to run something like purify,
> where you prefix the link step with the `purify' command.  Food for
> thought.
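
For the simple cases a wrapper might be enough: it delegates to the
real toolchain but prefixes each command line with ccache, distcc,
purify and so on. Sketch only, nothing here is real CDT API:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Sketch: prefix an existing command line without the underlying
    // toolchain knowing anything about the wrapper.
    public class CommandPrefixWrapper {
        private final String prefix;

        public CommandPrefixWrapper(String prefix) {
            this.prefix = prefix;
        }

        public List<String> wrap(List<String> commandLine) {
            List<String> wrapped = new ArrayList<>();
            wrapped.add(prefix);
            wrapped.addAll(commandLine);
            return wrapped;
        }

        public static void main(String[] args) {
            CommandPrefixWrapper purify = new CommandPrefixWrapper("purify");
            System.out.println(purify.wrap(
                Arrays.asList("gcc", "-o", "app", "main.o")));
        }
    }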
> 
> 
> When generating RPMs we want to know not only how to build things, but
> where they should be installed.  Traditionally, in the free software
> world, installation is part of the build process.  Will it be in
> Eclipse?  Or will it be an orthogonal issue?
> 
> Tom
> _______________________________________________
> cdt-dev mailing list
> cdt-dev@xxxxxxxxxxx
> http://dev.eclipse.org/mailman/listinfo/cdt-dev



