
Re: [cdt-dev] Future of Managed Build

Hello,

@Jantje sorry, I was a bit disoriented in the thread and missed the real author of the ideas. Please find my comments below.

> I have been trained as an architect so I like to do the opposite. Start from high level very complex and drill down. This is what I'm trying to do now:

Yes, very important points. However, we will need to talk much more before we have a kind of common language. What I suggest is an EMF-oriented approach, not "a tool" to play with project setup. But it is relatively easy to get such a tool once all the ecore schemas are defined.

> 1: agree on the model functionality
There is no "model functionality" planned at all, because it will not be part of the model. The strongest side of EMF is to "define and manage complex data structures". So the "functionality" will live mostly outside the "model", but it will be "configured by" the model. The "functionality" will be to:
1) "transform" the user data from a particular project into a set of manifests for the selected external build tools,
2) run those build tools,
3) and utilize the output.
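To make this concrete, here is a minimal sketch of that outside-the-model "functionality" as I currently imagine it (the names are hypothetical; nothing here is a committed API):

    import java.nio.file.Path;
    import java.util.List;
    import org.eclipse.emf.ecore.EObject;

    // Hypothetical pipeline around the model: the model only carries the data;
    // these services are configured by it but live outside of it.
    interface BuildPipeline {
        // 1) transform the user data of a project into manifests for the external tools
        List<Path> transform(EObject projectModel, Path outputDir);
        // 2) run the selected external build tool against the generated manifests
        int run(String toolId, Path outputDir) throws Exception;
        // 3) utilize the output (parse diagnostics, collect artifacts, report results)
        void utilizeOutput(Path outputDir);
    }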

> 2: agree on how the programmer will have to work with the modeller
The programmer of CDT or the end user? The end user will have nothing to do with the "modeler": it will be either editing project properties via the UI, like today, or (for the really brave people) direct editing of the project metadata manifests. I expect we will have a number of new "dot resources" near .cproject to store that metadata. For the CDT programmer it will be more or less the usual work of "extending the framework". We can try to reduce "the technology impact" to make it easier. But as I see it, many of us are already using modern things like NodeJS and "yarn" during the CDT build, so adding another beast to the Zoo will not change the climate significantly.

> 3: agree on what should come out of the model/project component
A set of "task-oriented" "manifests" to build, run, debug, deploy, or do whatever else we may want with the project content.

> 4: agree on what will be part of what
As we have very different points of view at the moment, I would suggest not talking about "replacing a very old part of CDT" today. Personally, I would like to get rid of "language settings" and all the ugly code around them. Today it is not possible, but it would be cool to start this during the 10.x version.

> 5: agree on who will do what/when
Unclear at the moment, especially in the form of a commitment.

> I think it is important to have our requirements written down so Alexander can take these to the project and alarm us in case our requirements will not be met.
Yes, please; I vitally need input from domain experts.

> I have no idea what 2 and 3 will look like. Maybe Alexander has no idea. I also have no idea how much our input is appreciated.
Your input is highly appreciated, both regarding the interior (interaction with build tools) and the exterior (interaction with the user).

> Backup plan: If the Alexander modelling route fails (earthquake, corona virus...)
For sure, I have no power to guarantee the delivery from the so-called "better world".

> but we know what the model and meta data should look like we can model it using xml like the current managed build does. Still producing the same 3.
Yes, exactly: the schema (or grammar, or ecore) is the key artifact in this story; all the rest are variants of implementations. I suggest an EMF-based one to be utilized by CDT, as we have already tried this approach for other domains and for similar tasks, and it works.

@Liviu please find my comments below

@Jonah, maybe you can re-route us to the proper Wiki page to collect the outcome? The thread is becoming difficult to structure.

Regards,
AF

On 31.01.2020 12:39, Liviu Ionescu wrote:

> On 31 Jan 2020, at 03:11, Jan Baeyens <jan@xxxxxxxxxx> wrote:

>> ... I have asked the question here on the dev list: "Do we have the requirement of readable makefiles?" I think - as long as the build works - nobody really cares whether it is make, cmake, or ninja that is doing the build lifting. If you disagree, please elaborate.
> Well, I really don't know. I can't even say I disagree; just that, as you also said before, things go to places where I don't feel comfortable to be (like very generic modelling tools), or that I personally do not like (like cmake).

> I have nothing against such theoretical approaches, as long as the results have reasonably simple practical applications, I don't have to use them directly, and they do not artificially complicate my run-time.
I'm sure we will be able to manage the complexity. For example (please don't take it literally), "defining the MBS DSL" and "using the MBS DSL" are quite different activities and definitely should not be mixed.
> When designing a build system, I identified two problems: how to define the tools that you want to use in the build (like the compiler, linker, archiver, etc.), and how to define the dependencies between the different parts (files?) that go into the build.
I'm sure we are talking about the same things; what we need is to sync our terminology. For me, "how to define the tools" means "design the metamodel that contains sufficient information about a tool". The same goes for "how to define the dependencies between different parts (files?)": I believe there should be a metamodel entity that describes such a "part". And this is a quite practical exercise, because having a metamodel allows us to do a lot of things:
1) On the IDE side we can employ dozens of EMF-based frameworks: visualizers, editors, generators for any kind of sources and manifests.
2) On the CLI side we are still free to process our artifacts with anything, because we know the grammar.
3) Please don't think that EMF==XMI; this is just the default way to persist a model instance.
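To make the terminology concrete, here is a minimal sketch (my illustration only, with hypothetical entity names like "Tool" and "SourcePart") of how such a metamodel could be defined programmatically with the plain Ecore API:

    import org.eclipse.emf.ecore.EAttribute;
    import org.eclipse.emf.ecore.EClass;
    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.emf.ecore.EReference;
    import org.eclipse.emf.ecore.EcoreFactory;
    import org.eclipse.emf.ecore.EcorePackage;

    public class BuildMetamodelSketch {

        public static EPackage createBuildPackage() {
            EcoreFactory f = EcoreFactory.eINSTANCE;

            // "Tool": sufficient information about a tool to invoke it.
            EClass tool = f.createEClass();
            tool.setName("Tool");
            tool.getEStructuralFeatures().add(attribute(f, "name"));
            tool.getEStructuralFeatures().add(attribute(f, "executable"));

            // "SourcePart": a part (file?) that goes into the build...
            EClass part = f.createEClass();
            part.setName("SourcePart");
            part.getEStructuralFeatures().add(attribute(f, "path"));

            // ...with explicit dependencies on other parts.
            EReference dependsOn = f.createEReference();
            dependsOn.setName("dependsOn");
            dependsOn.setEType(part);
            dependsOn.setUpperBound(-1); // a part may depend on many parts
            part.getEStructuralFeatures().add(dependsOn);

            EPackage pkg = f.createEPackage();
            pkg.setName("build");
            pkg.setNsURI("http://example.org/build"); // placeholder namespace
            pkg.getEClassifiers().add(tool);
            pkg.getEClassifiers().add(part);
            return pkg;
        }

        private static EAttribute attribute(EcoreFactory f, String name) {
            EAttribute a = f.createEAttribute();
            a.setName(name);
            a.setEType(EcorePackage.Literals.ESTRING);
            return a;
        }
    }

In practice the .ecore would rather be authored in the Ecore editor and the Java code generated from it; the point is only that the metamodel, not hand-written code, carries the structure.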


> Basic builders may use implicit rules to define dependencies (like deriving the tool from the file extension and building executables, libraries). More elaborate systems may need explicit rules.

> Defining toolchain capabilities is another grey area, where things can be as simple as a set of static definitions, or as complex as defining plug-ins.
Again, "defining toolchain capabilities" is good for modeling, but "basic builders" are "services" that operates with model instances - they are using the data provided by modeling, but they are not a part of modeling. Ideally the "base builder" should update "derived" (most probably generated) build manifests for a third-party tools (ninja, why not), run it and utilize the output.

---

> Regardless of what methods you choose, after you completely process a project, you end up in memory with a graph of all dependencies and all details required to invoke each tool. Then you convert this graph into a configuration file (let's say a build.ninja) and give it to the builder, hoping that it will recreate the same graph in memory and perform the build.
Maybe; however, I still believe that we will be able to let the invoked tool resolve "the graph" for us, because this is the main idea of the declarative approach: tell me "what" and I will figure out "how".

> Since I'm biased towards ninja, in my tools I decided to organise the internal graph as close as possible to how the ninja input data is structured, such that exporting the configuration file is simple.

> Even more: although in the initial releases I will continue to invoke ninja (or make), in a future release I'm considering going the extra step and running the build internally, since apparently I already have the complete image of the project.

---
I hope the metadata will be tool-agnostic, but to achieve that we will need to consider several build systems. Ninja support should be one of the pluggable choices.
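Just to illustrate what "pluggable" could mean here (a sketch with hypothetical names, not an agreed API): each supported build system would contribute a generator from the same tool-agnostic model to its own manifest format:

    import java.nio.file.Path;
    import org.eclipse.emf.ecore.EObject;

    // Hypothetical extension point: one implementation per external build system
    // (ninja, make, cmake, ...), all consuming the same tool-agnostic model.
    interface BuildManifestGenerator {
        String toolId();                               // e.g. "ninja", "make"
        void generate(EObject model, Path outputDir);  // derive the manifest
    }

    class NinjaGenerator implements BuildManifestGenerator {
        @Override public String toolId() { return "ninja"; }
        @Override public void generate(EObject model, Path outputDir) {
            // write outputDir/build.ninja from the model instance (omitted)
        }
    }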


> However, once you identify a solution, I suggest you try to answer some simple questions, like:
>
> - is it possible to create projects and run builds outside Eclipse, for example from a script in a simple CI environment, like Travis?
> - is it possible to extend this solution to create projects and run builds in another IDE, or not even an IDE, but a more advanced editor, like Visual Studio Code?
This definitely should be supported, and there are a number of approaches to achieve it. The key is that every build manifest has a defined schema (or ecore, or grammar) behind it and can be "deserialized" in another ecosystem. The "base" format (XML, JSON, DSL) is a secondary question, because having the schema (or ecore, or grammar) allows us to make it transparent.
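For example (a minimal sketch assuming the default XMI persistence; the file name is made up), any tool that knows the schema can load the same manifest outside Eclipse with plain EMF:

    import org.eclipse.emf.common.util.URI;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.resource.Resource;
    import org.eclipse.emf.ecore.resource.ResourceSet;
    import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
    import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

    public class ManifestLoader {
        public static EObject load(String path) {
            ResourceSet rs = new ResourceSetImpl();
            // Register the default XMI persistence; in a standalone run the
            // metamodel (the generated EPackage) must also be registered.
            rs.getResourceFactoryRegistry().getExtensionToFactoryMap()
              .put(Resource.Factory.Registry.DEFAULT_EXTENSION, new XMIResourceFactoryImpl());
            Resource r = rs.getResource(URI.createFileURI(path), true);
            return r.getContents().get(0); // the root of the model instance
        }
    }

    // usage: EObject root = ManifestLoader.load("demo.build"); // hypothetical file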

---

> I'll personally proceed with my xPack command-line tools, and once they are functional I'll decide whether it is worth integrating them first in Eclipse or going directly to VSC.


> Regards,

> Liviu




