The current behavior of our optimization function is to favor the highest available version of each installable unit. This can result in situations where the install, uninstall, or update of one particular IU causes an update of something unrelated just because a better version is available. A few things to consider while looking at this:
- We need to be careful with patches and make sure that they still take precedence when installed optionally.
- Which version is picked on an initial install (do we keep the same behavior as today?)
*** Bug 248692 has been marked as a duplicate of this bug. ***
*** Bug 249266 has been marked as a duplicate of this bug. ***
In order to fix such an issue, a notion of distance from the current install is needed. That way, we could minimize the distance between the installed plugins and the update. The main issue is to find the right measure of distance, and the corresponding weight in the optimization function.
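To make the idea concrete, here is a minimal sketch of such a distance measure. Everything here is hypothetical (the class and method names do not exist in p2): each candidate solution is scored by how many installed units it would change, and that score could then be weighted into the objective function.

```java
import java.util.*;

// Illustrative sketch only: scores a candidate solution by its "distance"
// from the current install, i.e. how many installed unit versions it changes.
// None of these names correspond to real p2 classes.
public class InstallDistance {

    // Maps IU id -> version for both the current profile and a candidate plan.
    public static int distance(Map<String, String> installed,
                               Map<String, String> candidate) {
        int changed = 0;
        for (Map.Entry<String, String> e : installed.entrySet()) {
            String newVersion = candidate.get(e.getKey());
            // A unit counts toward the distance if it is removed or re-versioned.
            if (newVersion == null || !newVersion.equals(e.getValue())) {
                changed++;
            }
        }
        return changed;
    }

    public static void main(String[] args) {
        Map<String, String> installed = Map.of("Thing.Core", "1.1.0", "Thing.UI", "1.1.0");
        Map<String, String> keepBoth  = Map.of("Thing.Core", "1.1.0", "Thing.UI", "1.1.0");
        Map<String, String> updateOne = Map.of("Thing.Core", "1.2.0", "Thing.UI", "1.1.0");
        System.out.println(distance(installed, keepBoth));   // 0: nothing changes
        System.out.println(distance(installed, updateOne));  // 1: one unit re-versioned
    }
}
```

The open question from the comment above remains: how heavily a distance like this should weigh against the "higher version is better" term.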
While thinking through the consequences of such a change, I thought about the following use case that we would have to be careful about, especially when installing pieces of the same software in two different steps. Example:
- B requires Thing.Core [1.0.0, 2.0.0)
- C requires Thing.UI [1.0.0, 2.0.0)
- First, I install B; it picks up Thing.Core 1.1.0.v20081010, the highest version available at the time of the installation.
- Second, I install C and it picks up Thing.UI 1.1.0.v20081212, the highest version available at the time of the installation. However, the versions of Thing.Core and Thing.UI have been produced by two different I-builds and are not meant to work together (for example, there is a Thing.Core 1.1.0.v20081212 that should be used instead), and the resulting installation does not work at runtime (e.g. NoSuchMethodError).
With the current optimization function (always prefer the highest version), this example works because we will always max out the dependencies. Still, scenarios can be constructed where a similar problem could occur; however, given that we are not "staying back", its occurrence is diminished. Note that here I'm making the assumption that at install time we are still picking up the highest version, because with the information available in the metadata we cannot guess any better: we don't have information that could help the solver correlate what is installed with what should be installed. We have already explored the notion of affinity / line-ups; however, we have deferred that from 3.5.
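The mechanics of this scenario can be sketched as a "pick the highest version in range" rule. This is an illustration only: versions are compared as plain strings here, which happens to order these particular examples correctly, whereas real OSGi version comparison is segment-by-segment.

```java
import java.util.*;

// Sketch of "pick the highest version in range" — the behavior described above.
// Versions are compared as plain strings for simplicity; all names are illustrative.
public class HighestInRange {
    public static String pick(List<String> available, String lowerInclusive, String upperExclusive) {
        return available.stream()
                .filter(v -> v.compareTo(lowerInclusive) >= 0 && v.compareTo(upperExclusive) < 0)
                .max(Comparator.naturalOrder())
                .orElse(null);
    }

    public static void main(String[] args) {
        // At B's install time only the October build of Thing.Core exists:
        System.out.println(pick(List.of("1.1.0.v20081010"), "1.0.0", "2.0.0"));
        // By the time C is installed, a December build exists too, so C's side
        // of the install resolves against newer content than B's did:
        System.out.println(pick(List.of("1.1.0.v20081010", "1.1.0.v20081212"), "1.0.0", "2.0.0"));
    }
}
```

Both qualifiers satisfy [1.0.0, 2.0.0), so nothing in the metadata ties the two halves of the install to the same build — which is exactly the correlation problem described above.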
FWIW I can confirm that this is a real problem experienced in the field by folks installing something and getting a bunch of stuff updated.
Pascal, your scenario should not happen if there is a dependency from Thing.UI 1.1.0.v20081212 to Thing.Core 1.1.0.v20081212. Remember that the constraints limit the possible solutions, while the objective function discriminates the "best" one among the possible candidates. So if the dependencies are correctly expressed, you should not end up with a broken system. Daniel
* Tightening dependencies
Your analysis is correct, Daniel; however, most plug-in dependencies are unfortunately not specified this way. In the typical Eclipse way, Thing.UI depends on Thing.Core [1.1.0, 2.0.0), and I agree that tightening the dependencies is likely part of the solution (which is what we would get with line-ups). In order to tighten the dependencies, we would need PDE Build to tweak the lower bound of the dependencies at build time. This can probably be done, but it would require some non-trivial work on the PDE side of things, including some additional markup to identify which dependencies need to be tweaked (for example, it probably only makes sense to tweak the dependencies on things that are friends and things that are part of the same "component", e.g. all the p2 plug-ins together). Also, this would make patching a bit harder than it is today, since feature patches would have to be written to change the dependencies both in the group and in the plug-ins.
* Not updating automatically
Another thing to consider is that with such a behavior, someone would need to explicitly install something they don't know about to cause the rev-up of the underlying pieces, unless we have an option to let the user sometimes use the "higher is better" encoding. The example I have in mind is the following: the user only installed B (we ignore C in this example), and consequently got Thing.Core 1.1.0.v20081010, but he does not really care; this is an implementation detail. Now when an update of B, 1.0.1, comes in, he would not get the new version of Thing.Core, 1.1.1, which makes things better. To get it, he would have to explicitly install the Thing group, also bringing down things that he does not need (e.g. Thing.UI) and somewhat breaking the encapsulation.
One could argue that B 1.0.1 could depend on Thing.Core 1.1.1, but they are not necessarily released on the same schedule, nor does B's author want to preclude the usage of B with the old Thing.Core 1.1.0. Again, I'm not downplaying the issue; however, it needs to be clear to everyone what kind of "advantage" we get from the current behavior, and that changing the optimization function to do something else will end up trading annoyed users for happy ones and vice versa. With 3.4, the only problems I know about in the field are those where the user wants to install a patch and the patch is not applicable. The inapplicability goes undetected and we proceed with updating some plug-ins. In 3.5 we are looking at addressing that by having the UI properly detect that the requested operation will not proceed, and consequently not do undesired updates. This is still not ideal, but it helps a little.
The other problem is when people install stuff where some of the prereqs are already installed. The installed versions are in range for the new things, but they get updated anyway. This leads to some unexpected and undesired behaviours.
Jeff, that's because the SAT solver does not know what is already installed. We need to find a nice way to provide that information to the solver. Maybe a solution would be to remove from the optimization function the penalty for the versions of IUs already installed. That way, solutions with already installed IUs would always be better than solutions with updated (or downgraded) IUs.
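A minimal sketch of that weighting idea, using a toy cost model (none of this is the real p2 weighting code): the installed version of an IU gets cost 0, so any solution that keeps it beats any solution that changes it.

```java
import java.util.*;

// Sketch of the idea above: drop the version penalty for whatever is already
// installed, so the solver only pays a cost when it changes something.
// Illustrative only — not real p2 code.
public class KeepInstalledWeights {
    /**
     * @param candidates versions of one IU, highest first
     * @param installed  the currently installed version, or null if none
     * @return cost per candidate: the installed version costs 0, everything
     *         else costs more the further it is from the top of the list
     */
    public static Map<String, Integer> weights(List<String> candidates, String installed) {
        Map<String, Integer> cost = new LinkedHashMap<>();
        for (int i = 0; i < candidates.size(); i++) {
            String v = candidates.get(i);
            cost.put(v, v.equals(installed) ? 0 : i + 1);
        }
        return cost;
    }

    public static void main(String[] args) {
        // With 1.1 installed, keeping it (cost 0) beats updating to 1.2 (cost 1);
        // with nothing installed, the highest version still gets the lowest cost.
        System.out.println(weights(List.of("1.2", "1.1", "1.0"), "1.1"));
        System.out.println(weights(List.of("1.2", "1.1", "1.0"), null));
    }
}
```

Note that with no installed version, this degenerates to the old "higher is better" ordering, which matches the comment above about initial installs.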
right. something like that would do the trick
But Pascal's concerns about the effect of changing the current behavior of the update process will still be valid. I think an option in the update preferences will be needed to allow people to either continue with the current process or choose a more "cautious" one. My feeling is that people relying on the open source version of Eclipse and grabbing plugins over the net from various update sites are likely to be happy with the current update procedure, while people relying on specific supported releases of Eclipse would prefer the cautious approach.
In our use case we can't allow users to decline updates, because of the current optimization scheme. The use case is as follows: the user installs a profile which requires A_1 and B_1. Later, the user checks for updates and sees that A_2 and B_2 have been released. The user wants to keep A_1 but upgrade to B_2. Because of the current optimization scheme, the only way we can support that is to lock A at version 1 and then ask for a provisioning plan. In the simple case where A and B have no interdependencies, this wouldn't be an issue. But if A_1 and B_1 both depend on a complicated chain of requirements that eventually requires C_1, locking A at version 1 may cause failures if B_2 requires C_2 and A_1 can't use C_2. In this situation, you just want the installed software to be prioritized above other versions of the software rather than imposing a hard requirement.
A proposed solution: in the Projector.createOptimizationFunction() method we could give the currently installed software the highest priority. Then, during upgrades, we could specify the version of the software that the user selected as a hard requirement to force upgrades. I'm not exactly sure how the patch technology works; could somebody outline how that functionality could fit into this solution? Or is there a better solution that I'm missing?
Finally, I don't think we need to change the behavior during initial installation. Choosing the latest and greatest by default makes sense, and if we want to we can always add UI for specifying the exact version of features, which would constrain the installation process.
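The trade-off between a hard lock and a soft preference can be sketched with a toy model of the A/B/C scenario above. This is purely illustrative — the real planner encodes such constraints for a SAT solver, and the units and rules here are hypothetical.

```java
import java.util.*;

// Toy sketch of the scenario above, with hypothetical units A, B, C:
// A_n works only with C_n, and B_n requires C_n. A hard lock on A=1 combined
// with the B_2 upgrade leaves no consistent plan; a soft preference lets the
// planner move A to 2 as well. Purely illustrative; not real planner code.
public class LockVsPrefer {
    // A plan is (versionA, versionB, versionC).
    static boolean consistent(int a, int b, int c) {
        return a == c && b == c;
    }

    // Find a plan with B fixed at the requested version. aLock of 0 means
    // "no lock": try the installed version of A first, then allow changes.
    static int[] plan(int requestedB, int aLock, int installedA) {
        List<Integer> aCandidates = aLock != 0
                ? List.of(aLock)              // hard requirement on A
                : List.of(installedA, 1, 2);  // soft preference: installed first
        for (int a : aCandidates)
            for (int c = 1; c <= 2; c++)
                if (consistent(a, requestedB, c)) return new int[]{a, requestedB, c};
        return null; // no consistent plan
    }

    public static void main(String[] args) {
        // Soft preference: A moves to 2 so the B_2 upgrade can proceed.
        System.out.println(Arrays.toString(plan(2, 0, 1))); // [2, 2, 2]
        // Hard lock on A=1: the plan fails outright.
        System.out.println(plan(2, 1, 1));                  // null
    }
}
```

The point of the sketch is the difference in failure mode: the soft preference degrades gracefully by changing A, while the hard lock makes the whole request unsatisfiable.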
Sorry Jed, but I'm not completely grasping the initial scenario (e.g. did the user install A_1 and B_1?). Do you use the term "profile" as in a p2 profile or a Pulse profile? Could you create a planner test demonstrating the scenario? This would be very helpful. Thx.
*** Bug 278378 has been marked as a duplicate of this bug. ***
Bug 278378 is a better use case than this one (not sure why it's been marked as a dupe; they're different bugs). If there are updates available to the platform, but I request the uninstall of a feature locally, no updating should occur. It's actually unrelated to 'what is in the platform at the moment'; what's needed is the ability to go from: X_1 Y_1 to X_1 even when X_2 might exist in a remote repository.
Users don't really care about the fact that we are using a common mechanism to manage all additions, updates, and removals from the system. They just want us to do what they tell us to do. ;-) I.e., if we tracked the nature of the request being made, then presumably we could use that information to make a more intelligent decision about the transformation we should make.
I think trying to guess only the subsystem that actually needs to be modified could result in worse situations. Here is a use case: a repository contains A1, A2, A3; B depends on A.
Today's behavior: we guarantee that, independently of how you got to a state, having a given set of roots installed implies that you are running the same bundles.
User 1:
- Install B and A1 => he ends up with B and A1 installed.
- Uninstall A1 => he ends up with B and A3.
- At this point the only root installed is B.
User 2:
- Install B => B and A3 are installed.
- At this point the only root installed is B.
If changes were made to address stability, then the final state of the User 1 scenario would be B and A1 (because at some point he had installed A1); he would see that the root he has installed is B, yet this is not the same set as what User 2 is running. Right now the approach we are thinking about to solve this problem is to work on a concept of "line-up", which would define a preferred version for each IU of the system and be used to drive the optimization function. However, the composability of line-ups is unclear and could still end up in similar situations.
I'm all for handling interesting cases correctly. What I was arguing for was just using all of the information you *did* get from the user when deciding what to do. As a user, for example, I would think of uninstalling something as being unrelated to whether or not there are new versions of things available. Indeed, I wouldn't even expect us to need to be checking.
*** Bug 290784 has been marked as a duplicate of this bug. ***
I would say: only install other dependencies if they are really needed by the plugin/feature the user selected, and report what is actually getting installed; don't do that under the hood. So if you have A (1.0) and B (1.0), both depending on C (1.0), and I do an update of A (1.1) that still says it can build on C (1.0), don't upgrade C to 1.1 even if that one is there. Keep it that way. If I then upgrade A to 2.0 and it says I have to use C 2.0 as well, then you have to do that; and if B didn't specify that as a supported dependency, there is no other way than to also upgrade B to 2.0. But report this to the user; don't just install it. What does Eclipse do now if B, for example, doesn't have any upgrade to 2.0 at all, so it is not compatible with the C 2.0 that you are now upgrading to? Does Eclipse report this to the user, tell him that B won't work, and offer the option to cancel the install?
In the field we really had problems last week with an update that we rolled out for our product. If users did a check for updates, it also picked up Eclipse 3.5.1, but even when a user didn't select that one and only selected our product, the installer failed because the Eclipse update site was not really working. This really should be avoided. It can't be that if I just select our product to be updated, the update fails because it tries to install something from a URL that is broken. Now you really have to go into the preferences and disable a certain URL to be able to go on. For our users this is just strange; they have no idea what to do. The same goes for optional dependencies: our product depends optionally on another plugin from another URL. If that URL is enabled and I install or check for updates on our product, I also see that optional dependency; and even if I only select our product, because I am not interested in the optional plugin, it gets installed anyway.
Again, the Eclipse installer is too smart; it does way more than the user asked for. And again, the only way to not install the optional dependency is to disable the right URL in the preferences. To sum up, the Eclipse updater is just way too smart. Do what the user asks and nothing more; if there are additional things it has to install, report this to the user, and if there are conflicts because different plugins have different dependencies, report that as well.
*** Bug 294036 has been marked as a duplicate of this bug. ***
I just committed a new optimization function on the R3_6_api_cleanup branch. The new optimization function will keep installed IUs that satisfy the requirements, and will uninstall installed IUs that are no longer needed. All current planner tests pass with the new optimization function. I now need to build new test cases to make sure that we are going in the right direction; I will start with the cases already discussed here. Pascal proposes to create a patch against 3.5.1, available from an update site, to allow people to try the new optimization function. I will post the URL here when it is ready.
*** Bug 297357 has been marked as a duplicate of this bug. ***
It feels wrong to get updates for unrelated content; thanks for addressing this. I'll definitely be available to test a 3.5.1 fix if you get one :)
(In reply to comment #23) I'm wondering how this change is evolving; is M6 going to include it? How can we switch the resolver to the new optimization function? We are eager to see this released, so if there is anything we can help you with, maybe testing, let me know. Helmut
Helmut, the new optimization function is in M5. Please let us know if it fits your needs.
Daniel fixed this in M5.
*** Bug 265607 has been marked as a duplicate of this bug. ***
This change is rather revolutionary. It breaks some of the general assumptions about how the p2 resolver works, and I don't think there is a "one size fits all". We need to think this over. Is there any chance that the user could be given the choice: "Update everything to the latest"? Or perhaps: "Favor updates over already installed material"? Or even: "Favor what's installed over newer versions in the repository" (unchecked by default)? I think many users actually believe the above cases are the defaults. I was one of them, and that assumption controls the default behavior for the generation of version ranges in Buckminster. This fix will likely force us to make major changes. The use case described in this bug is very valid, but there are other cases that should be considered.
In p2cudf, we have implemented several objective functions for p2 using the strategy design pattern. We could do the same in regular p2. For instance, we could ship with two objective functions, the old one and the new one, and let integrators of p2 decide which one to choose. If you are willing to see such a new feature, please open a new bug on Bugzilla.
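A sketch of what such a strategy-based design could look like; the interface and the two strategies carry hypothetical names and are not the actual p2cudf or p2 API.

```java
import java.util.*;

// Sketch of the strategy idea above: pluggable objective functions, so an
// integrator can choose "prefer latest" or "prefer installed".
// Names are illustrative, not real p2cudf or p2 classes.
public class ObjectiveStrategies {

    interface Objective {
        // Lower cost = better candidate version for one IU.
        int cost(List<String> candidatesHighestFirst, String candidate, String installed);
    }

    // Old behavior: the higher the version, the lower the cost.
    static final Objective PREFER_LATEST =
            (candidates, v, installed) -> candidates.indexOf(v);

    // New behavior: keeping the installed version is free; any change costs.
    static final Objective PREFER_INSTALLED =
            (candidates, v, installed) -> v.equals(installed) ? 0 : 1 + candidates.indexOf(v);

    static String best(Objective o, List<String> candidates, String installed) {
        return candidates.stream()
                .min(Comparator.comparingInt(v -> o.cost(candidates, v, installed)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<String> versions = List.of("2.0", "1.5", "1.0"); // highest first
        System.out.println(best(PREFER_LATEST, versions, "1.5"));    // 2.0
        System.out.println(best(PREFER_INSTALLED, versions, "1.5")); // 1.5
    }
}
```

The appeal of the pattern is exactly what the comment suggests: the solver machinery stays the same, and only the cost assignment is swapped per integrator.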
(In reply to comment #0)
> If you are willing to see such new feature, please open a new bug on bugzilla.
I'm not sure that is what I want. My concern is not different p2 integrators; it's more what happens when updating the Eclipse IDE.
On 03/08/2010 09:55 AM, Haigermoser, Helmut wrote:
> +1 for stability (vs. overeagerness to update anything that's not in the provplan )
I would agree if those were the two choices. Unfortunately, the ramifications extend beyond that. Consider my example with feature A and bundle B in the forum thread [1]. I update A and expect B to be updated since it's directly referenced from A. That, however, does not happen. I'm left with a new A and the old B. That's under-eagerness. Stability can be controlled without the bugfix. If I, as the author of A, favor stability and really want to prevent the installation of a newer version of B, then I declare a strict dependency from A to B, effectively preventing any such update. If I don't declare a strict dependency, I'm making the statement "I really want new versions of B when they become available". It's up to me as the author of the feature to decide. [1] http://dev.eclipse.org/mhonarc/lists/p2-dev/msg02789.html
(In reply to comment #32)
Hey Thomas :) For me this bug is about p2's overeagerness to update things that are not part of the provplan AND are not even related to the things that get updated. An example:
- IU a (1.0.0)
- IU b (1.0.0) depends on a
- IU c (1.0.0) has no dependencies on either a or b
We've seen examples of p2 updating c even if the prov plan merely asks for updates to "a" or "b"; that's what I meant by "overeager". As for your example, one could argue that stability should come first, and if you really wanted bundle "B" to be implicitly updated by updates to feature A, then you should set the dependencies such that they would fail if "B" was not updated. That said, I'm not opposed to the idea of updating in your use case; however, if the decision is between two evils (overeagerness vs. stability), I'd vote for stability every time, and craft the dependencies such that updates happen the way I want them to happen... Helmut
(In reply to comment #33) > and craft the dependencies such that updates happen the way I want them to happen... I agree with that. And in fact, that's what I'm saying too :-) If I craft a dependency with a range such as [1.0.1,1.0.1] then I say "Stability first". If I craft it as: [1.0.0,1.1.0), then I say, "I want updates to happen".
Let me take an example that highlights the problem:
1. User A installs from repository X.
2. Repository X is updated with new content.
3. User B installs from repository X.
At this point user B has newer content than user A.
4. User A updates from repository X.
Now I would expect users A and B to have the same thing. I can actually verify that this is the case by looking at the installed features in the install wizard. If I look at the installed bundles, however, I see that it's not the case at all. That is one bad consequence. Another, and perhaps worse, is that there is no way for user A to get what user B has, short of first uninstalling the affected features.
(In reply to comment #35)
> Let me take an example that highlights the problem:
>
> 1. User A installs from repository X.
> 2. Repository X is updated with new content.
> 3. User B installs from repository X.
>
> At this point user B has newer content than user A.
>
> 4. User A updates from repository X.
>
> Now I would expect users A and B to have the same thing.
Users will have the same thing for:
- explicitly updated content
- content that MUST be updated due to tight dependencies
They won't have the same thing for content that is not implicitly updated; they will get the most stable version considering what was already installed. Think of a scenario where user A has worked for months with his state and then decides to explicitly update some content: he would not want to update content that the dependencies say will still work... HTH, Ciao, hh
I haven't looked at the current implementation in 3.6, but 3.5 just does weird things that I don't expect to happen, especially when I say "install new software": I don't expect Eclipse to update stuff that is completely outside of what I want to install. Also, in our product we have an optional dependency on the SQL Explorer plugin. But if the SQL Explorer update site is enabled, and a user who never installed it gets an update of our plugin, then (parts of) the SQL Explorer plugin are installed as well, completely without the user telling Eclipse to do that (and Eclipse doesn't really mention it as you walk through the update process). That's not what I expect. I expect that Eclipse only installs what I select when I do "install new software", and if that new software needs a newer version of something that is already installed, I want to be informed about this before proceeding. If I do "check for updates", then Eclipse should check everything and give me the options of what it wants to update. If I then only select plugin/feature X, it really shouldn't update the rest; it should stay out of that, except if the plugin/feature that I want to update needs something else to be updated. But again, I want to be informed and be able to cancel everything then.
(In reply to comment #34)
> (In reply to comment #33)
> > and craft the dependencies such that updates happen the way I want them to happen...
>
> I agree with that. And in fact, that's what I'm saying too :-)
>
> If I craft a dependency with a range such as [1.0.1,1.0.1] then I say
> "Stability first". If I craft it as: [1.0.0,1.1.0), then I say, "I want updates
> to happen".
I agree, as long as we keep the current fix of not updating content that is neither part of the prov plan nor part of any dependency. So here is what I'd like to see Eclipse 3.6 do. ProvPlan "update Feature A from 1.0.0 to 1.0.1" should result in:
- update A from 1.0.0 to 1.0.1, along with all required IUs whose version changed, if the dependency allows
- don't update features that are unrelated to Feature A but are available for updates
(In reply to comment #38)
> ProvPlan "update Feature A from 1.0.0 to 1.0.1" should result in
>
> - update A from 1.0.0 to 1.0.1 and all required IUs whose version changed, if
> dependency allows
>
> - don't update features that are unrelated to Feature A but are available for
> updates
That makes a lot of sense. An update of A should update the transitive scope extending from A, nothing else. That transitive scope may of course overlap with other transitive scopes, which in turn may or may not set additional constraints, but that's all OK.
(In reply to comment #36)
> Users will have the same thing for
> - explicitly updated content
I'm uncertain what you mean by "explicitly updated content". If I request an update of feature A, should the bundle B that it lists as a dependency be updated? The dependency in this case would allow but not enforce the update.
> - content that MUST be updated due to tight dependencies
My concern is that the new approach means "never update unless the dependency enforces it", regardless of whether the feature author indicated that updates are desired. I'm the author of bundle B; I have an agreement with all feature authors that use my bundle where I promise not to break the API without changing the major or minor version. The whole point of that agreement is that I can deliver bugfixes of bundle B and just increase the micro number. Those bugfixes will then be automatically consumed by all features that use bundle B. The "never update" approach eradicates that whole scenario: the updates will never take place. Only new installs (of the same feature, same version, from the same repository) will receive the updated bundle. In essence, it is no longer the feature author's decision to allow or prevent updates.
I agree that the situation is not good. In reply to comment #35 about stability:
> Now I would expect users A and B to have the same thing.
Even though this is what we should strive for, I think we can't avoid situations where installs will be slightly different. For example, if I install X, which needs EMF, then install Y, which also needs EMF, and then uninstall Y, I can be left with a newer version of EMF than a user who had just installed X (if EMF had been updated on the server before Y was installed).
About implementing the transitive closure: "forcing" an upgrade on a transitive closure will likely get us closer to the expected behaviour (and probably somewhere in between where we are today and where we were in 3.5). However, I think this is bound to put back on the table the age-old debate of require vs. include, and how deep we should really search for updates. That said, given that a majority of features are constrained through other features, we may just be fine.
(In reply to comment #40) I guess the question now is how to improve here: can 3.6 fix the optimizer such that the original bug (don't update unrelated content) remains fixed but the new behavior is less restrictive?
One possible solution, I think, is to mark during slicing the IUs in the transitive closure of the root dependencies. That can be taken into account later on in the objective function, to "glue" only the IUs that do not appear in that set of IUs.
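A sketch of that marking step, under the assumption that requirements are available as a simple id-to-ids map (illustrative only, not the real slicer code): compute the set reachable from the roots being changed, and let the objective function allow version changes only inside that set.

```java
import java.util.*;

// Sketch of the idea above: mark every IU reachable from the roots being
// changed; the objective function can then restrict version changes to that
// marked set and leave everything else untouched. Illustrative only.
public class TransitiveScope {
    public static Set<String> closure(Map<String, List<String>> requires, Set<String> roots) {
        Set<String> marked = new LinkedHashSet<>();
        Deque<String> todo = new ArrayDeque<>(roots);
        while (!todo.isEmpty()) {
            String iu = todo.pop();
            if (marked.add(iu)) {
                // Follow this IU's requirements, if any.
                todo.addAll(requires.getOrDefault(iu, List.of()));
            }
        }
        return marked;
    }

    public static void main(String[] args) {
        // A requires B; C is installed but unrelated to the requested update of A.
        Map<String, List<String>> requires = Map.of("A", List.of("B"));
        Set<String> scope = closure(requires, Set.of("A"));
        System.out.println(scope.contains("B")); // true: B may be updated
        System.out.println(scope.contains("C")); // false: C must stay untouched
    }
}
```

This matches the "update the transitive scope extending from A, nothing else" behavior discussed earlier in the thread.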
This is what I was thinking as well. Let's mark this for M7.
Created attachment 161373 [details]
Patch with the transitive closure idea
Thomas, you may want to try the above patch that implements the idea of discarding from the transitive closure of the new elements to install the "glue" to installed IUs.
This patch was included in M6, right?
yes.
I have released a few test cases earlier this week.
*** Bug 342349 has been marked as a duplicate of this bug. ***