Bug 106185 - [plan item] Update Enhancements
Summary: [plan item] Update Enhancements
Status: RESOLVED FIXED
Alias: None
Product: Platform
Classification: Eclipse Project
Component: Update (deprecated - use Eclipse>Equinox>p2)
Version: 3.1
Hardware: All All
Importance: P4 enhancement
Target Milestone: 3.2
Assignee: Platform-Update-Inbox CLA
QA Contact:
URL:
Whiteboard:
Keywords: plan
Depends on:
Blocks:
 
Reported: 2005-08-05 12:05 EDT by Mike Wilson CLA
Modified: 2006-04-24 20:56 EDT
CC List: 19 users

See Also:


Attachments
Example of really busy update manager (28.30 KB, image/gif)
2006-04-24 16:37 EDT, Eugene Kuleshov CLA

Description Mike Wilson CLA 2005-08-05 12:05:27 EDT
As the number and range of Eclipse plug-ins continues to grow, it becomes increasingly important to have 
a powerful update/install story. For instance, if more information about an Eclipse install was available 
earlier, it would be possible to pre-validate that it would be a compatible location to install particular new 
features and plug-ins. This information could also be used to deal with conflicting contributions. Update 
should also be improved to reduce the volume of data that is transferred for a given update, and PDE 
should provide better tools for creating and deploying updates. [Update, Platform Runtime, PDE]
Comment 1 Eugene Kuleshov CLA 2005-10-05 09:36:16 EDT
Please also do an end-user study and UI improvements for the update wizard and
platform manager dialog. For instance:

-- Allow checking/unchecking all selected update sites

-- Allow grouping update sites in the update sites list

-- Better handling of connection timeouts and other connection errors.
Currently the update manager shows a "can't connect" popup dialog about 3 or 4
times for the same update site, and then does so again from the page where the
user is supposed to choose updates to install.

-- The platform manager should allow enabling/disabling/uninstalling several
plug-ins in one operation

-- Disable + uninstall should be a single operation and should not require an
intermediate restart

-- The default logic for enabling "uninstall" should always allow uninstalling
features as long as there is no custom install handler. Currently, if an
Eclipse site is shared between several Eclipse instances (e.g. after migration
from 3.1 to 3.1.1), uninstall is disabled for previously installed features.

-- Currently, if you install a feature and then choose not to restart,
uninstall for that feature will be disabled.
Comment 2 Pat McCarthy CLA 2005-10-05 09:46:39 EDT
I'd add to that list the ability to configure an Eclipse-based product so that
the default target for an update manager install is not the root eclipse
directory tree, but some other identified home known to the product.

This allows the Eclipse-based product to keep its install tree clean while
still supporting dynamic additions, without requiring that new features use the
colocation-affinity attribute to force a specific install location.

The goal is really "anywhere but here", where "here" is the product's eclipse
install directory - but this is the default location currently proposed by the
wizard.
Comment 3 Leif Frenzel CLA 2005-10-05 09:58:58 EDT
During the 3.1 development cycle there was a plan item for investigating whether
feature relationships could be simplified, e.g. by dropping the concept of
feature includes. 

This looked like a good idea to me, because feature includes seem to offer
nothing that could not also be done with dependencies, they make automated
handling of features more complicated, and they are confusing for end users
(for reasons pointed out here:
http://dev.eclipse.org/mhonarc/lists/platform-dev/msg00640.html).

I wondered if there was any progress on that topic. Has this item been dropped
for 3.2?
Comment 4 Pascal Rapicault CLA 2005-10-05 10:00:45 EDT
You guys at Innoopract seem to have a good understanding of this whole world -
could you dedicate a person to this update problem?
Comment 5 Eugene Kuleshov CLA 2005-10-05 10:03:57 EDT
Actually I think that "include" is a good idea. However, it should be possible
to do includes from other update sites.

Also, currently if you install a standalone feature and an earlier version of
that feature had been installed as an include, that include will be disabled
and its parent will become invalid. It would make sense to allow multiple
feature versions to be enabled, with the runtime choosing an appropriate
version based on the dependency tree.
Comment 6 Leif Frenzel CLA 2005-10-05 10:35:31 EDT
(In reply to comment #5)
> Actually I think that "include" is a good idea.
Can you give an example where "include" allows a feature provider to do
something he can't do without them (i.e. using dependencies only)?

>However it should be possible to do includes from other update sites.
I think this is a different issue. My question is only whether there is a
meaningful difference between includes and dependencies.

A different problem is of course how to find required other features on
different sites than the originating site. But that is a problem that applies to
both includes and dependencies.
Comment 7 Eugene Kuleshov CLA 2005-10-05 11:07:16 EDT
(In reply to comment #6)
> (In reply to comment #5)
> > Actually I think that "include" is a good idea.
> Can you give an example where "include" allows a feature provider to do
> something he can't do without them (i.e. using dependencies only)?
> 
> >However it should be possible to do includes from other update sites.
> I think this is a different issue. My question is only whether there is a
> meaningful difference between includes and dependencies.
> 
> A different problem is of course how to find required other features on
> different sites than the originating site. But that is a problem that applies to
> both includes and dependencies.

In my understanding, dependencies have to already be there, while includes will
be installed automatically.

The inclusion mechanism should also allow resolving version conflicts when two
unrelated features need different versions of some other feature.
Comment 8 Leif Frenzel CLA 2005-10-05 12:00:55 EDT
(In reply to comment #6)
> A different problem is of course how to find required other features on
> different sites than the originating site. But that is a problem that applies
> to both includes and dependencies.
Taking that thought a step further, let us distinguish between two kinds of 
task that one might be interested in:

1) given you have a site and know a particular feature is on that site, install 
it. (That is what you can do with the Update API, or with Standalone Update, or 
using the UM UI in the Platform.)

2) given you want to install a particular feature from a site, examine it for 
pre-requisite features, find out the sites on which those are (which may be 
different sites), determine the install order, install. (If I don't 
misunderstand Eugene, that is what the UM should be able to do. Partly, this is 
already possible, but there are limits, e.g., as he points out, with includes.)

Tasks of kind 1) are the elements of tasks of kind 2), but the latter include 
much more; in particular, they have to make a sort of closed-world assumption (all 
pre-requisites are on some bookmarked site). Given the current state of affairs, 
that assumption cannot be made by Update. (And given the open nature of Eclipse 
and the browse/bookmark-metaphor of the Update UI, it is questionable whether it 
is even desirable, or will ever be possible, to make that assumption.) Users may
or may not have bookmarked the necessary sites for all pre-requisites. Also, it
is worth considering how users are supposed to know which sites to include in
their bookmarks. (I think something like the latter question was behind the
idea of 'discovery sites', but it seems this did not work out.)

Be that as it may, a client of the Update API/Standalone Update may be mostly 
interested in tasks of kind 1). Suppose you have an installer tool that manages 
Update Sites itself and then uses Update API or Standalone Update for installing 
from them, batching calls to the API. (Actually, we at Innoopract have such a 
tool for the Yoxos distribution. Companies may have some campus-wide
distribution mechanism that works similarly.) There is no need for users to
manage Update Site bookmarks. (Indeed, the whole point is to eliminate the
bookmark metaphor from the user's point of view.) The installer decides from
which site to install them and then sends the appropriate calls to the API. From
the users' point of view, there are only features. If they select something to 
install, they don't need to know about dependencies, includes, and sites. They
just choose, and it gets installed.

Now, feature includes are intended to suggest to users that some features belong 
to a group which it makes sense to install together. That is fine, but I wonder 
if it should make a difference to the UM layer that is supposed to do only tasks 
of sort 1). But it actually does. If you use a single-install command with 
Standalone Update and pass a site and a feature that happens to _include_ 
another feature, you get two features installed. Pass a site and a feature that 
_requires_ another feature, you get an error. That is not plausible. The problem 
is that a command that looks like it is for doing a 1)-task is attempting to do 
a 2)-task. It is tricky enough to deal with that in installers like the one I
described.

Hope this makes sense ...
Comment 9 Gunnar Wagenknecht CLA 2005-10-05 16:01:12 EDT
(In reply to comment #6)
> (In reply to comment #5)
> > Actually I think that "include" is a good idea.
> Can you give an example where "include" allows a feature provider to do
> something he can't do without them (i.e. using dependencies only)?

Not really related to the Update Manager but PDE Build does a nice job with
building and packaging included features. 

> >However it should be possible to do includes from other update sites.
> I think this is a different issue. My question is only whether there is a
> meaningful difference between includes and dependencies.

IMHO "include" means that a specific "dependency" comes together with this
feature whereas a "plain" dependency needs to be resolved manually. 

The whole SDK is built around the "include" thing, which could also be described
as "grouping", IMHO.

The comparison might sound weird but Windows XP comes with some burning support,
i.e., it is "included", whereas this burning support "depends on" a burner,
which is not included but needs to be bought ;)
Comment 10 Dejan Glozic CLA 2005-10-05 16:31:27 EDT
Correct. 'Include' allows you to break your functionality into building blocks, 
but you remain responsible for supplying those building blocks. You can wire 
them into a number of trees and surface the roots as offerings on the update 
site. The installation recursively walks the include tree and installs all the 
features.

In contrast, 'require' simply expresses dependency. You are not responsible 
for delivering that feature but it must be present for you to install.

Sometimes these concepts are blurred. If FOO requires BAR, people tend to 
think that Update should check if BAR is already installed and, if not, 
install it prior to installing FOO in order to satisfy the dependency.
Perhaps if we add an attribute to the 'requires' clause (auto-install="true"), 
the authors of FOO can set the flag to true and instruct Update to install 
BAR if missing. In the absence of the attribute, the behaviour would be as 
today, i.e. the installation will fail if BAR is missing. Include would 
continue to be used solely for flexibility and grouping.

The idea above can only work if BAR is on the same site as FOO. Otherwise, a 
search would need to be performed (as described in comment 8).
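The include/require semantics described above can be sketched as a toy model (a minimal Python sketch; the feature names and data structures are hypothetical, not the actual Update Manager API): installing a root feature recursively walks its include tree, while an unsatisfied 'requires' simply fails the install.

```python
# Toy model of 'include' vs. 'require' semantics (hypothetical names,
# not the real Update Manager API).

def install(name, catalog, installed):
    """Install `name` plus, recursively, everything it includes."""
    feature = catalog[name]
    for req in feature.get("requires", []):
        if req not in installed:
            # 'require' only expresses a dependency: fail if it is missing.
            raise RuntimeError("missing required feature: " + req)
    for inc in feature.get("includes", []):
        # 'include' means we ship the building block ourselves: recurse.
        install(inc, catalog, installed)
    installed.add(name)

# Hypothetical site contents: FOO ships BAR, but merely requires BAZ.
catalog = {
    "FOO": {"includes": ["BAR"], "requires": ["BAZ"]},
    "BAR": {},
    "BAZ": {},
}
```

With BAZ already installed, installing FOO pulls in BAR automatically; without BAZ, the install fails, which is exactly the asymmetry the thread is debating.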
Comment 11 Gunnar Wagenknecht CLA 2005-10-06 01:18:33 EDT
(In reply to comment #10)
> Perhaps if we add an attribute to the 'requires' clause (auto-install="true"), 
> the authors of FOO can turn the flag to true and instruct Update to install 
> BAR if missing. In the absence of the attribute, the behaviour would be as 
> today i.e. the installation will fail if BAR is missing. Include would 
> continue to be used solely for flexibility and grouping.

If such an attribute is introduced, it should be a pointer to another update
site, i.e. not "auto-install" but something like "install-from='http://...'".
Comment 12 Gunnar Wagenknecht CLA 2005-10-06 01:22:00 EDT
(In reply to comment #11)
> If such an attribute is introduced, it should be a pointer to another update
> site, i.e. not "auto-install" but something like "install-from='http://...'".

FYI, I've submitted bug 11730 to track this. 
Comment 13 Pascal Rapicault CLA 2005-10-06 10:19:28 EDT
Adding the "auto-install" tag on the requires will not solve anything unless it
is added to all features, which then defeats the point of adding it.
Moreover, I don't think the producer of the feature is in a position to decide
whether or not "auto-install" should be on.
To me it is the responsibility of the product configurer (i.e. an update
preference) to decide whether or not requires should be automatically downloaded.
Even with that, only part of the problem will be solved.

To me the problem lies in the semantics of "requires", which locks you into a
world where everything has to be installed by Update. I think features should
express their requirements on plug-ins, and the requirement on a feature should
be read as "a source where I can get the required plug-ins from" if they are
not present. The list of "required plug-ins" would be made of all the immediate
plug-ins prereq'ed by the plug-ins included in a feature (see the example below).

This would enable scenarios where Update is not the only way to deliver
plug-ins and would make the user experience better. For example, in 3.1 it is
hard to leverage Update in applications that have been deployed through JNLP
because Update needs the RCP feature to be present (see bug #).

*Example*: 
  Feature org.eclipse.gef is as follows:
    requires feature org.eclipse.platform	3.1.0
    contains plugins:
      org.eclipse.draw2d	3.1.0
      org.eclipse.gef		3.1.0
 
  In the new model it should be:
     requires feature org.eclipse.platform	3.1.0
     contains plugins:
       org.eclipse.draw2d	3.1.0
       org.eclipse.gef		3.1.0  
     requires plugins:
       org.eclipse.core.runtime (3.1.0, 4.0.0]
       org.eclipse.ui.views     (3.1.0, 4.0.0]
       org.eclipse.ui.workbench (3.1.0, 4.0.0]
       org.eclipse.jface        (3.1.0, 4.0.0]
       org.eclipse.swt		(3.1.0, 4.0.0]

Here, if org.eclipse.ui.views is missing, then the complete org.eclipse.platform
feature is downloaded, which should guarantee that all plug-ins necessary to get
views resolved will be downloaded.
If all the plug-ins required by the gef feature were "resolved", then the
download of the gef feature should be successful.

How does that solve the initial problem?
- first, it limits the number of cases where you have to download a required
feature if you already have an install base
- second, it keeps the necessary separation between includes and requires, but
still always automatically downloads required features (this could be
controlled by a preference)
- third, it is complementary to the proposal to add "a location to get the
feature from"

Note that if some companies/products really want to have all the required
features satisfied, a preference could be set at the product level; however, it
would be turned off by default.
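The version intervals in the example above, such as `(3.1.0, 4.0.0]`, can be matched mechanically. A minimal sketch (Python, hypothetical helper names; note that the ranges as written in this comment use an exclusive lower and inclusive upper bound, unlike the usual OSGi `[min,max)` convention):

```python
def parse_version(s):
    """'3.1.0' -> (3, 1, 0), so tuples compare component-wise."""
    return tuple(int(part) for part in s.strip().split("."))

def in_range(version, spec):
    """Check `version` against a range like '(3.1.0, 4.0.0]'.

    '(' / ')' are exclusive bounds, '[' / ']' inclusive.
    """
    lo_inclusive = spec[0] == "["
    hi_inclusive = spec[-1] == "]"
    lo_text, hi_text = spec[1:-1].split(",")
    lo, hi = parse_version(lo_text), parse_version(hi_text)
    v = parse_version(version)
    lower_ok = v >= lo if lo_inclusive else v > lo
    upper_ok = v <= hi if hi_inclusive else v < hi
    return lower_ok and upper_ok
```

Under this reading, org.eclipse.core.runtime 3.1.0 itself would not satisfy `(3.1.0, 4.0.0]` while 3.1.1 would, which is worth keeping in mind when writing such ranges.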

Comment 14 Leif Frenzel CLA 2005-10-07 10:42:12 EDT
(In reply to comment #13)
I agree, with both points: 
> Moreover I don't think the producer of the feature is in position to decide
> whether or not "auto-install" should be on. 
> To me this is the responsibility of the product configurer (ie an update
> preference) to decide whether or not require should be automatically >downloaded. 

If update tries to find dependencies automatically, that should be a preference
option. Also, clients of the Update API and Standalone Update should be able to
decide whether they wish this to be performed.

>Even with that, only part of the problem will be solved. 
The problem that can be solved neither by the feature provider nor by the Update
Manager is that there may be a dependency that is unfulfilled (within the known
sites) at install time. That problem can only be solved by someone who builds an
update site that contains a set of features which is closed under dependency,
i.e. a third party in the game.

I usually call this the role of a 'distributor' (in analogy to Linux 
distributors): we do it with the Yoxos distribution, any provider of a company-
internal Update Site does the same thing, and (from time to time) there are
discussions at eclipse.org to do it for all eclipse.org projects.

> To me the problem lies in the semantics of the "requires" which locks you in a
> world where everything has to be installed by update. 
I think that is not necessarily a bad idea. On the contrary, the scenarios which 
I have mostly in mind are such that you _can_ assume there is an update site 
that contains features which include all required plugins. (You cannot assume a 
closed world in general, but you can in some context, namely, if you are 
distributing a controlled set of features inside an organization. Don't forget,
we're discussing under the theme of 'Enterprise ready'.)

The trouble, IMHO, is that the browse/bookmark metaphor in the Update UI is
just not appropriate for installation and update procedures. That metaphor
suggests a loosely connected network of sites, of which the user is free to
select some. And that contradicts the closed-world assumption.
Comment 15 Leif Frenzel CLA 2005-10-07 11:07:02 EDT
(In reply to comment #9)
> The comparison might sound weird but Windows XP comes with some burning support,
> i.e., it is "included", whereas this burning support "depends on" a burner,
> which is not included but needs to be bought ;)
The weak point in that analogy is that usually the included feature can be used
standalone: e.g. JST (from WebTools) includes WST. You can use either WST alone
or JST, in which case you also need WST. But you cannot use Windows XP burning
support without Windows XP, although you can use the OS without burning support.

In fact, the WST/JST example is exactly analogous to the WinXP/burner part of
your example. And that suggests that they should be declared as in a dependency
relationship, not in one of inclusion.

But there is a much deeper point to this. That point has two different aspects:

1) From the point of the user, includes make installing and uninstalling
counterintuitive (see the link in item #3).

2) Especially in an enterprise context, it is often not the user but an
external tool (admin tool, scripts, etc.) that performs the installation.

Such a tool needs an API/command line interface that allows it to install into 
a given Eclipse installation. Standalone update does that job nicely (although 
it has the limitation that you have to let the UM _in_ the installation do the 
work). But the primary operation of such an API is an 'install feature' 
operation, which should be:

1) atomic: call it to install a feature A, you get an installation that is equal
to the installation as it was before, with exactly one feature more (namely, A).
You don't expect such an operation to silently install any number of other things.
2) symmetric with respect to its reverse operation (uninstall): install a
feature A, then uninstall it, now you have the installation as it was before you
ran the operations. Also, ceteris paribus, it should be possible to first
install A and then succeed(!) in de-installing it. In particular, if you install
A, you would not expect that you cannot de-install because de-installation would
result in an invalid configuration. What you expect is that after de-install the
installation is as before, and it wasn't invalid then.

The install command from update core has neither property 1) nor 2). And both
failures are due to feature includes.
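Stated as executable checks against a toy installer (a hypothetical sketch, not the Update API), the two properties read:

```python
# Two properties an 'install feature' operation should have, per the text
# above, checked against a toy installer (all names hypothetical).

def install(state, feature):
    state.add(feature)

def uninstall(state, feature):
    state.discard(feature)

def is_atomic(state, feature):
    """Property 1: installing A adds exactly A and nothing else."""
    before = set(state)
    install(state, feature)
    return state == before | {feature}

def is_symmetric(state, feature):
    """Property 2: install followed by uninstall restores the prior state."""
    before = set(state)
    install(state, feature)
    uninstall(state, feature)
    return state == before
```

The toy installer trivially passes both checks; an installer that silently pulls in included features fails `is_atomic`, and one that refuses or cascades the uninstall fails `is_symmetric`.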
Comment 16 Dejan Glozic CLA 2005-10-07 11:31:05 EDT
> 1) atomic: call it to install a feature A, you get an installation that is equal
> to the installation as it was before, with exactly one feature more (namely, A).
> You don't expect such an operation to silently install any number of other things.
> 2) symmetric with respect to its reverse operation (uninstall): install a
> feature A, then uninstall it, now you have the installation as it was before you
> ran the operations.

I agree that 'include' fails on your requirement (1) because installing a root 
feature may pull in a number of included features. However, I don't understand 
(2). Uninstalling the root feature should also uninstall all the included 
features unless they are also included by other features (reference counting). 
If you installed FOO that included BAR and then tried to uninstall FOO right 
away, you should be able to do it.
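The reference-counting behaviour described above can be sketched as follows (a minimal hypothetical model, not the actual implementation): an included feature is only removed once no remaining root feature includes it.

```python
from collections import Counter

# Toy data: root feature -> its included features (hypothetical names).
INCLUDES = {"FOO": ["BAR"], "QUX": ["BAR"]}

def install(root, installed, refcount):
    installed.add(root)
    for inc in INCLUDES.get(root, []):
        refcount[inc] += 1          # one more root holds a reference
        installed.add(inc)

def uninstall(root, installed, refcount):
    installed.discard(root)
    for inc in INCLUDES.get(root, []):
        refcount[inc] -= 1
        if refcount[inc] == 0:      # last reference gone: drop the include
            installed.discard(inc)
```

If FOO and QUX both include BAR, uninstalling FOO alone leaves BAR in place; only removing the last referencing root removes BAR as well. Note that this sketch does not model plain 'requires' edges, which is exactly the gap Leif's VE/JEM example in the next comment exposes.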
Comment 17 Leif Frenzel CLA 2005-10-07 11:53:09 EDT
(In reply to comment #16)
>However, I don't understand 
> (2). Uninstalling the root feature should also uninstall all the included 
> features unless they are also included by other features (reference counting). 
> If you installed FOO that included BAR and then tried to uninstall FOO right 
> away, you should be able to do it.
For example, WebTools requires (via dependency) JEM, which comes from the VE
project, where the JEM feature is included in VE. Now suppose you

- install Eclipse/EMF/etc. until all requirements for JEM are fulfilled,
- install JEM
- install WTP

let's call that installation 'A'.

Intuitively, if I as a user go to the VE update site and install VE, then decide
I don't want it, I should be able to de-install. I expect that I am, after
de-installation, back at state A, which was a valid state.

But actually the UM refuses to uninstall VE, because it would invalidate the
configuration. That is correct, of course, because VE includes JEM, uninstall
would remove both, and then the dependency from WTP to JEM is unfulfilled.

More generally, any feature could declare another feature as included, and when
it gets uninstalled, it would either take the included feature with it into the
abyss (point 1), or it is no longer possible to uninstall it (point 2).
Comment 18 anatoly techtonik CLA 2005-10-19 10:39:39 EDT
Another update enhancement is to increase update speed and reduce server load
and traffic by decreasing the amount of downloaded meta-info about features.

see bug 83741 for details
Comment 19 Eugene Kuleshov CLA 2005-10-19 11:20:52 EDT
(In reply to comment #18)
> Another update enhancement is to increase update speed, reduce servers load and
> amount of traffic passed by decreasing amount of downloaded metainfo about
features.
> see bug 83741 for details

Currently, the update component requests site.xml (and probably the feature
manifests) from the web server about 3 or 4 times. Some of those calls come
from the update wizard (the UI), which I believe is a fundamentally bad idea.

Besides that, you can configure Eclipse to check for new updates, and if this
background task finds any updates it will suggest installing them. However, if
you decide not to (e.g. you can't restart Eclipse at the moment), the results
of this scan are lost. I think the update manager should actually download
these updates and provide some way to install them later (e.g. a non-modal
popup dialog with buttons like "Yes"/"No"/"Remind me later"). It should
probably keep the job in the Progress view, so the user can click its
hyperlink to activate installation of the prefetched features.
Comment 20 Philippe Ombredanne CLA 2005-10-19 12:28:04 EDT
Regarding update download speed, I am currently experimenting with an
implementation of HTTP that supports suspend/resume as well as multithreaded
HTTP downloads, to patch the update manager. This does not address mirror load,
but it does address download speed and interrupted downloads.
Once this is working, I may play with BitTorrent.
I will share the code here to get some feedback.
Comment 21 Peter Manahan CLA 2005-10-19 14:00:19 EDT
Regarding the multi-threaded HTTP downloads: I believe this was tried in v3.0 
and was found to yield no improvement (the source may even be in the Update CVS 
somewhere). My belief (and I could be off) is that HTTP servers typically 
allocate one socket per client connection and that socket is throttled. This 
is a performance optimization servers make in order to scale. So to get any 
improvement in download speed you need to hit multiple HTTP servers (mirrors) 
to get multiple pipes. I'm not sure whether, in the case of an HTTP server 
farm, you couldn't potentially get multiple connections via redirects. The 
same isn't true for FTP servers, where you can open multiple connections to 
the same server. Any info you find out one way or the other would be useful :-)
Comment 22 Philippe Ombredanne CLA 2005-10-19 15:45:48 EDT
Pete, thanks!
I will dig through the CVS... I love CVS archeology.
Does anybody have a starting pointer, like a tag, revision, or branch?
The idea is to multi-thread from multiple mirrors when they are available.
It seems to give decent results at first.
Comment 23 Lothar Werzinger CLA 2005-12-02 15:35:55 EST
At least it would be nice to connect to some/all of the different update sites in parallel. Currently, if an update site is not responding, the update manager waits for the timeout, when it could be retrieving information from other update sites (of other plug-ins).
Comment 24 Dejan Glozic CLA 2006-04-24 14:52:17 EDT
We have provided enough Update enhancements in 3.2 to commit this plan item for 3.2:

- Better HTTP support and caching
- Server-side mirror support based on geography, time zone and rating
- Support for loading install handlers from multiple JARs
- Enhancement to the delta feature handler to be able to compute deltas within JARs
- Verification and installation of bundles can now be done in the background
- Multiple-selection operations in the UI now supported (configure/unconfigure/uninstall)
- Update site enhancement utility - creates digests for faster browsing and searching and packs plug-in JARs using pack200 for faster downloads.
Comment 25 Eugene Kuleshov CLA 2006-04-24 15:11:18 EDT
(In reply to comment #24)
> We have provided enough Update enhancements in 3.2 to commit this plan item for
> 3.2:

I wonder what the criteria for "enough" are?
How about addressing the UI-related issues highlighted in comment #1?

> - Better HTTP support and caching

I still see 2 or 3 connections to the same update site when choosing features for update. Another remote connection happens when the update UI is verifying the validity of the chosen set of features. Note that all of this happens in the UI thread.

> - Enhancement to the delta feature handler to be able to compute deltas within
> JARs

How about creating those deltas?
Comment 26 Dejan Glozic CLA 2006-04-24 16:06:00 EDT
> I wonder what the criteria for "enough" are?
- We created this plan item to cover various Update enhancements that we were going to make in 3.2. It does not imply that no further enhancements will happen in 3.3.

> How about addressing the UI-related issues highlighted in comment #1?

- They will be looked into in 3.3. As a general statement, we have done what we could within the limitations of the current code base, java.net networking constraints, and the limited cycles we could apply to this item. There is no reason to rush additional fixes into 3.2 - there will be more work happening in this area in the 3.3 time frame.
Comment 27 Eugene Kuleshov CLA 2006-04-24 16:13:47 EDT
(In reply to comment #26)
> > I wonder what the criteria for "enough" are?
> - We created this plan item to cover various Update enhancements that we
> were going to make in 3.2. It does not imply that no further enhancements
> will happen in 3.3.
> 
> > How about addressing the UI-related issues highlighted in comment #1?
> - They will be looked into in 3.3. As a general statement, we have done what
> we could within the limitations of the current code base, java.net networking
> constraints, and the limited cycles we could apply to this item.

Multiple connections have nothing to do with java.net constraints. The update manager could have just pulled all required metadata up front and then let the user select all the features without connecting to remote sites.

> There is no reason
> to rush additional fixes into 3.2 - there will be more work happening
> in this area in the 3.3 time frame.

There is a very critical reason. Not all users can use milestone releases, so if those changes do not make it into 3.2, those users will have to wait another year.

I guess eclipse.org developers don't see this as much of an issue, since they work on an intranet with very fast connections to the update sites. However, that is not the case for most third-party update sites.
Comment 28 Dejan Glozic CLA 2006-04-24 16:26:16 EDT
We are hoping that the digest support we are currently setting up for Eclipse.org, and making available for other sites, will make things much better. In essence, we parse all the features and create one digest file that allows the Update client to fetch all the data required for browsing and searching by opening a single connection. In addition, Pack200 will cut the payload size in half. These two enhancements will dramatically improve 'real world' update sites.
Comment 29 Eugene Kuleshov CLA 2006-04-24 16:37:41 EDT
Created attachment 39348 [details]
Example of really busy update manager

(In reply to comment #28)
> We are hoping that digest support we are currently setting up for Eclipse.org
> and making available for other sites will make things much better. In essence
> we are parsing all the features and creating one digest file that allows Update
> client to fetch all the data required for browsing and searching by opening a
> single connection. In addition, Pack200 will cut the payload size in half.
> These two enhancements will dramatically improve 'real world' update sites.

I am afraid this is the wrong direction. The most time-consuming operation is the network connection. So the update manager should cut down the number of connections to the same site, or at least move them out of the time window when the user is interacting with the UI.

Personally, I haven't had trouble with the update manager downloading archives from the update sites, but the wizard UI is really clunky.

I am attaching an example of my update manager with a few dozen update sites in bookmarks. It takes about 30 seconds just to open this wizard, before running any updates.

If any of the selected update sites has errors, I get a few popup dialogs after the wizard UI comes back from scanning the update sites. Those popup dialogs appear at 10-20 second intervals while the rest of the Eclipse UI is blocked. I guess this is what you refer to as a limitation of java.net. However, the point is that there should not be any connections happening while the user is interacting with the UI. All that data should be prefetched beforehand.
Comment 30 Branko Tripkovic CLA 2006-04-24 16:45:17 EDT
Eugene,
cutting the number of connections is exactly what the digest does. As it stands now, if you have X features on your update site, you will have to open X connections plus some 2-3 more for site.xml and mirrors.xml. With the digest, the number of connections will be constant (3-4) no matter how many features are hosted on the update site.
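The connection arithmetic in this comment can be made explicit (a sketch under the comment's stated figures; the per-site overhead of 3 fixed requests is taken from the comment, not measured):

```python
# Connection counts per update site, per the description above.

SITE_OVERHEAD = 3  # site.xml, mirrors.xml, and the like (figure from the comment)

def connections_without_digest(num_features):
    # one connection per feature manifest, plus the fixed overhead
    return num_features + SITE_OVERHEAD

def connections_with_digest(num_features):
    # the digest replaces all per-feature fetches with a single download
    return 1 + SITE_OVERHEAD
```

For a site with 50 features this is 53 connections without the digest versus a constant 4 with it, independent of the feature count.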
Comment 31 Dejan Glozic CLA 2006-04-24 16:51:03 EDT
> However the point is
> that there should not be any connection happening while interaction with the
> UI. All that data should be prefetched before that.

This is an unfair restriction on the Update UI. I don't see this logic applied to the Team UI. In the CVS view, the connection to the remote server is made on demand, i.e. delayed until the user does something (for example, expands a node in the CVS tree). Depending on the state of the network, the expansion can feel 'clunky' in that I need to wait a while for the children of the node to show up. I don't blame the CVS UI for the network latency, though (and I use it daily for Eclipse development).

Comment 32 Eugene Kuleshov CLA 2006-04-24 16:58:52 EDT
(In reply to comment #31)
> This is an unfair restriction on the Update UI. I don't see this logic applied
> to the Team UI. In the CVS view, connection to the remote server is made on
> demand i.e. delayed until users do something (for example, expand a node in the
> CVS tree). 

This is what I call an unfair comparison. The CVS plugin can't prefetch the entire repository content, so the CVS Repositories view uses lazy initialization. However, if you look at the Synchronize view, it does not make any remote connections during user interaction and is very responsive. This is what users would expect from the update manager as well. 

Also, I'd say that the size of the metadata the update manager has to prefetch is smaller than the size of the synchronization diff loaded by the Synchronize view for more or less active projects in a large workspace (e.g. the eclipse.org modules).

> Depending on the state of the network, the expansion can feel
> 'clunky' in that I need to wait for a while to get the children of the node to
> show up. I don't blame the CVS UI for the network latency though (and I use it
> daily for Eclipse development).

You can't compare a "browser" view with a "synchronize" view. But even then, the CVS Repositories view does not do any fetching in the foreground, and it does not block the rest of the Eclipse UI with modal dialogs.
Comment 33 Eugene Kuleshov CLA 2006-04-24 16:59:10 EDT
(In reply to comment #30)
> cutting the number of connections is exactly what digest does. As it stands now
> if you have X features on your update site you will have to open X connections
> + some 2-3 other ones for site.xml and mirros.xml. With digest this number of
> connections will be constant (3-4) no matter how many features are hosted on
> the update site.

Sorry, I probably should have been clearer. I meant connections to different sites. Besides, it doesn't really matter how many resources have to be scanned on the same or multiple sites if that happens as a background job, so the user can continue working in the IDE (e.g. update something from version control, etc.). Unfortunately, in the current implementation not all of that is a background task.
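[Editorial sketch] The "scan as a background job" idea can be sketched with plain java.util.concurrent; the real Update UI would use Eclipse's Jobs API instead. The class name, pool size, and fetchSiteXml placeholder are assumptions for illustration, not actual Update Manager code.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

// Sketch: scan bookmarked update sites on a worker pool so the wizard
// (UI thread) never blocks on the network and no modal dialogs appear
// mid-interaction. Illustrative only.
public class SitePrefetcher {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Submit one background fetch per bookmarked site and return immediately;
    // the caller (the UI) stays responsive while fetches run.
    public List<Future<String>> prefetch(List<String> siteUrls) {
        return siteUrls.stream()
                .map(url -> pool.submit(() -> fetchSiteXml(url)))
                .collect(Collectors.toList());
    }

    // Block until one result is ready (a real UI would poll or listen instead).
    public String await(Future<String> result) {
        try {
            return result.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Placeholder for the real network fetch of a site's site.xml.
    private String fetchSiteXml(String url) {
        return "<site url='" + url + "'/>";
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

Errors from individual sites would then surface as failed futures to be reported in one batch, rather than as a series of blocking popups.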

It would also make sense to use keep-alive mode when retrieving multiple artifacts from the same site, so that reusing a single connection would save time on handshakes. If I am not mistaken, Jakarta Commons HttpClient supports that.
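[Editorial sketch] The keep-alive idea does not even require an external library: java.net.HttpURLConnection keeps HTTP/1.1 connections alive by default, so back-to-back requests to the same host can reuse one socket (Jakarta Commons HttpClient offers the same via its connection manager). The artifact paths and the local demo server below are hypothetical.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Sketch: fetch several artifacts from one site over persistent
// HTTP/1.1 connections. Illustrative only, not Update Manager code.
public class KeepAliveFetch {

    // Fetch one resource; leaving the default headers (no "Connection: close")
    // lets the JVM keep the socket open for the next request to this host.
    static String fetch(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                return in.readLine();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Demo against a local server: two artifact requests to the same host,
    // eligible for reuse of one keep-alive socket.
    static String demo() {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
            server.createContext("/", ex -> {
                byte[] body = "ok".getBytes();
                ex.sendResponseHeaders(200, body.length);
                ex.getResponseBody().write(body);
                ex.close();
            });
            server.start();
            String base = "http://localhost:" + server.getAddress().getPort();
            String result = fetch(base + "/feature1.jar") + ","
                    + fetch(base + "/feature2.jar");
            server.stop(0);
            return result;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```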
Comment 34 Dejan Glozic CLA 2006-04-24 17:11:10 EDT
These are all good suggestions that the Update team will take into account while planning 3.3.
Comment 35 Eugene Kuleshov CLA 2006-04-24 17:17:47 EDT
(In reply to comment #34)
> These are all good suggestions that Update team will take into acount while
> planning 3.3.

Here is another good one: eat your own dog food. For example, make all nightly, integration, and milestone platform builds available through the update manager.

Comment 36 Jeff McAffer CLA 2006-04-24 20:56:40 EDT
That is an excellent suggestion that has been discussed many times before. As I am sure you can appreciate, we have finite resources and have to do our best to allocate them. Please don't dump on the team that has been working hard with the resources they have.

To date, Update has been largely viewed as a product/release delivery mechanism. It is only now that we have proper plugin and feature version numbers that we can even consider using Update at development time and eating our own dog food. With that under our belt, the chances of using the Update manager every day are more realistic.

Having said that, we will still be working with high-speed, low-latency connections (for the most part) and only a few sites, so the community needs to pitch in here. Pointing out use cases and scenarios is a great help. Concrete suggestions for design/coding improvements are even better. Of course, working patches rock.