Currently, for both the ECF IRetrieveFileTransferContainerAdapter and the ISendFileTransferContainerAdapter, calls to sendRetrieveRequest and sendOutgoingRequest create and schedule a job to do the actual file transfer. Some use cases dictate the need to group file transfer jobs by creating Job subclasses that override the Job.belongsTo(Object) method, and/or to control the underlying job in other ways. It would be convenient to extend the ECF filetransfer API to allow clients to control the creation and use of jobs to some degree. Three possible designs to consider:

1) Add a method to IRetrieveFileTransferContainerAdapter:

public void setJobFactory(IFileTransferJobFactory factory)

where IFileTransferJobFactory would be a new interface with a method something like:

public RetrieveFileTransferJob createRetrieveFileTransferJob(String name)

2) Add a method to IIncomingFileTransfer:

Job getJob()

This approach has the disadvantage that the client does not have as much control over the Job as in #1.

3) Add a method to IIncomingFileTransferReceiveStartEvent:

public IIncomingFileTransfer receive(OutputStream streamToStore, IFileTransferJobFactory factory) throws IOException;

which would use the given factory to create the job. This approach has the advantage that the top-level API does not have to be modified, keeping things simpler for client applications.
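To make the factory idea concrete, here is a minimal sketch of what a client-supplied job factory could look like. All types below are simplified local stand-ins (the real org.eclipse.core.runtime.jobs.Job has a much larger surface); the class names follow the proposal above, but their bodies are assumptions for illustration only.

```java
// Simplified stand-in for org.eclipse.core.runtime.jobs.Job
abstract class Job {
    private final String name;
    Job(String name) { this.name = name; }
    String getName() { return name; }
    // Clients override this to group jobs into families
    public boolean belongsTo(Object family) { return false; }
    abstract void run();
}

// Stand-in for the proposed provider-side job type
class RetrieveFileTransferJob extends Job {
    RetrieveFileTransferJob(String name) { super(name); }
    void run() { /* provider fills in the actual transfer loop */ }
}

// The proposed factory interface from option 1/3
interface IFileTransferJobFactory {
    RetrieveFileTransferJob createRetrieveFileTransferJob(String name);
}

public class JobFactorySketch {
    public static void main(String[] args) {
        // A client-supplied factory whose jobs belong to a custom family
        final Object myFamily = new Object();
        IFileTransferJobFactory factory = name -> new RetrieveFileTransferJob(name) {
            @Override public boolean belongsTo(Object family) { return family == myFamily; }
        };
        Job job = factory.createRetrieveFileTransferJob("download-1");
        System.out.println(job.getName() + " inFamily=" + job.belongsTo(myFamily));
    }
}
```

The point of the factory is exactly this last step: the provider asks the factory for a job, and the client's subclass decides grouping behavior via belongsTo.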
I guess I'm the first one that would make use of this so I'd be happy to start implementing it.
(In reply to comment #1)
> I guess I'm the first one that would make use of this so I'd be happy to start
> implementing it.

Hi Thomas...I would be fine with you implementing it. I just want to do two things:

1) Jointly choose the API (1, 2, 3, or something else) that gives you what you need but is as simple-yet-general as possible. Having thought about it a little while writing this up, I'm still not strongly inclined toward any one approach (i.e. 1, 2, or 3), so I would like to settle on that first (i.e. before 2).

2) Be very careful with the refactoring of AbstractRetrieveFileTransfer and AbstractSendFileTransfer necessary to support this, so that we don't inadvertently break anything. Currently no one (that I know of) is actually creating their own providers based upon AbstractRetrieveFileTransfer and AbstractSend..., so now is obviously the time to do this.

Thanks in advance for your help with this. Hopefully I'll see you/can repay over food/drink at EclipseCon.
I think the best solution would be a combination of #2 and #3. There are two reasons why I think it would be beneficial to get hold of the job:

1. It would make it very easy to wait for that single job in case you want a synchronous transfer.

2. You might want to register job change listeners on the job or, in case the caller knows what kind of job it is (because they also registered the factory), perhaps communicate with it in other ways.
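The two reasons above can be sketched with plain threads standing in for Eclipse Jobs; join() plays the role of waiting for the transfer synchronously, and the listener mimics what an IJobChangeListener's done() callback would give you. Everything here is an illustrative stand-in, not the real Jobs API.

```java
import java.util.ArrayList;
import java.util.List;

public class JobAccessSketch {
    // Stand-in for a job change listener's done() notification
    interface ChangeListener { void done(String jobName); }

    // Thread as a stand-in for an Eclipse Job
    static class TransferJob extends Thread {
        private final List<ChangeListener> listeners = new ArrayList<>();
        TransferJob(String name) { super(name); }
        void addJobChangeListener(ChangeListener l) { listeners.add(l); }
        @Override public void run() {
            // ...perform the transfer, then notify listeners (reason 2)...
            for (ChangeListener l : listeners) l.done(getName());
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TransferJob job = new TransferJob("retrieve");
        job.addJobChangeListener(name -> System.out.println(name + " done"));
        job.start();
        job.join(); // reason 1: trivially wait for a synchronous transfer
    }
}
```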
I started implementing #3 and ran into some issues. They all have solutions, but I'd like your opinion before I proceed.

1. The org.eclipse.ecf.filetransfer bundle is not dependent on org.eclipse.runtime, so it has no knowledge of a 'Job'. I resolved this by adding an interface, IIncomingFileTransferJob, with the methods schedule() and cancel(). This interface can be implemented by a Job without adding any methods.

2. I assume that the receive() method that is added to IIncomingFileTransferReceiveStartEvent must also be added to IIncomingFileTransferReceiveResumedEvent.

3. The current FileTransferJob is a non-static member class that directly accesses protected attributes and methods of the AbstractRetrieveFileTransfer. In order for the job that implements IIncomingFileTransferJob to do what it's supposed to do, the interface between this job and the actual IRetrieveFileTransfer must be formalized in an interface. This is where I stopped, since I realize that this needs more careful consideration. At the least, this interface must provide access to both streams and to code that will fire the paused, resumed, and receive-data events. It must also be able to cancel the transfer with an exception.
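The decoupling trick in point 1 can be sketched like this. The interface name follows the comment; the implementing class is a simplified hypothetical stand-in (the real Job.cancel() actually returns boolean, for instance, so the exact signatures here are assumptions).

```java
// Point 1: the org.eclipse.ecf.filetransfer bundle only sees this
// interface, so it keeps no dependency on the runtime/jobs bundle.
interface IIncomingFileTransferJob {
    void schedule();
    void cancel();
}

// Hypothetical stand-in for a Job whose existing methods already match,
// so implementing the interface adds no new methods to the class.
class TransferJobStandIn implements IIncomingFileTransferJob {
    private volatile boolean cancelled;
    public void schedule() { System.out.println("scheduled"); }
    public void cancel() { cancelled = true; }
    boolean isCancelled() { return cancelled; }
}

public class DecouplingSketch {
    public static void main(String[] args) {
        // The filetransfer code works purely against the interface
        IIncomingFileTransferJob job = new TransferJobStandIn();
        job.schedule();
        job.cancel();
    }
}
```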
Created attachment 90884 [details]
Implementation proposal

This patch is intended as an implementation proposal. It is fully functional (at least I think so; the URLRetrieveTest and URLRetrievePauseResumeTest both run OK), but it is fairly extensive, and although I tried to make it as non-intrusive as possible, I've made some choices that will no doubt be subject to discussion.
Hi Thomas,

(In reply to comment #4)
> I started implementing #3 and ran into some issues. They all have solutions but
> I'd like your opinion before i proceed.
>
> 1. The org.eclipse.ecf.filetransfer is not dependent on the org.eclipse.runtime
> so it has no knowledge about a 'Job'. I resolved this by adding an interface,
> IIncomingFileTransferJob with the methods schedule() and cancel(). This
> interface can be implemented by a Job without adding any methods.

Seems reasonable.

> 2. I assume that the receive() method that is added to the
> IIncomingFileTransferReceiveStartEvent must also be added to the
> IIncomingFileTransferReceiveResumedEvent.

Yes.

> 3. The current FileTransferJob is a non static member class that directly
> accesses protected attributes and methods of the AbstractRetrieveFileTransfer.
> In order for the job that implements the IIncomingTransferJob to do what it's
> supposed to do, the interface between this job and the actual
> IRetrieveFileTransfer must be formalized in an interface. This is where I
> stopped since I realize that this needs more careful consideration.

Yes. This is where my own thinking led me too...obviously the class (or some new extracted interface) has to be exposed externally...I'm not sure of the best way to do that.

> At the least, this interface must provide access to the both streams and to
> code that will fire the paused, resume, and receive data events. It must also
> be able to cancel the transfer with an exception.

Right...if it's an interface. Might be an abstract superclass with final and/or protected methods or something...but that's not the nicest either.
(In reply to comment #5)
> Created an attachment (id=90884) [details]
> Implementation proposal
>
> This patch is intended as an implementation proposal. It is fully functional
> (at least I think so, the URLRetrieveTest and the URLRetrievePauseResumeTest
> both run OK) but it is fairly extensive and although I tried to make it as non
> intrusive as possible, I've made some choices that no doubt will be subject to
> discussion.

Thanks Thomas...I'll try very hard to review/comment on this attachment today 2/27, but I have some other commitments that may impinge, so it may be a day before we can move forward further. I still would like to shoot for getting this into next Monday's integration build, however.
Thomas...generally the proposal looks very good. I'm a little uncomfortable with IFileTransferJobSupport/createJob(String,IFileTransferJobSupport), although I don't yet have any concrete alternatives to propose. I would like to play around with it a little more.
(In reply to comment #8)
> Thomas...generally the proposal looks very good. I'm a little uncomfortable
> with IFileTransferJobSupport/createJob(String,IFileTransferJobSupport),
> although I don't yet have any concrete alternatives to propose. I would like
> to play around with it a little more.

Hi Thomas. I realized that in order to let clients create Job instances (and override methods like belongsTo, etc.) it was necessary to create a Job subclass in the filetransfer API: FileTransferJob. The reason for this is that the *provider implementation* (not the API) has to actually set the contents of the Job.run(IProgressMonitor monitor) method, because providers will differ in how/what they do while the job is actually running...i.e. some will be stream based, but others may not...so having the 'support' interface doesn't really work for all providers.

FileTransferJob is very simple...it has a method:

public final void setFileTransferRunnable(IFileTransferRunnable fileTransferRunnable)

that allows the provider to set what is actually run just before starting/scheduling the job. The run(IProgressMonitor) method is then implemented to simply turn around and call the IFileTransferRunnable.

Now, with FileTransferJob in the API, the receive can look like this:

public IIncomingFileTransfer receive(OutputStream streamToStore, FileTransferJob fileTransferJob) throws IOException;

Then, instead of passing in a factory, the client can simply subclass FileTransferJob (rather than Job), and the subclass can override any methods they like (except, of course, setFileTransferRunnable and run(IProgressMonitor), which are reserved for the provider implementation).

Before I go further with this, what do you think? I'll attach a patch so you can look at it...the impl works just fine.
I expect to also add the methods to the pause event, and getJob() to IFileTransfer (I expect it will return a Job rather than a FileTransferJob...I can't think of a reason for clients to mess with the FileTransferJob once it's actually started). You might have to update to HEAD to get this patch.
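The FileTransferJob pattern described in the comment above can be sketched as follows: the provider injects the actual transfer loop just before scheduling, run() only delegates to it, and clients subclass to override grouping methods like belongsTo. All bodies here are simplified stand-ins for the real ECF/Eclipse types (the real run takes an IProgressMonitor, the real class extends Job), so treat the signatures as assumptions.

```java
// Stand-in for the runnable the provider injects
interface IFileTransferRunnable { void performFileTransfer(); }

// Stand-in for the proposed API class (extends Job in the real API)
class FileTransferJob {
    private IFileTransferRunnable runnable;

    // Reserved for the provider implementation; final so clients can't override
    public final void setFileTransferRunnable(IFileTransferRunnable r) {
        this.runnable = r;
    }
    // Final: simply turns around and calls the injected runnable
    public final void run() { runnable.performFileTransfer(); }

    // Clients may override grouping methods such as this one
    public boolean belongsTo(Object family) { return false; }
}

public class FileTransferJobSketch {
    static final Object DOWNLOAD_FAMILY = new Object();

    public static void main(String[] args) {
        // Client side: subclass only to customize grouping, never run()
        FileTransferJob job = new FileTransferJob() {
            @Override public boolean belongsTo(Object family) {
                return family == DOWNLOAD_FAMILY;
            }
        };
        // Provider side: inject the actual transfer work, then run
        job.setFileTransferRunnable(() -> System.out.println("transferring"));
        job.run();
        System.out.println("inFamily=" + job.belongsTo(DOWNLOAD_FAMILY));
    }
}
```

The design point is the split of responsibilities: run() and setFileTransferRunnable belong to the provider, everything else is open to the client subclass.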
Created attachment 91030 [details] FileTransferJob-based proposal
I did consider a Job subclass, but I rejected it because of the org.eclipse.runtime dependency that it would imply for the org.eclipse.ecf.filetransfer bundle. I guess you're adding that dependency now? I'll take a look at your patch (which I will try out immediately).
(In reply to comment #11)
> I did consider a Job subclass but I rejected it because of the
> org.eclipse.runtime dependency that it would imply on the
> org.eclipse.ecf.filetransfer bundle. I guess you add that dependency now?
>
> I guess I can look at your patch (which I will try out immediately).

It does add a dependency on the jobs package org.eclipse.core.runtime.jobs (via Import-Package) in the API (because of FileTransferJob)...but I think that's OK, for two reasons:

1) It doesn't imply all the rest of core runtime, as jobs is in a separate Equinox bundle, I believe.

2) There's already a dependency on the jobs API in org.eclipse.ecf.provider.filetransfer, so for the existing uses (e.g. p2) it doesn't have any incremental cost.

Anyway, I don't think it's going to be a problem (for p2/platform anyway).
Scott, I have some problems applying your patch. It seems to cover only the org.eclipse.ecf.filetransfer bundle, and when applied it generates compile errors in provider.filetransfer and provider.filetransfer.ecf. Did you forget to include those bundles in the patch?
Created attachment 91192 [details]
FileTransferJob-based proposal

Here's another shot. This includes some additions/changes since 91030, which this obsoletes. Also added a URLRetrieveWithCustomJob test case.
Thomas...if the structure of things in 91192 looks OK to you, I would like to release these changes to HEAD. I don't mean to rush you, but I would like to get the changes in on 2/29/2008 to follow project build/process rules and get these additions into the 3/3 integration build.
Scott, The patch looks fine. My objectives are all covered.
(In reply to comment #16)
> Scott,
> The patch looks fine. My objectives are all covered.

I've released the changes and additions to HEAD. All tests pass. Also added the contribution from Thomas to the ECF IP log, and created new javadocs.

Thanks Thomas. Please do let us know if there are other use cases we need to support for you. BTW, I'm thinking that it might be helpful at a certain stage to create some utility classes to 'wrap' a multi-jobbed file transfer (i.e. one where a separate request/job is created for n different parts of the file) and associate the jobs with one another as a family. If your work would be helpful with that, please do let us know and/or consider a contribution.

Marking this bug as fixed.
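The 'family' grouping idea mentioned above can be sketched with plain threads standing in for Eclipse Jobs: several jobs for parts of one file share a family object, so they can be waited on (or cancelled) together. In the real Jobs API this is Job.belongsTo(Object) plus the job manager's family operations; the class below is a hypothetical stand-in for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class FamilySketch {
    // Thread as a stand-in for one per-part transfer job
    static class PartJob extends Thread {
        final Object family;
        PartJob(Object family, Runnable work) { super(work); this.family = family; }
        boolean belongsTo(Object f) { return f == family; }
    }

    public static void main(String[] args) throws InterruptedException {
        Object fileFamily = new Object(); // one family per multi-part file
        List<PartJob> jobs = new ArrayList<>();
        for (int part = 0; part < 3; part++) {
            final int p = part;
            jobs.add(new PartJob(fileFamily, () -> System.out.println("part " + p)));
        }
        for (PartJob j : jobs) j.start();
        // "Join the family": wait for every job that belongs to it
        for (PartJob j : jobs) if (j.belongsTo(fileFamily)) j.join();
        System.out.println("all parts done");
    }
}
```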
Closing, and adding the 'contributed' keyword.