Re: [alf-dev] SCM Workspace discussion

I think we should focus on A, as that is the only option I consider realistic.  It is also worth noting that B3 and A are really the same 
thing; it is just that in B3 you are using an SCM tool that has more 
capabilities.  They are the same inasmuch as the SCM's capabilities 
dictate the features available to the user.

The point I raised in the meeting is that the ramification of scenario A 
is simply that the SCM service might have to run on multiple servers.  
I do not think this is an issue because:

a)  these servers would all likely have a Web services container running 
anyway, because the scenario in question is one where something like a 
Build service is already running on multiple computers.

b) If you implement B1 or B2, some service has to be running on these 
servers anyway to receive the files.  It seems like it would be less 
overhead to ask an SCM service running on that system to populate a 
workspace than it would be to ask an SCM service on another system to 
populate a workspace, and then have some second process bundle up that 
workspace and ship it off to yet another system. 
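
To make that concrete, a rough sketch of the kind of call scenario A 
implies is below.  This is just for discussion; it is not an agreed ALF 
interface, and the names and parameters are invented.  The point is simply 
that the service is thin: it sits next to the tool that needs the files 
and wraps the vendor's existing SCM client.

    // Illustrative sketch only; not the ALF spec, names are invented.
    // Scenario A: a small service running next to the tool (e.g. on a
    // build machine) that uses the vendor's own SCM client to materialize
    // files onto a filesystem that this service can write to directly.
    public interface ScmWorkspaceService {

        /**
         * Populate (or refresh) a workspace on the local filesystem.
         *
         * @param repositoryUri location of the SCM repository (vendor specific)
         * @param versionSpec   branch/label/revision to fetch
         * @param localPath     directory this service can write to directly
         */
        void populateWorkspace(String repositoryUri, String versionSpec, String localPath);
    }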

Finally, I still think that the scope of B1 or B2 would be far too 
ambitious for this project.  I also think they have very real security 
ramifications: you are essentially saying you would have some service 
that could grab a somewhat arbitrary set of files/folders and then deliver 
them to another system.  If the service is just going to use something 
like FTP or rsync to do this, then I would say we already have that 
covered.  Didn't the POC develop a service that would essentially let you 
run a script/program on the server?  If so, someone could just have a 
script do the transfer and use that service to run it.
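
Just to show how little is involved on the transfer side, here is the 
sort of small program such a script-running service could launch to push 
an already-populated workspace to another machine.  It is illustrative 
only; the host name and paths are made up, and nothing here is part of 
any ALF definition.

    import java.io.IOException;

    // Illustrative only: copy an already-populated workspace to another
    // host using a stock tool such as rsync. Host and paths are placeholders.
    public class WorkspaceTransfer {
        public static void main(String[] args) throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(
                    "rsync", "-az", "/var/alf/workspaces/build-42/",
                    "builduser@buildhost:/var/build/workspaces/build-42/");
            pb.inheritIO();                      // show rsync's output on our console
            int exit = pb.start().waitFor();     // wait for the copy to finish
            if (exit != 0) {
                throw new IOException("rsync exited with code " + exit);
            }
        }
    }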

I think that scenario A just makes the most sense.  Remember, we are 
really just setting a baseline set of features here.  Individual products 
are always free to provide services that can do more.  For example, I 
once worked on an interface from my product to Serena Dimensions, and I 
recall that it had the ability to check out to what it called a "Tertiary 
Node", meaning the client on system A, talking to a Dimensions server on 
system B, could do a checkout that caused files to be delivered to system 
C.  Assuming that is true, the ALF service that Serena provides for 
Dimensions could expose that capability.  Someone running a CVS or SVN ALF 
service would only be able to populate a workspace on a filesystem that 
was directly accessible to that service.  In the worst case you just run 
the service on multiple systems, which is no different from installing 
the command-line client on multiple systems as you would have done 
already.
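
To illustrate what I mean by a product exposing more than the baseline 
(and assuming my recollection of the Dimensions capability is accurate), 
a vendor service could layer its extras on top of the common interface 
without changing what everyone else has to implement.  The interface 
below is invented purely for discussion; it is not something Serena has 
proposed.

    // Illustrative only: a vendor-specific extension of the (equally
    // illustrative) baseline interface sketched earlier. A richer SCM can
    // add operations such as delivery to a third machine ("tertiary node")
    // without obligating CVS or SVN services to match them.
    public interface DimensionsWorkspaceService extends ScmWorkspaceService {

        /**
         * Deliver files to a filesystem on a third system, one that neither
         * this client nor the Dimensions server runs on. Names are invented.
         */
        void populateRemoteWorkspace(String repositoryUri, String versionSpec,
                                     String targetHost, String remotePath);
    }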

Mark


alf-dev-bounces@xxxxxxxxxxx wrote on 05/25/2006 01:13:44 PM:

> SCM usually involves a client that controls some local storage. Tools
> operate on this local storage. In ALF, the service flow may initiate a get
> (fex) but it doesn't control any local storage, and we do not want to
> stream file content through BPEL. We need a workspace service that can
> provide storage for the get (fex), that the SCM can be instructed to get
> to, and that tools can be instructed to access to obtain the file content.
> There are a number of models for this.
> A) ALF specifies a standard SCM client service. Participating SCMs
> implement this service around their existing client. This client service
> must run in a place where it can materialize files to a file system that
> can be accessed by a particular tool (build system fex). ALF can call this
> service to perform a get for a build tool.
> B1) ALF provides a common Storage service that provides a variety of
> storage access methods (file share, ftp, other). ALF will ask the Storage
> service to perform the get. The Storage service will wrap SCM clients
> (possibly by providing a plugin service provider interface model) to
> perform the actual get. Tools would be told of the existence of the
> Storage service (configuration or operation argument) and be able to
> negotiate access to the file content using their preferred access method.
> B2) As B1, but the SCM service would be instructed to initiate the get and
> be given the Storage service URL as a destination. The SCM and the Storage
> service would perform the file transfer using some common protocol (TBD).
> (Sort of a remoted plugin.)
> B3) As B1, but SCMs provide the "common service" and offer a variety of
> access methods directly. There is no plugin model since each
> implementation is specific to the SCM.
> C) Other models/ideas I haven't mentioned
> 
> Tim Buss,
> Serena
