
Re: [stp-pmc] Deployment Framework Doc


Hi Dan,

Some inline comments from both me and Rob Cernich.

I refer to requirements that we need to support in STP. Although these have not been publicly declared at this point, the following list should get the ball rolling:
  • clean separation between design metadata and server runtime
  • allow design metadata to be packaged as required for the target server
  • package definition
  • package configuration
  • server introspection
  • server capability support
  • repeatable deployment definition
  • deployment to one or more remote servers concurrently
  • remote server management hooks
  • support for deployment across multiple different runtimes
  • and more...

Regards
Karl


> 1)  The first issue that I want to bring up is around the technology
> being used for the connections (i.e., using DTP).
>
> Question
> Have you reconciled how the server connections align with the WTP
> server tooling and project facet support?
>
> Issue
> This approach may be risky since it aligns STP on top of the Data
> Tools Platform for server integration; all of the existing J2EE
> types (*.ear, *.war, EJBs, etc) are aligned on the WTP Server API.
> The DTP and WTP API have some similarities (for instance DTP has a
> loosely defined concept of a "Technology Type" that is mapped to a
> "Server Type"; this is conceptually similar to the Project Facets in
> WTP). Following the current plan of action may make the J2EE
> integration story much more difficult; for instance, will we have to
> duplicate all of the J2EE Server integration on these new "DTP-
> flavored" "Server Connection Profiles" ?


WTP will be adopting DTP post-Callisto as a replacement for the RDB functionality.  This means WTP will have a direct dependency on DTP.

As for "technology types," DTP does not define these.  These are defined through the deployment framework.

I agree that an analysis needs to be done to see where duplicated or differing solutions exist.  Ideally, these issues would be worked out between the three projects.


> I think re-using DTP for Database integration makes a lot of sense; I
> don't think reusing any Server integration for any sort of ESB or
> Application Server makes sense. With the current deployment proposal
> we may have a lot of work down the road; either by us (or someone
> else in STP) having to duplicate all of the J2EE server integration
> OR by STP having to port everything over from their DTP Connection
> Profile scheme to the WTP Project Facets. Either way, it's not a good story.


The connectivity framework within DTP is not specific to DB servers; any type of server can be supported by the framework.

Could you please elaborate?  Does WTP contain server implementations for ESBs?  Does it contain server implementations for SCA containers, JBI, etc.?  I can understand the overlap with respect to JEE, but I don't think that will be as large an effort as you think, primarily because it would only need to be done once, delegating to the WTP APIs, rather than once for every server implementation.  (Internally, we have been able to bridge these two frameworks successfully with minimal effort.)
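To make the "bridge once" idea concrete, here is a minimal sketch of a single driver that covers all JEE deployments by delegating to existing server tooling. All type names here (`WtpServerFacade`, `JeeBridgeDriver`) are invented for illustration; this is not the actual WTP API, just the shape of the delegation.

```java
// Hypothetical sketch: one bridge driver delegates JEE deployment to an
// existing server API instead of duplicating it per server. WtpServerFacade
// is a stand-in for the real WTP server tooling, which is not modeled here.
public class WtpBridge {

    // Placeholder for the existing server API we delegate to.
    interface WtpServerFacade {
        void publish(String module);
    }

    // One bridge covers every JEE server the underlying tooling knows about.
    static class JeeBridgeDriver {
        private final WtpServerFacade wtp;

        JeeBridgeDriver(WtpServerFacade wtp) {
            this.wtp = wtp;
        }

        // Delegate rather than duplicate: the bridge adds no JEE logic itself.
        void deploy(String module) {
            wtp.publish(module);
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        JeeBridgeDriver driver =
            new JeeBridgeDriver(m -> log.append("published ").append(m));
        driver.deploy("shop.ear");
        System.out.println(log); // prints "published shop.ear"
    }
}
```

The point of the sketch is that the bridging cost is paid once, in `JeeBridgeDriver`, rather than per server implementation.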

Also, has it been decided that STP will apply WTP project facets to services projects?

Once again, I agree that there is some investigation that needs to be done here.


> In addition, the usability story may not be ideal if we go with DTP
> as existing J2EE developers who use WTP (or WTP based products) will
> be used to a particular way of creating application server artifacts
> and associating those with application servers. Existing WTP users
> who pick up STP will have to follow a different path (via the
> Database Server configuration wizards) in order to create ESB or SCA
> deployable projects.


Post-Callisto, these developers will be using the connectivity framework provided by DTP, at least for creating DB connections, and hopefully for more if we can ensure interoperability.  Also, STP is targeting more than just JEE developers.

> This proposal *builds* on the DTP Connection Profile scheme; which
> means that STP will be building and supporting yet another server
> integration/deployment story with this proposal. Clearly WTP is the
> platform for Application Server integration; the API is there,
> tested, and for better or worse it is the standard.


I agree that WTP is the standard for application server integration.  However, STP will be supporting more architectures than just application servers; the intention is to ensure that the WTP model is supported by the connection frameworks.

I am not sure I would agree that the WTP APIs are a standard.  Even if they are, we need to ensure that whatever we use meets the requirements we set for it, rather than just using whatever happens to be available.

I think it is important to point out that the connectivity framework within DTP was designed to allow users to create connections to any type of server, regardless of deployment capabilities.  The deployment framework was built atop connectivity to provide deployment capabilities to connections where applicable.  The real point here is that the deployment framework is far broader in capability than what WTP currently supports.  Incidentally, I would have no issue with it being moved into WTP at some point, but for the time being we need to ensure that we can support the necessary requirements for STP.
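The layering described above can be sketched as follows. All names (`ConnectionProfile`, `DeploymentDriver`, `ScaDriver`) are invented for illustration and are not the actual DTP APIs; the sketch only shows how deployment sits as an optional capability on top of generic connectivity.

```java
// Hypothetical sketch: deployment is an optional capability layered on top
// of a generic connection-profile abstraction. A plain connection (e.g. to
// a database) works fine without any deployment driver attached.
public class DeploymentSketch {

    // A connection profile knows only how to reach a server.
    interface ConnectionProfile {
        String getServerType(); // e.g. "sca", "jbi", "db"
    }

    // Deployment capability, applied only where a driver supports the server.
    interface DeploymentDriver {
        boolean supports(ConnectionProfile profile);
        String deploy(String packageName, ConnectionProfile profile);
    }

    // A driver for one hypothetical server architecture.
    static class ScaDriver implements DeploymentDriver {
        public boolean supports(ConnectionProfile p) {
            return "sca".equals(p.getServerType());
        }
        public String deploy(String pkg, ConnectionProfile p) {
            return "deployed " + pkg + " to " + p.getServerType();
        }
    }

    public static void main(String[] args) {
        ConnectionProfile sca = () -> "sca";
        ConnectionProfile db  = () -> "db"; // connectivity only, no deployment

        DeploymentDriver driver = new ScaDriver();
        System.out.println(driver.supports(sca)); // true
        System.out.println(driver.supports(db));  // false
        System.out.println(driver.deploy("orders.composite", sca));
    }
}
```

The `db` profile illustrates the claim in the paragraph above: a connection can exist with no deployment capability at all, and drivers attach deployment only where applicable.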


> 2) I don't see a connection between the definition of a "Package"
> and the core model ModuleComponent.
>
> Question:
> Is there a scenario that shows how a Module is constructed,
> assembled into a Subsystem, and then deployed?  
>
> Your PackageProfile and Package must be closely linked with the
> implementations used for ModuleComponents (hopefully this will
> change to simply Component) defined within a Subsystem.  It is then
> the Subsystem that you want to deploy and create Packages for each
> implementation defined by the ModuleComponents.
> I would like to see the scenario that ties the construction and
> deployment efforts together so there is a seamless story and vision
> that we are presenting.

Our thinking is that a ModuleComponent would be aligned with a package profile, since this artifact defines the services and implementations to be packaged and deployed (and, by extension, a Subsystem, since that is also a deployable unit).  Because the structure of the deployable package depends on the architecture of the target server, the construction of that package would be handled by a package constructor associated with a deployment driver for that server's architecture.  In this scenario, the design metadata can be created without regard to the architecture of the target server (e.g., SCA, JBI, ESB; this is a little simplistic, but probably covers more than 80% of cases).  Note also that even if a server supports a logical package type (i.e., a package profile), the package may include artifacts that the server does not support; the server can report this back through the framework, the package profile would show as unsupported, and the user would be prevented from deploying the package to that server.

I do not think the assessment that the package profile and package are closely linked to the implementations is correct.  The package profile references the implementations.  The package references a package profile and is closely linked to the targeted server architecture.  (This follows from the fact that we are using SCA models to represent assemblies and implementations, that these are not specific to any particular implementation type, and that the SCA specification says nothing about package structure or deployment.)

In short, the deployment framework was designed to provide a clear, clean separation between design metadata and package structure.  This approach is advantageous for STP, since part of its charter is to support SCA runtimes and SCA does not specify a common package structure.  It allows adopters to create package constructors specific to their runtime architecture without having to modify the design tooling.
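To illustrate that separation, here is a hypothetical sketch (all names invented, not the proposed framework's actual API) of one design-time package profile being handed to different architecture-specific package constructors:

```java
import java.util.List;

// Hypothetical sketch: the same design-time profile yields different
// physical packages depending on the target server's architecture.
public class PackagingSketch {

    // Design metadata: which implementations a deployable unit contains.
    // Deliberately knows nothing about package structure.
    record PackageProfile(String name, List<String> implementations) {}

    // Architecture-specific package construction lives behind this interface.
    interface PackageConstructor {
        String construct(PackageProfile profile);
    }

    public static void main(String[] args) {
        PackageProfile profile =
            new PackageProfile("orders", List.of("OrderService", "OrderBinding"));

        // Two constructors, one profile: package structure is a server concern.
        PackageConstructor scaJar =
            p -> p.name() + ".jar[" + String.join(",", p.implementations()) + "]";
        PackageConstructor jbiZip =
            p -> p.name() + ".zip[" + String.join(",", p.implementations()) + "]";

        System.out.println(scaJar.construct(profile));
        System.out.println(jbiZip.construct(profile));
    }
}
```

Adding support for a new runtime means adding a `PackageConstructor`, not touching the design metadata, which is the adopter story described above.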



> 3)  On slide 10 you indicate that there will be a Deployment File editor.
>
> Question:
> Will this editor allow for the definition of configuration elements
> on the application servers which are in support of the packages
> being deployed (e.g.., data sources)?

The deployment file editor is a simple editor that allows packages (logical or physical) to be targeted for deployment to specific servers.  It also allows logical packages to be reconfigured for deployment to a specific server, as long as configuration is supported by the package extension, and it allows multiple packages to be targeted to multiple servers, deploying them all as a single unit of work.

As for the definition of configuration elements as it applies to application servers, this capability is supported through the package and deploy driver extensions, depending on whether a logical or physical package is being deployed.

Logical packages can be configured directly within the deployment file editor (this is accomplished through a configurable package extension, which simply adds configuration capabilities to a logical package).  The package constructor associated with that logical type, together with the deployment driver for the targeted server, is used to validate the package against the target server (e.g., it can verify that capabilities required by the package are installed on the target server).

For physical packages, the deploy driver provides verification prior to deployment (this is also true of packages constructed from logical package profiles).  The physical package extension also provides a certain amount of validation.  For example, in a JEE environment, an EAR file containing vendor-specific configuration information could be identified as supported by both a generic EAR package extension and the matching vendor-specific package extension, while other vendor-specific package extensions might not identify the package as supported.  (I am just using JEE as an example since it is well understood.)
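The EAR example can be sketched like this. Everything here is hypothetical (invented names, file entries used as a stand-in for real package inspection); it only shows the pattern of several extensions independently deciding whether they recognize a physical package.

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch: multiple package extensions inspect the same
// physical package and independently report whether they support it.
public class ValidationSketch {

    // Each extension decides on its own whether a package is supported.
    interface PackageExtension {
        String id();
        boolean identifies(Set<String> entries);
    }

    // Helper: an extension that recognizes packages containing one entry.
    static PackageExtension ext(String id, String requiredEntry) {
        return new PackageExtension() {
            public String id() { return id; }
            public boolean identifies(Set<String> e) {
                return e.contains(requiredEntry);
            }
        };
    }

    public static void main(String[] args) {
        // An EAR carrying vendor-A specific configuration.
        Set<String> ear =
            Set.of("META-INF/application.xml", "META-INF/vendor-a.xml");

        List<PackageExtension> extensions = List.of(
            ext("generic-ear", "META-INF/application.xml"),
            ext("vendor-a",    "META-INF/vendor-a.xml"),
            ext("vendor-b",    "META-INF/vendor-b.xml"));

        // generic-ear and vendor-a identify the package; vendor-b does not.
        for (PackageExtension e : extensions)
            System.out.println(e.id() + " supported=" + e.identifies(ear));
    }
}
```

This mirrors the paragraph above: the generic and matching vendor extensions both claim the package, while an unrelated vendor extension declines it.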

Overall, I agree that there is overlap between the proposed framework and the framework provided by WTP.  However, I believe this framework is more flexible than the framework provided by WTP.  Furthermore, I think the two frameworks are fairly compatible with each other and could peacefully coexist.



Daniel Berg <danberg@xxxxxxxxxx>
Sent by: stp-pmc-bounces@xxxxxxxxxxx

04/13/2006 12:00 PM

Please respond to
STP PMC list <stp-pmc@xxxxxxxxxxx>

To
stp-pmc@xxxxxxxxxxx
cc
Subject
Re: [stp-pmc] Deployment Framework Doc

_______________________________________________
stp-pmc mailing list
stp-pmc@xxxxxxxxxxx
https://dev.eclipse.org/mailman/listinfo/stp-pmc

