
Re: [virgo-dev] [gyrex-dev] Gyrex and Virgo

Hi Gunnar

Thanks for your response. Comments inline.

If the multiplicity of Virgo deliverables causes any confusion, please see this page:


and the section "Virgo Runtime Deliverables" (including Figures 9 & 10 which provide a quick summary) on pages 19-20 of the Virgo white paper:


Regards,
Glyn
On 13 Apr 2012, at 08:42, Gunnar Wagenknecht wrote:

Hi Glyn,

On 05.04.2012 14:16, Glyn Normington wrote:
Perhaps the simplest starting point would be to run Gyrex with Virgo
Nano (which includes standard Equinox launch setup with logback-based
logging, Apache Gogo console, and nice p2 integration). I guess this
would take a couple of hours to get going, for someone with the
appropriate Gyrex skills.

Our minimum requirement is Equinox. Thus, getting running on Virgo Nano
should be fairly trivial. I could imagine that it's actually just
unzipping our bundles into a Virgo Nano deployment folder (dropins?).
I'm not sure what the deployment strategy is when using multiple regions, though.

Virgo Nano and Nano "full" have a single region, to keep things simple. There is a hot deployer (currently broken - bug 375965): you just put bundles to be installed/started in the pickup directory.

Virgo Kernel and higher (Virgo Tomcat and Jetty Servers) have multiple regions, but you can still install/start bundles in the user region simply by putting them in the pickup directory.

Virgo Nano and Nano "full" sound like the best fit for Gyrex since they support runtime p2 provisioning into the kernel region. (Kernel-based distributions only support initial provisioning of the kernel and user regions and then runtime provisioning of the kernel region alone, since p2 has no region support.)


We currently include Jetty 8.1.2 in our distribution. Thus, if there is
a different Jetty version in Virgo Nano and the Equinox Jetty-based
HttpService, then there might be some issues to solve. But that might
just be a matter of getting the bundle start levels right.

The smallest Nano distribution has no web content. Similarly for the Kernel. The Nano "full" distribution is Tomcat based, as is Virgo Tomcat Server, so there should be no clash there. Virgo Jetty Server is where clashes are likely.


In order to "start" Gyrex we make heavy use of the Equinox Application
Admin, which is an extension of the OSGi Application Admin based on the
Equinox Extension Registry. There is a single application that you have
to start (org.eclipse.gyrex.boot.server). This application does all the
rest. We rely heavily on lazy activation. Thus, the number of bundles
that must be started is pretty low.
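
For illustration only, here is a minimal sketch of launching that application programmatically through the OSGi Application Admin service. The application id org.eclipse.gyrex.boot.server is taken from above; the filter and the null launch arguments are assumptions, and in a normal Equinox launch the application would instead be selected via the eclipse.application property.

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.application.ApplicationDescriptor;

public class GyrexBootSketch {

    // Looks up the boot application's descriptor and launches it.
    public static void launchBootApplication(BundleContext context) throws Exception {
        ServiceReference[] refs = context.getServiceReferences(
                ApplicationDescriptor.class.getName(),
                "(service.pid=org.eclipse.gyrex.boot.server)");
        if (refs == null || refs.length == 0) {
            throw new IllegalStateException("Gyrex boot application not installed");
        }
        ApplicationDescriptor descriptor = (ApplicationDescriptor) context.getService(refs[0]);
        try {
            descriptor.launch(null); // null = no launch arguments
        } finally {
            context.ungetService(refs[0]);
        }
    }
}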

Sounds good. Virgo tends to start lazy activation bundles aggressively since we find that many people don't understand lazy activation (it should only be used for bundles which will be activated by class or resource loading), but I don't think that's a particular issue for Virgo+Gyrex.


In config.ini:
osgi.bundles=reference\:file\:org.eclipse.equinox.simpleconfigurator_....jar@1\:start

In bundles.info (default level is 4):
org.eclipse.equinox.app (4)
org.eclipse.equinox.common (2)
org.eclipse.equinox.ds (2)
org.eclipse.equinox.event (2)
org.eclipse.osgi (-1)

These edits could easily be applied to Virgo and the additional bundles placed in the plugins directory. This would install Gyrex into the kernel region, which is the only region for Nano and Nano "full".


I think there could be some interesting usecases for Gyrex clusters of
Virgo Nano or Virgo Kernel instances. (Nano and the kernel are runtimes
without specific servers pre-installed.) It might then be possible to
use Apache CXF (also based on ZooKeeper) as a distributed OSGi
implementation or to use messaging, e.g. in the form of Eclipse Paho
(for MQTT pub/sub) or RabbitMQ (for AMQP).

I haven't deployed CXF in a Gyrex stack yet. We typically use JAX-RS for
providing RESTful services and JAX-WS for consuming external services.
ECF also provides a remote OSGi services implementation. I played with
it a while back but we didn't find a need for it in the core Gyrex
stack. That doesn't mean that there won't be one in the future.

I'd like to see ECF distributed OSGi running on Virgo, but I haven't found suitable runtime-oriented documentation. The ECF docs seem geared towards the Eclipse IDE. But mixing and matching other components with Gyrex on Virgo is not the initial use case and can be deferred.


Currently, Gyrex is designed not to do direct communication between
nodes but to use ZooKeeper as a service for coordination (simple message
queuing, distributed locks and notifications) and for sharing configuration
data (OSGi/Eclipse preferences). Thus, we don't use remote OSGi
services. The reason for avoiding direct communication between nodes is
that we never know much about the target cluster environment. Especially
across data centers (sometimes even across racks within the same data
center), direct communication is a non-trivial task to implement.
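
As a rough illustration of the kind of coordination meant here, a minimal sketch of sharing a configuration value between nodes using the plain ZooKeeper client API. The connect string, znode path and value are made up, and this is not the Gyrex preferences API, which wraps this kind of access.

import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class SharedConfigSketch {

    public static void main(String[] args) throws Exception {
        // Connect to the ensemble (example connect string).
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 30000, event -> {});

        String path = "/example-shared-config"; // made-up znode
        byte[] value = "worker.threads=8".getBytes(StandardCharsets.UTF_8);

        // Create the node if it does not exist yet, otherwise update it.
        if (zk.exists(path, false) == null) {
            zk.create(path, value, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } else {
            zk.setData(path, value, -1); // -1 = don't check the node version
        }

        // Any other node in the cluster can now read the same value.
        byte[] read = zk.getData(path, false, null);
        System.out.println(new String(read, StandardCharsets.UTF_8));

        zk.close();
    }
}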

Seems reasonable.


It should also be possible to move Gyrex up the Virgo stack ([1]). Note
that our server deliverables have Tomcat as well as Jetty variants, so I
wonder how tied to Jetty Gyrex is?

We do have a separation layer in between that allows us to connect to
containers other than Jetty. Thus, Tomcat is possible. The thing is that the
current Jetty integration is very tight. For example, we don't assemble
a full WAR and deploy that (my understanding is that Gemini Web does
this). Instead we deploy a "root" handler (a custom Jetty handler) that allows
us to dynamically register (and unregister) applications under multiple
different URLs (combinations of virtual hosts and paths). The
applications are actually very lightweight and similar to the OSGi
HttpService itself.
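
This is not the actual Gyrex root handler, but a minimal sketch of the general Jetty pattern (dynamically mapping an application to a virtual host plus path at runtime), using made-up host, path and content:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;
import org.eclipse.jetty.server.handler.ContextHandler;
import org.eclipse.jetty.server.handler.ContextHandlerCollection;

public class DynamicRegistrationSketch {

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        ContextHandlerCollection contexts = new ContextHandlerCollection();
        server.setHandler(contexts);
        server.start();

        // Register an application under a virtual host + path while the server is running.
        ContextHandler context = new ContextHandler("/shop");
        context.setVirtualHosts(new String[] { "www.example.com" });
        context.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest, HttpServletRequest request,
                    HttpServletResponse response) throws IOException, ServletException {
                response.getWriter().println("hello from the shop application");
                baseRequest.setHandled(true);
            }
        });
        contexts.addHandler(context);
        context.start();        // handlers added at runtime must be started explicitly
        contexts.mapContexts(); // refresh the host/path mapping (Jetty 8 style)

        // Unregistering would be contexts.removeHandler(context) followed by context.stop().
        server.join();
    }
}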

In terms of administration, we allow configuring all Jetty instances
(ports as well as SSL options) within a cluster. This is very specific
to Jetty and would have to be rewritten for any other container. Certificates
are stored encrypted in ZooKeeper as well. That part can be re-used for any
container, though.

Thanks. Sounds like too much work for the present. Let's aim for Nano+Gyrex initially as that should be trivial to get going.


I'm interested in the layering of Gyrex as we could then work out what
combinations of Gyrex and Virgo bundles would work together or overlap.

We also have the notion of a "kernel" (or "core"). It contains the
bootstrapping logic, the clustering components, the context runtime core
as well as helpers for monitoring (metrics, debugging, logging). Within
that kernel I see some overlap in terms of monitoring. The
bootstrapping is different because we don't rely on bundle start but on
OSGi applications.

The clustering part offers APIs for participating in the cluster (e.g. a simple queuing
service, a distributed locking service, OSGi/Eclipse preferences). It
does not expose ZooKeeper APIs, but sometimes it's necessary to expose
behavior that typically happens in a distributed environment but not
locally (e.g. concurrent modification of preferences during flush).
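
Not the Gyrex API either, but to make the distributed locking part concrete, here is roughly what such a ZooKeeper-backed lock looks like from application code when using Apache Curator as a stand-in (connect string and lock path are made up):

import java.util.concurrent.TimeUnit;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class DistributedLockSketch {

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        InterProcessMutex lock = new InterProcessMutex(client, "/locks/example");
        if (lock.acquire(10, TimeUnit.SECONDS)) {
            try {
                // Only one node in the cluster executes this section at a time.
                System.out.println("lock held, doing coordinated work");
            } finally {
                lock.release();
            }
        }
        client.close();
    }
}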

Part of the clustering layer is also a provisioning component that uses
p2 to install bundles locally on nodes based on instructions persisted in
ZooKeeper. The instructions could also be persisted elsewhere (e.g. in a
database), but some refactoring would be necessary to extract a generic
storage API.
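
Purely as an illustration of what such a locally executed instruction could boil down to, here is a sketch using the p2 operations API; the repository URI and IU id are made up and this is not the actual Gyrex provisioning code:

import java.net.URI;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.NullProgressMonitor;
import org.eclipse.equinox.p2.core.IProvisioningAgent;
import org.eclipse.equinox.p2.metadata.IInstallableUnit;
import org.eclipse.equinox.p2.operations.InstallOperation;
import org.eclipse.equinox.p2.operations.ProvisioningSession;
import org.eclipse.equinox.p2.query.IQueryResult;
import org.eclipse.equinox.p2.query.QueryUtil;
import org.eclipse.equinox.p2.repository.metadata.IMetadataRepository;
import org.eclipse.equinox.p2.repository.metadata.IMetadataRepositoryManager;

public class P2InstallSketch {

    // Installs a single installable unit from the given repository into the running profile.
    public static IStatus install(IProvisioningAgent agent, String iuId) throws Exception {
        IProgressMonitor monitor = new NullProgressMonitor();
        IMetadataRepositoryManager manager = (IMetadataRepositoryManager)
                agent.getService(IMetadataRepositoryManager.SERVICE_NAME);
        IMetadataRepository repo = manager.loadRepository(
                new URI("http://example.com/gyrex/repo"), monitor); // made-up repository
        IQueryResult<IInstallableUnit> units = repo.query(QueryUtil.createIUQuery(iuId), monitor);
        if (units.isEmpty()) {
            throw new IllegalArgumentException("IU not found: " + iuId);
        }
        InstallOperation operation =
                new InstallOperation(new ProvisioningSession(agent), units.toUnmodifiableSet());
        IStatus result = operation.resolveModal(monitor);
        if (result.isOK()) {
            operation.getProvisioningJob(monitor).schedule();
        }
        return result;
    }
}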

We settled on SLF4J as the logging API and ship Logback as the
implementation. Gyrex generally avoids storing configuration
information locally, but some bootstrapping is necessary. We decided on
using only the instance location, to allow re-using the same
installation/configuration multiple times for different instances. This
is similar to what Eclipse does with different workspaces. Thus, (by
default) things like ZooKeeper connection settings and Logback
configuration are read from the instance location. Log files and other
local data are also written there (by default).

The "context runtime" is what we use to implement multi-tenency. It's in
the kernel because that also affects some of the components in there.
For example, you wan't to be able to diagnose problems related to a
specific tenant. I'm curious about what Virgo plans to implement with
sub-systems.

Application and feature subsystems should work pretty much like Virgo's current support for scoped plans (or PARs) and unscoped plans, respectively. They should play nicely with Virgo's current diagnostics.

Composite subsystems may need some tweaks in order to share kernel diagnostics, but that's not something I'd thought about until now, so thanks for raising it.


The "Web/Jetty" component is layer on top of that and can be used
separately. It likely completely overlaps with Virgo and/or Gemini Web.
But we are aware of that and looking for possibilities allow at least
co-existance.

Ok.


The "repositories" component allows to decouple application code from
the actual data source using the "context runtime". For example,
application code asks the context for a repository to store content of
type XYZ (instead of asking for a data source with an id ABC). The
content type decided about the repository type (eg. JDBC data source or
Solr index) es well as the expected content in the repository (eg. a
relational database needs to be prepared with a specific database schema
version). I don't think there is overlap there. That's not part of the
"Gyrex Server" download but provided as an "add-on".

There are also a couple of other (optional) components that Gyrex
provides. A "processing" component allows queuing jobs (Eclipse Jobs
API) which are then executed by worker nodes. A "search" component
offers an integration with Solr. There are also a few more integration
"add-ons" for MongoDB, EclipseLink (Gemini JPA), Jersey JAX-RS and more.

Nice. Users can then probably pick and choose what to add to Virgo+Gyrex.



-Gunnar


--
Gunnar Wagenknecht
gunnar@xxxxxxxxxxxxxxx
http://wagenknecht.org/

