Hi Igor,
> role takeover
Yes, BDB XML itself detects when the master becomes unavailable
and then elects a new master. The client (in our case the Xml
Storage Service (Xss)) is notified of this, and I think, though I'm not sure about
this, you can even write plugins/handlers to have a say in how the new
master is determined.
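For illustration, the notification side could look roughly like this at the Berkeley DB level. This is an untested sketch; I'm quoting the com.sleepycat.db event API (EventHandlerAdapter and the handleRep* callbacks) from memory, so treat those names as assumptions to verify against the BDB version we ship. It only covers being told about the election outcome; how much one can influence the election itself I'd have to look up.

import com.sleepycat.db.EnvironmentConfig;
import com.sleepycat.db.EventHandlerAdapter;

// Tracks the replication role of the local BDB node via BDB's event callbacks.
// NOTE: class/method names quoted from memory, verify against our BDB version.
public class XssReplicationListener extends EventHandlerAdapter {

    private volatile boolean master = false;

    @Override
    public void handleRepMasterEvent() {
        master = true;   // this node has just been elected write master
    }

    @Override
    public void handleRepClientEvent() {
        master = false;  // this node is (again) a read-only replica
    }

    public boolean isMaster() {
        return master;
    }
}

// Registration, before opening the environment that the XmlManager wraps:
//   EnvironmentConfig cfg = new EnvironmentConfig();
//   cfg.setEventHandler(new XssReplicationListener());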
> blackboard vs. transparency
As I said, the Xss should take care of that, so that the client,
e.g. the blackboard, doesn't have to worry about it.
Having said that, a client can never be sure that an OSGi service
it once requested and received will still be alive and kicking.
So defensive programming is required here, but that
is part of using OSGi services anyhow and should be practiced already,
i.e. for each new request to a service, check that it is still
available.
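In code that boils down to something like this; ServiceTracker is the standard OSGi way to do it, but the XmlStorageService interface here is a made-up name, just for the example:

import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

// XmlStorageService is a made-up interface, just for this sketch.
interface XmlStorageService { void store(String id, String document); }

public class XmlStorageClient {

    private final ServiceTracker tracker;

    public XmlStorageClient(BundleContext context) {
        tracker = new ServiceTracker(context, XmlStorageService.class.getName(), null);
        tracker.open();
    }

    public void store(String id, String document) {
        // look the service up on every call; it may have gone away meanwhile
        XmlStorageService service = (XmlStorageService) tracker.getService();
        if (service == null) {
            throw new IllegalStateException("XmlStorageService not available");
        }
        service.store(id, document);
    }
}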
> SCA/Tuscany
Well, I haven't had much time to get to know Tuscany and how well it
is capable of abstracting these things away. Because of the many problems we had
with Tuscany I didn't bother to spend much time on it until it was fairly
safe to use, which is still not the case. Hence, I'm not quite sure how much
of this logic we actually have to code ourselves.
> wiki
We already have one, albeit a bit outdated: http://bugs.brox.de/confluence/display/ECS/XML+data+storage.
I thought it had already been moved to Eclipse, but I can't find it there.
I will add that to my todo list.
Hope that helps!?
Kind regards
Thomas Menzel @ brox IT-Solutions GmbH
From: smila-dev-bounces@xxxxxxxxxxx
[mailto:smila-dev-bounces@xxxxxxxxxxx] On Behalf Of Igor.Novakovic@xxxxxxxxxxx
Sent: Friday, 17 October 2008 14:34
To: smila-dev@xxxxxxxxxxx
Subject: RE: [smila-dev] Sharing the same persistence storage
(xmlstorage & binary) between different cluster nodes
Hi Thomas,
I have some questions here:
If BDB XML can have several process instances that operate on the
same data (wherever it is stored), but only one instance can have write access
to it, what happens if this one particular BDB (XML storage service) instance
is no longer available? Is there some kind of automatic role takeover by the other
(still operating) instances?
If not, then we have a single point of failure here :(
Furthermore, even if there is some kind of role takeover, how
should a client – in our case the blackboard service – know which
XML storage service instance it has to call for writing the data?
You do not really expect that the binding layer (realized e.g.
via SCA/Tuscany) should detect the call type and (dynamically) forward it to
the appropriate (writing) instance, do you?
If I have misunderstood you on any point, then please create a
wiki page with a sketch and description of your idea so that we have a
solid basis for our discussion.
Regards
Igor
From: smila-dev-bounces@xxxxxxxxxxx
[mailto:smila-dev-bounces@xxxxxxxxxxx] On Behalf Of Thomas Menzel
Sent: Friday, 17 October 2008 12:41
To: Smila project developer mailing list
Subject: RE: [smila-dev] Sharing the same persistence storage
(xmlstorage & binary) between different cluster nodes
Hi,
if I may add my 2 cents here:
a) BDB XML is cluster-capable, but you will always have just one
node being the write master while there can be many read nodes.
This has nothing to do with not putting your DB files on a remote
FS, as Marius has written. There is also a PoC project around that has tested
replication with Berkeley.
b) Because it is embedded, we need to program the Xml Storage
Service such that it transparently manages which instance/node is the write
master and which instances may only read, so the client doesn't need to
know about this (see the routing sketch after this list).
I had to defer this implementation until we know more about:
a) SCA/Tuscany, because SCA should give us transparency for remote
communication anyhow;
b) configuration management in a cluster, because that might have
a direct impact on how we need to implement the service.
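To illustrate what I mean by transparent: the routing inside the Xss could look roughly like this. Pure sketch, untested, and all the type names are made up:

// All names here are made up, just to illustrate the routing idea.
interface MasterStatus { boolean isMaster(); }                   // fed by replication events
interface LocalBdbXmlStore { String read(String id); void write(String id, String doc); }
interface RemoteXssClient { void write(String id, String doc); } // talks to the current master

public class XmlStorageFacade {

    private final MasterStatus status;
    private final LocalBdbXmlStore local;
    private final RemoteXssClient masterClient;

    public XmlStorageFacade(MasterStatus status, LocalBdbXmlStore local,
            RemoteXssClient masterClient) {
        this.status = status;
        this.local = local;
        this.masterClient = masterClient;
    }

    public String read(String id) {
        return local.read(id); // reads are always served from the local replica
    }

    public void write(String id, String doc) {
        if (status.isMaster()) {
            local.write(id, doc);        // we are the write master, write locally
        } else {
            masterClient.write(id, doc); // forward to whichever node is master
        }
    }
}

The client (e.g. the blackboard) only ever sees the facade, so the master election stays an internal detail of the Xss.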
Kind regards
Thomas Menzel @ brox IT-Solutions GmbH
From: smila-dev-bounces@xxxxxxxxxxx
[mailto:smila-dev-bounces@xxxxxxxxxxx] On Behalf Of Marius Cimpean
Sent: Thursday, 16 October 2008 23:46
To: smila-dev@xxxxxxxxxxx
Subject: [smila-dev] Sharing the same persistence storage (xmlstorage
& binary) between different cluster nodes
Here are a few remarks based on the discussion with Dmitry, who wanted to test a
clustering scenario with multiple blackboards, where blackboards running on
separate nodes need to share the same data (the same persistence storage).
Currently no persistence service (XML or binary) supports this test scenario (where the
stored data must be shared between separate cluster nodes).
1. XML Storage Service
Oracle Berkeley DB XML does not support remote environments well (it has an
embedded database architecture); the BDB XML community discourages putting the
environment on a remote file system. This also depends on the operating system
where the environment is located.
"When Berkeley DB database environment shared memory regions are
backed by the file system, it is a common application error to create database
environments backed by remote file systems such as the Network File System
(NFS), Windows network shares (SMB/CIFS) or the Andrew File System (AFS).
Remote filesystems rarely support mapping files into process memory, and even
more rarely support correct semantics for mutexes if the mapping succeeds. For
this reason, we recommend database environment directories be created in a
local filesystem."
So it looks like, for the remote situations, a number of constraints need to
be met for the remote case to work with BDB XML environments.
Based on this, the following solutions (for Oracle Berkeley DB XML) are available
in case cluster nodes need to share the same XML persistence storage (BDB
XML environment):
1. At least we can try out how BDB XML behaves in remote situations for SMILA (we
will need to test on different operating systems).
2. An extra layer needs to be developed in order to "fix" the
embedded database architecture and to handle the remote calls. This layer would be
located on the same node/machine as the BDB XML environment, so the calls to
the BDB XML native API stay local (this will make the
deployment/installation a bit complicated ...)
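For illustration, the boundary of such a layer could look like this. Just a sketch: the RemoteXmlStorage and StorageException names are made up, and the transport (an SCA binding, RMI, HTTP, ...) is deliberately left open:

// Both names (RemoteXmlStorage, StorageException) are made up for this sketch;
// the remote transport is deliberately left open.
public interface RemoteXmlStorage {

    /** Fetches the document stored under the given id, or null if there is none. */
    String getDocument(String id) throws StorageException;

    /** Stores (creates or replaces) the document under the given id. */
    void putDocument(String id, String document) throws StorageException;
}

/** Wraps BDB XML errors at the process boundary. */
class StorageException extends Exception {
    public StorageException(String message, Throwable cause) {
        super(message, cause);
    }
}

Other cluster nodes would only talk to this interface over the network, while its implementation calls the embedded BDB XML API locally.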
Feedback is very welcome.
__________________________
Marius CIMPEAN
Project Manager
Numerica SA
17 Cometei Str
400493 Cluj-Napoca, Romania
Phone: +40 0364-101062
Fax: +40 0364-101034