Monday, September 16, 2013

WSO2 Deployment Synchronizer (Dep-sync) ...

What is a cluster?

A cluster is a group of similar computers or products that are integrated and connected to achieve a common goal. Members of a cluster cooperate in many respects and, more importantly, can be viewed as a single system.

Sharing artifacts across a cluster

Deployment Synchronizer enables us to synchronize the deployment of artifacts across the nodes of a cluster.

The Deployment Synchronizer makes use of two different approaches for its deployment artifact synchronization.

 1. SVN based synchronizer
 2. Registry based synchronizer

In this use case I will describe how to set up an SVN-based deployment synchronizer. Now let's move into the deeper end of this category.

SVN based synchronizer

In simple terms, the manager node commits all artifacts to a remote SVN location, and the slave node(s) perform an SVN checkout to fetch those committed artifacts into their respective artifact directories.

The following graphical elaboration will help you get a quick understanding of the setup we are going to develop.

fig 1.0
I will use two WSO2 ESB servers in this example, one of which will act as the master node while the other will serve as the slave node.

You can download the latest WSO2 ESB distribution from here.

You can configure this setup either to run both ESB servers on the same machine or to run one server on a remote machine.

Since running both servers on the same local machine is the simpler case, for this demonstration we are going to look at the configuration needed to run one server on the local machine and the other on a remote machine.

After downloading the distribution, extract it to your desktop or any preferred location.

For easy reference, rename the distribution wso2esb-4.7.0_master. Now copy the distribution to the remote computer, extract it, and rename it wso2esb-4.7.0_slave. (For this example we will host our remote machine instance on an OpenStack cloud. You may create a private OpenStack cloud, and thereafter create an instance on it to host your slave node.)

In order to set up the clustering we need an SVN location. Create a suitable SVN repository and keep its access credentials at hand, as we will need to include them in the configuration files of both ESBs.
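If you do not already have a repository, the sketch below creates a local one and walks through the commit/checkout cycle that dep-sync automates. This is only an illustration: the paths and the sample proxy file are hypothetical, and it assumes the Subversion command-line tools (svnadmin, svn) are installed.

```shell
#!/bin/sh
# Illustrative paths - use whatever location suits your environment.
REPO=/tmp/depsync-repo
MASTER=/tmp/master-artifacts
SLAVE=/tmp/slave-artifacts
rm -rf "$REPO" "$MASTER" "$SLAVE"

svnadmin create "$REPO"                     # this becomes the "svn location"

# Master side: commit an artifact to the repository.
mkdir -p "$MASTER"
echo '<proxy name="demo"/>' > "$MASTER/demo-proxy.xml"
svn import -q -m "deploy artifacts" "$MASTER" "file://$REPO/artifacts"

# Slave side: check the committed artifacts out into its own directory.
svn checkout -q "file://$REPO/artifacts" "$SLAVE"
ls "$SLAVE"
```

In the real setup the two sides run on different machines, so you would expose the repository over svn:// or http(s):// rather than a file:// URL, and the servers perform the commit and checkout for you.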

Now let's look at the master node. Go to the ....../wso2esb-4.7.0_master/repository/conf/ folder of your ESB pack and open the carbon.xml file.

Add the following entries to your carbon.xml configuration file. These are the basic properties we need to set in order to configure SVN-based synchronizing. Pay attention to each tag; a brief explanation is provided with each.

Master-node : carbon.xml configuration file (..../wso2esb-4.7.0_master/repository/conf/carbon.xml)  

   <DeploymentSynchronizer>
       <Enabled>true</Enabled> <!-- enables the dep-sync feature -->
       <AutoCommit>true</AutoCommit> <!-- the master node has the right to commit artifacts to the svn location -->
       <AutoCheckout>true</AutoCheckout> <!-- the master node has the right to check artifacts out from svn -->
       <RepositoryType>svn</RepositoryType> <!-- repo type: svn -->
       <SvnUrl></SvnUrl> <!-- url of the svn location -->
       <SvnUser></SvnUser> <!-- credentials - user name -->
       <SvnPassword>vvvvv</SvnPassword> <!-- credentials - password -->
       <SvnUrlAppendTenantId>false</SvnUrlAppendTenantId> <!-- append the tenant ID to the svn url for tenant-specific paths -->
   </DeploymentSynchronizer>

Points to ponder :
We need to set the value of both the <AutoCommit> and <AutoCheckout> entries to true.

Now we need to change the axis2.xml file in order to enable the dep-sync features.

Master-node : axis2.xml configuration file (..../wso2esb-4.7.0_master/repository/conf/axis2/axis2.xml)  

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">

    <!-- The membership scheme used in this setup. The only values supported at the moment are "multicast" and "wka".
           1. multicast - membership is automatically discovered using multicasting
           2. wka - Well-Known Address based multicasting. Membership is discovered with the help of one or more nodes running at a well-known address. New members joining a cluster will first connect to a well-known node, register with it, and get the membership list from it. When new members join, one of the well-known nodes will notify the others in the group. When a member leaves the cluster, or is deemed to have left it, this will be detected by the Group Membership Service (GMS) using a TCP ping mechanism. -->
    <parameter name="membershipScheme">wka</parameter>

    <!-- The host name or IP address of this member -->
    <parameter name="localMemberHost">xx.xx.xx.xx</parameter>

    <!-- The TCP port used by this member. This is the port through which other nodes will contact this member -->
    <parameter name="localMemberPort">4000</parameter>

    <!-- The clustering domain/group. Nodes in the same group will belong to the same multicast domain. There will not be interference between nodes in different groups. -->
    <parameter name="domain">wso2.carbon.domain</parameter>

    ...
</clustering>

Points to ponder :
clustering should be enabled. (enable="true")

That is all we need to consider for the moment on the manager node. Now we need to look into the worker-node clustering configuration.

For the worker node, a few additional modifications need to be made. Here is how the worker node's carbon.xml differs from the manager node's.

Slave-node : carbon.xml configuration file  (..../wso2esb-4.7.0_slave/repository/conf/carbon.xml)  

<!-- Ports offset. This entry will set the value of the ports defined below to the defined value + Offset. e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445 -->
<Offset>0</Offset> <!-- keep 0 when the two servers run on different machines; use a non-zero offset if they share one machine -->

<DeploymentSynchronizer>
    <Enabled>true</Enabled> <!-- enables the dep-sync feature -->
    <AutoCommit>false</AutoCommit> <!-- the slave node must not commit artifacts to the svn location -->
    <AutoCheckout>true</AutoCheckout> <!-- the slave node checks committed artifacts out from svn -->
    <RepositoryType>svn</RepositoryType>
    <SvnUrl></SvnUrl> <!-- same svn location as the master node -->
    <SvnUser></SvnUser>
    <SvnPassword>vvvvv</SvnPassword>
    <SvnUrlAppendTenantId>false</SvnUrlAppendTenantId>
</DeploymentSynchronizer>

Points to ponder :
Note the <AutoCommit>false</AutoCommit> entry. The slave node should only be able to check out from SVN what the master node has committed.
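The ports-offset arithmetic mentioned in the comment above is a simple addition; the snippet below just makes it concrete with the default WSO2 HTTP/HTTPS ports:

```shell
# Effective port = base port + Offset (values here are illustrative).
OFFSET=2
HTTPS_PORT=9443
HTTP_PORT=9763
echo "effective https port: $((HTTPS_PORT + OFFSET))"
echo "effective http port:  $((HTTP_PORT + OFFSET))"
```

This only matters when both servers share one machine; on separate machines the offset can stay 0.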

Slave-node : axis2.xml configuration file  (..../wso2esb-4.7.0_slave/repository/conf/axis2/axis2.xml)  

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">

    <parameter name="membershipScheme">wka</parameter>

    <parameter name="domain">wso2.carbon.domain</parameter>

    <!-- The host name or IP address of this member -->
    <parameter name="localMemberHost">yy.yy.yy.yy</parameter>
    <parameter name="localMemberBindAddress">zz.zz.zz.zz</parameter>

    <!-- The TCP port used by this member. This is the port through which other nodes will contact this member -->
    <parameter name="localMemberPort">4000</parameter>
    <parameter name="localMemberBindPort">4000</parameter>

    <!-- The well-known members of the cluster; here, the manager node -->
    <members>
        <member>
            <hostName>xx.xx.xx.xx</hostName>
            <port>4000</port>
        </member>
    </members>

    ...
</clustering>


Points to ponder :
clustering should be enabled. (enable="true")
Since an OpenStack cloud instance can have both a public and a private interface, we need to add an additional entry, "localMemberBindAddress", which sets the private IP of the interface the member actually binds to, while "localMemberHost" declares the public IP that is advertised to the other members of the cluster.
<hostName>xx.xx.xx.xx</hostName> here refers to the manager node's IP address, which is the well-known member of the cluster for all slave nodes.
<port>4000</port> refers to the manager node's port, with which the slave should establish the connection.

We are now ready with the clustering setup. Next we need to start the two servers separately. To do so,

--> navigate to the wso2esb-4.7.0_master/bin folder from the console and type,
                     sh wso2server.sh
--> navigate to the wso2esb-4.7.0_slave/bin folder from the console and type,
                     sh wso2server.sh -DworkerNode=true

You can now deploy artifacts to the manager node and observe that they are automatically deployed on the worker node. For example, if you want to deploy artifacts using a cApp (Carbon Application) with the deployment synchronizing feature, all you need to do is place your .car file inside the folder ..../wso2esb-4.7.0_master/repository/carbonapps/0/. After deploying the artifacts inside the .car file to their relevant artifact folders, the manager node will auto-commit the artifacts to the relevant folder structure in the SVN location, and the slave node will perform an SVN checkout and deploy the artifacts to its relevant artifact folders.

Hence the Deployment Synchronizer is a hassle-free way to deploy numerous artifacts across several computers in a cluster at the same time.
