Introduction:
High availability is always an important topic, yet it is often considered only at the end of a project, or sometimes only at the time of a downtime or disaster.
Setting up a high availability environment is challenging, because you have to spare the time and money to complete it; here is my attempt to provide you with a prototype that you can set up in a matter of a few hours using VMware.
This article helps you quickly complete a proof of technology, or prototype, for a high availability environment with active and standby MQ/IIB instances.
It covers the whole nine yards of this topic, from installation of VMware to testing of fail-over from the client.
1. Installation of CentOS:
Follow the Installation of CentOS procedure twice: first to create the VMware instance for the Active server, and then for the Standby server.
We will use these two CentOS instances for the active and standby installation and configuration of MQ and IIB.
1.1 Download CentOS .iso installable file.
http://www.centos.org/download/
Note: I used CentOS 6.4; check compatibility when you download CentOS.
1.2 Download VMWare Player from VMWare.
www.vmware.com
Downloads-> (Under Free products, click Player)
1.3 Player -> File -> New Virtual Machine
Open VMware Player and create a new 'Virtual Machine'.
1.4 Choose the install image.
1.5 Choose Guest Operating System
1.6 Name virtual image
( For primary: CentOS 64-bit primary)
( For Secondary: CentOS 64-bit Secondary)
1.7 Specify disk capacity
1.8 Create Virtual Machine.
1.9 Start the Virtual Machine you just created.
Use the default user to log in.
1.10 Open terminal
2. Installation of MQ and IIB on Active and Standby server:
Follow these instructions to install MQ and IIB on both the Active and the Standby server.
Before you proceed, download the MQ and IIB installables from the IBM website.
I used MQ 7.5.0.1 and IIB 9.0.0.2.
2.1 Installation of MQ:
1. Login as root
2. Uncompress and untar "WS_MQ_LIN_ON_X86-64_V7.5.0.1_EIM.tar"
"tar -xvf WS_MQ_LIN_ON_X86-64_V7.5.0.1_EIM.tar"
3. “./mqlicense.sh -text_only” Enter ‘1’ to accept.
4. Install in default directory
“rpm -ivh MQSeriesRuntime-*.rpm MQSeriesServer-*.rpm”
“rpm -ivh MQSeriesJava-7.5.0-1.x86_64.rpm MQSeriesJRE-7.5.0-1.x86_64.rpm”
2.2 Install IIBv9
1. Login as root
(Open a root terminal; see 1.10, Open terminal.)
2. Uncompress and untar “IBM_INTEGRATION_BUS_V9.0.0.2_LINU.tar”
"tar -xvf IBM_INTEGRATION_BUS_V9.0.0.2_LINU.tar"
3. cd integrationbus/sample-scripts
4. Edit the response.properties file and accept the license:
LICENSE_ACCEPTED=TRUE
5. Create tmp directory and export IATEMPDIR variable
[root@localhost sample-scripts]# vi response.properties
[root@localhost sample-scripts]# mkdir -p /opt/ESB/installables/tmp
[root@localhost sample-scripts]# export IATEMPDIR=/opt/ESB/installables/tmp
[root@localhost sample-scripts]#
6. Install IIB
[root@localhost sample-scripts]# /opt/ESB/installables/IIBv9002/integrationbus_runtime1/setuplinuxx64.bin -i silent -f /opt/ESB/installables/IIBv9002/integrationbus_runtime1/sample-scripts/response.properties
3. Configuring NFS on Active and Standby server:
Now we have two VMware instances running CentOS, with MQ/IIB installed on both. Next we need a network file system shared between the Active and Standby servers.
For demo purposes, we will configure the NFS server on the CentOS active server and the NFS client on the CentOS standby server. This set-up is suitable for your table-top experiment.
All these commands are to be run as root. You can add NFS and mount commands to be run at system startup.
For production and test environments, you should plan to use enterprise grade shared file system.
3.1 Server: Install NFS
[root@localhost ~]# yum -y install nfs-utils rpcbind
[root@localhost ~]#
3.2 Server: Start NFS and update /etc/exports
[root@localhost ~]# chkconfig nfs on
[root@localhost ~]# chkconfig rpcbind on
[root@localhost ~]# chkconfig nfslock on
[root@localhost ~]# mkdir -p /Shared/Location/WMQ
(Note: This will be shared directory location that we will use to create multi-instance queue manager and broker)
[root@localhost ~]# cat /etc/exports
/Shared/Location/WMQ *(rw,sync,no_wdelay,fsid=0)
[root@localhost ~]# service rpcbind start
[root@localhost ~]# service nfs start
[root@localhost ~]# service nfslock start
[root@localhost ~]#
3.3 Client: Install NFS and create mount point
[root@localhost ~]#yum -y install nfs-utils rpcbind
[root@localhost ~]#mkdir -p /Shared/Location/WMQ
[root@localhost ~]#chkconfig nfs on
[root@localhost ~]#chkconfig rpcbind on
[root@localhost ~]#chkconfig nfslock on
3.4 Server: Change ownership and permission for mount point
Create user mqbrkrs and group mqbrkrs, and add the user to the group. Also create the shared directory /Shared/Location/IIB, which will hold the multi-instance broker data.
[root@localhost ~]# mkdir -p /Shared/Location/IIB
[root@localhost ~]# chown mqm:mqm /Shared/Location/WMQ
[root@localhost ~]# chown mqbrkrs:mqbrkrs /Shared/Location/IIB
[root@localhost ~]# chmod 777 /Shared/Location/IIB
[root@localhost ~]# chmod 777 /Shared/Location/WMQ
Note: Check the permissions with ls -ld. (chmod 777 is acceptable only for this table-top demo.) Like /Shared/Location/WMQ, the IIB directory must also be exported on the server and mounted on the client.
3.5 Client: Start NFS and mount the dir
[root@localhost ~]#service rpcbind start
[root@localhost ~]#service nfs start
[root@localhost ~]#service nfslock start
[root@localhost ~]#chkconfig rpcbind on
[root@localhost ~]#mount 192.168.33.137:/Shared/Location/WMQ /Shared/Location/WMQ
Note: 192.168.33.137 is IP address of Server
[root@localhost ~]#ls -lrt /Shared/Location/WMQ/
[root@localhost ~]#cd /Shared/Location/WMQ
3.6 Client: umount the dir, if needed
[root@localhost ~]# umount /Shared/Location/WMQ
[root@localhost ~]# df
4 Create multi-instance MQ Queue Manager:
At this time, we have two CentOS Virtual Machines running; one will act as the Active instance and the other as the Standby instance for our table-top experiment.
We have MQ/IIB installed. We also have the shared file system created and the related processes running on the Active and Standby Virtual Machines.
Now follow the steps below to create the multi-instance MQ queue manager.
Reference:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/fa70160_.htm
4.1 Logical diagram:
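A logical view of the set-up, assuming the topology described in this article (two CentOS VMs, each running MQ and IIB, with the queue manager and broker data held on directories shared over NFS):

```
+----------------------+             +----------------------+
|  Active (CentOS VM)  |             | Standby (CentOS VM)  |
|  MQ 7.5 + IIB 9      |             |  MQ 7.5 + IIB 9      |
|  NFS server          |             |  NFS client          |
+----------+-----------+             +----------+-----------+
           |                                    |
           |       /Shared/Location/WMQ         |
           +------ /Shared/Location/IIB --------+
                     (shared over NFS)
```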
4.2 Check uid and gid in /etc/passwd file for user mqm and mqbrkrs
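The essential check here is that users mqm and mqbrkrs have identical numeric uid and gid values on the Active and Standby servers; otherwise file ownership on the NFS share will not match across the two machines. A minimal sketch of the comparison, using sample /etc/passwd lines (the ids 496:495 are made-up values; on a real system, capture the lines with grep '^mqm:' /etc/passwd on each server):

```shell
# Sample passwd entries for user mqm, one captured on each server.
active_line="mqm:x:496:495::/var/mqm:/bin/bash"
standby_line="mqm:x:496:495::/var/mqm:/bin/bash"

# Fields 3 and 4 of a passwd entry are the numeric uid and gid.
active_ids=$(echo "$active_line" | cut -d: -f3,4)
standby_ids=$(echo "$standby_line" | cut -d: -f3,4)

if [ "$active_ids" = "$standby_ids" ]; then
  echo "uid/gid match: $active_ids"
else
  echo "MISMATCH: active=$active_ids standby=$standby_ids"
fi
```

Repeat the same check for user mqbrkrs.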
4.3 Create log and data dirs
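Following the IBM multi-instance pattern, the queue manager data and log directories live on the shared NFS mount. A sketch of the directory creation on the active server (the qmgrs and logs sub-directory names are my own layout choice, not mandated):

```shell
# Run as root on the server that exports /Shared/Location/WMQ.
# 'qmgrs' will hold the queue manager data, 'logs' the transaction logs.
mkdir -p /Shared/Location/WMQ/qmgrs
mkdir -p /Shared/Location/WMQ/logs
chown -R mqm:mqm /Shared/Location/WMQ
```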
4.4 Ensure shared directory is owned by user and group mqm
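To verify from either machine (the owner and group should both read mqm):

```shell
# Run on both the active and standby server.
ls -ld /Shared/Location/WMQ
# If the ownership is wrong, fix it as root:
chown -R mqm:mqm /Shared/Location/WMQ
```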
4.5 Create queue manager
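A sketch of the queue manager creation, based on the IBM multi-instance documentation (the name QM1 and the shared-directory layout are placeholder choices; adjust to your set-up):

```shell
# On the ACTIVE server, as user mqm: create the queue manager with its
# data (-md) and logs (-ld) on the shared NFS mount.
crtmqm -md /Shared/Location/WMQ/qmgrs -ld /Shared/Location/WMQ/logs QM1

# Print the queue manager's configuration as an addmqinf command.
dspmqinf -o command QM1

# On the STANDBY server, as user mqm: run the addmqinf command printed
# above, for example:
addmqinf -s QueueManager -v Name=QM1 -v Directory=QM1 \
         -v Prefix=/var/mqm -v DataPath=/Shared/Location/WMQ/qmgrs/QM1
```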
4.6 Start queue manager instance
Start the queue manager instances, in either order, with the -x parameter: strmqm -x QM1
4.7 Validate failover:
Reference:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/be13670_.htm
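A sketch of the fail-over test, assuming the queue manager QM1 started in the previous step:

```shell
# On either server: show which instance is active and which is standby.
dspmq -x -m QM1

# On the ACTIVE server: end the active instance immediately and allow
# switchover to the standby instance (-is).
endmqm -is QM1

# Run 'dspmq -x -m QM1' again; the instance that was standby should now
# report as the running instance.
```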
5 Create multi-instance broker:
Reference:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/be13682_.htm
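A sketch of the multi-instance broker creation on IIB v9, following the referenced Knowledge Center topic (the broker name IB9NODE is a placeholder; the shared work path is the /Shared/Location/IIB directory prepared in section 3.4):

```shell
# On the ACTIVE server: create the broker against the multi-instance
# queue manager, with its shared work path (-e) on the NFS mount.
mqsicreatebroker IB9NODE -q QM1 -e /Shared/Location/IIB

# On the STANDBY server: add a standby instance of the same broker.
mqsiaddbrokerinstance IB9NODE -e /Shared/Location/IIB

# Start the broker on both servers; one instance runs as active while
# the other waits as standby.
mqsistart IB9NODE
```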
6 Configuring MQ client for failover:
Now we should have the Active/Standby set-up ready for MQ/IIB. Before we declare victory, we need clients to be able to connect to the standby MQ/IIB server in the event of fail-over.
I would say not checking fail-over from the client is like giving up on the one-yard line.
This is the last procedure to be performed to complete this topic of active-standby installation and configuration.
6.1 Automatic Client Reconnection in Java SE
Refer to IBM article http://www-01.ibm.com/support/docview.wss?uid=swg21508357 , for creating bindings file for MQ 7.5.
1. Set-up environment:
(Get details of the installations, and find the installation for which you need to set up the environment:
cat /etc/opt/mqm/mqinst.ini)
$ cat set-mq-75.ksh
#!/usr/bin/ksh
# Name: set-mq-75.ksh
# Purpose: to setup the environment to run MQ 7.5
. /opt/mqm/bin/setmqenv -n Installation1
# Additional MQ 7.5 directories for the PATH
export PATH=$PATH:$MQ_INSTALLATION_PATH/bin:$MQ_INSTALLATION_PATH/java/bin:$MQ_INSTALLATION_PATH/samp/bin:$MQ_INSTALLATION_PATH/samp/jms/samples:
# Add local directory for running Java/JMS programs
export CLASSPATH=$CLASSPATH:.
# end
(Run setup script)
$ . set-mq-75.ksh
2. Create JMSAdmin.config file.
cp $MQ_JAVA_INSTALL_PATH/bin/JMSAdmin.config /var/mqm/JMSAdmin.config
chmod 644 /var/mqm/JMSAdmin.config
(provide values to following three properties)
$cat JMSAdmin.config | grep -v ^#
INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory
PROVIDER_URL=file:/var/mqm/JNDIDirectory/config
SECURITY_AUTHENTICATION=none
3. Create Dir:/var/mqm/JNDIDirectory/config
$mkdir -p /var/mqm/JNDIDirectory/config
4. Script to invoke the JMSAdmin tool:
myJMSAdmin.sh
(This script runs JMSAdmin with the cfg file as input; bindfile.txt is redirected as input to it. The script creates the .bindings file under the /var/mqm/JNDIDirectory/config directory; this location can be changed in JMSAdmin.config.)
$cat myJMSAdmin.sh
#!/usr/bin/ksh
echo "running: JMSAdmin -cfg /var/mqm/JMSAdmin.config"
$MQ_JAVA_INSTALL_PATH/bin/JMSAdmin -cfg /var/mqm/JMSAdmin.config < /var/mqm/JNDIDirectory/bindfile.txt
============= end of script
5. Contents of bindfile.txt:
$cat bindfile.txt
def qcf(QCF_TST) QMGR(QMTST1) TRANSPORT(CLIENT) CHANNEL(TST1.SVR.CHANNEL) CONNAME('192.98.39.01(11414), 192.98.39.02(11414)')
def q(TEST.IN) qu(TEST.IN) qmgr(QMTST1)
def q(APP1.IN) qu(APP1.IN) qmgr(QMTST1)
Where:
QCF_TST is the name of the queue connection factory.
QMTST1 is the name of the queue manager (which will be the same for the active and standby queue manager).
TST1.SVR.CHANNEL is the name of the channel that the QCF will use.
CONNAME is a comma-separated list of queue manager IP(port) entries.
TEST.IN and APP1.IN are the names of the queues that will be included in the .bindings file.
Note: Add queues to bindfile.txt as needed. You can define more than one QCF.
6. Once the myJMSAdmin.sh script is run, the .bindings file is generated under /var/mqm/JNDIDirectory/config.
Use this .bindings file from the client application to connect to MQ queues in the Active/Standby MQ/IIB installation.
In the event of fail-over, the client will be able to connect to the standby server.
Reference:
http://www-01.ibm.com/support/docview.wss?uid=swg21614256
6.2 Mod Proxy plugin fail-over
The previous topic dealt with connecting to MQ queues in the event of fail-over.
For fail-over testing of HTTP traffic in an Active/Standby MQ/IIB installation, refer to the following IBM articles.
http://www.ibm.com/developerworks/websphere/library/techarticles/1306_gupta/1306_gupta.html
http://www.ibm.com/developerworks/websphere/library/techarticles/1308_gupta/1308_gupta.html
7 Conclusion:
Following this procedure, I was able to complete my table-top experiment of installing and configuring MQ/IIB high availability, the whole nine yards, within a few hours. Now you can go and tackle real-world scenarios with ease.
8 Acknowledgements:
Special thanks to my colleague Milan Das for helping me get started on this topic.
9 Reference:
Configuring multi-instance Queue Manager and IBM Integration Bus.
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/fa70160_.htm