
IBM MQ: Why is my cluster sender channel trying to connect to 1414 when I have clearly specified 1415 (or any other port)?

Cluster sender channels seem to get “stuck” sometimes and fall back to port 1414 when trying to find the cluster repository.

The solution:
Delete the channel and create it again
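In runmqsc this is just a DELETE CHANNEL followed by the DEFINE below; a minimal sketch, assuming the stuck channel has the same name (TO.REPO) as the one being recreated:

DELETE CHANNEL(TO.REPO)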

DEFINE CHANNEL(TO.REPO) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('127.0.0.1(1415)') CLUSTER(EXTERNAL.CLIENTS) DESCR('Cluster-sender channel from MYQM01 to repo at MYREPO01')

Tested on MQ 9.0.5.0 and Red Hat Linux 7.5

IBM MQ: Add a server certificate to a queue manager without a CSR

At my workplace we request SSL certificates based on the server and not on the queue manager. Often these servers host more services than just MQ, so a CSR from MQ might not be possible. In that case we need to get the certificate and key into the queue manager keystore without a CSR. Here is how we usually do it

Throughout this example I am going to use the ikeycmd program, normally found at /opt/mqm/java/jre64/jre/bin/ikeycmd in the MQ installation on Linux, and openssl, which can be found on most Linux systems. We will call the queue manager MYQM01 in this example.

First we need to create a kdb file to hold our certificates

ikeycmd -keydb -create -db "/var/mqm/qmgrs/MYQM01/ssl/key.kdb" -pw changeit -type cms -stash

Where:
db is the path to the queue manager's key.kdb file
stash tells ikeycmd to stash the password in a file in the same location as the key.kdb file. This is needed so that MQ can later open the key.kdb file and read its contents
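The path above is the default keystore location for the queue manager, so its SSLKEYR attribute should already point at it. If you place the kdb file somewhere else, SSLKEYR has to be updated to that path without the .kdb extension; a runmqsc sketch for the default location:

ALTER QMGR SSLKEYR('/var/mqm/qmgrs/MYQM01/ssl/key')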

It is now time to add the root certificate and all its intermediate certificates (if any). It is important that this is done in the correct order: from the root down to your certificate
Add root cert:

ikeycmd -cert -add -db "/var/mqm/qmgrs/MYQM01/ssl/key.kdb" -pw changeit -label rootca -file DigicertRoot.crt -format ascii

Add ca cert/s:

ikeycmd -cert -add -db "/var/mqm/qmgrs/MYQM01/ssl/key.kdb" -pw changeit -label intermediateca -file DigiCertCA.crt -format ascii
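Before importing your own certificate it can be worth checking that the chain is actually complete, i.e. that the server certificate verifies against the root and intermediate files (same file names as above and in the next step):

openssl verify -CAfile DigicertRoot.crt -untrusted DigiCertCA.crt my.host.com.crt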

And now to the magic. There are probably many ways to do this but I found creating a p12 file with the certificate and the key to be the simplest
Create the p12 file

openssl pkcs12 -export -in my.host.com.crt -inkey my.host.com.key -out my.host.com.p12 -name "ibmwebspheremqmyqm01"
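If you want to sanity-check the p12 before importing it, openssl can list its contents (it will prompt for the export password); -nokeys just avoids printing the private key:

openssl pkcs12 -info -in my.host.com.p12 -nokeys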

Import the p12 into the queue manager keystore

ikeycmd -cert -import -db my.host.com.p12 -pw changeit -target "/var/mqm/qmgrs/MYQM01/ssl/key.kdb"

Now set the new certificate as the default

ikeycmd -cert -setdefault -db "/var/mqm/qmgrs/MYQM01/ssl/key.kdb" -stashed -label "ibmwebspheremqmyqm01"

Make sure the key* files have the correct permissions

chmod 640 key.*
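If the queue manager is already running it will not pick up the new keystore contents by itself; in runmqsc you can make it reload the SSL/TLS configuration:

REFRESH SECURITY TYPE(SSL)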

Troubleshooting tips

# List personal and ca certificates in the kdb file
/opt/mqm/java/jre64/jre/bin/ikeycmd -cert -list personal -db "/var/mqm/qmgrs/MYQM01/ssl/key.kdb" -pw changeit
/opt/mqm/java/jre64/jre/bin/ikeycmd -cert -list ca -db "/var/mqm/qmgrs/MYQM01/ssl/key.kdb" -pw changeit

# List all default signers for this installation
/opt/mqm/java/jre64/jre/bin/ikeycmd -cert -listsigners

# Check that a certificate is presented on connect
openssl s_client -connect my.host.com:1414

Tested on MQ 9.0.5.0, Red Hat Linux 7.5 and OpenSSL 1.0.2k-fips

IBM MQ: Set up a dedicated “client” queue manager using cluster technology

In this post I am going to show how I set up a dedicated “client” queue manager, where all the queue managers are on the same machine. This can be useful when you want clients to connect to you in a dev or test environment without interfering with your work, while you can still easily help them with their messages.
This example will also show you how to set up any cluster, since the tasks are pretty much the same.

We are going to need two queue managers: the one that we are working on (WORK01), and one for the clients to connect to (CLIENTDEV01). To enable sending messages between them we are going to set up a small cluster
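If the two queue managers do not exist yet, a minimal sketch for creating and starting them with default options:

crtmqm WORK01
strmqm WORK01
crtmqm CLIENTDEV01
strmqm CLIENTDEV01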

We start by making our WORK01 queue manager a full repository and starting the cluster there. All commands are for the runmqsc interpreter but can be done via MQ Explorer if you prefer a GUI

ALTER QMGR REPOS(CLIENTS)

This puts our WORK01 queue manager into a cluster named CLIENTS (which only contains one queue manager at this time)

Now we need a listener for communication…

DEFINE LISTENER(CLUSTER.LISTENER) TRPTYPE(TCP) CONTROL(QMGR) PORT(1420)
START LISTENER(CLUSTER.LISTENER)

This creates a listener for port 1420 and it is started/stopped with the queue manager. This way we don’t have to worry about forgetting to start it

…and channels for messages

DEFINE CHANNEL(TO.WORK01) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('127.0.0.1(1420)') CLUSTER(CLIENTS) DESCR('TCP Cluster-receiver channel for queue manager WORK01')

This creates a cluster-receiver channel that points to our listener and is part of the CLIENTS cluster

Now we are done with our WORK01 queue manager. Time to move on to our CLIENTDEV01 queue manager. This queue manager is not going to be a full repository, so we do not need to run ALTER QMGR REPOS on it. Instead we can choose which objects (queues, topics and so on) should be part of the cluster

We create a listener for communication with the cluster

DEFINE LISTENER(CLUSTER.LISTENER) TRPTYPE(TCP) CONTROL(QMGR) PORT(1421)
START LISTENER(CLUSTER.LISTENER)

Standard TCP listener on port 1421, just like the one for the WORK01 queue manager
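To check that either listener actually started, you can display its status in runmqsc:

DIS LSSTATUS(CLUSTER.LISTENER)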

A few channels to send messages over

DEFINE CHANNEL(TO.CLIENTDEV01) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('127.0.0.1(1421)') CLUSTER(CLIENTS) DESCR('TCP Cluster-receiver channel for the CLIENTDEV01 queue manager')

This channel points to the listener and is meant to receive messages from the cluster CLIENTS

DEFINE CHANNEL(TO.WORK01) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('127.0.0.1(1420)') CLUSTER(CLIENTS) DESCR('Cluster-sender channel from CLIENTDEV01 to the repo WORK01')

Now this needs a little more explaining:
* Cluster sender channel to the full repository on WORK01
* Port (1420) needs to be the same as the listener on the WORK01 queue manager
* The channel name (TO.WORK01) has to be the same as the cluster-receiver channel on the WORK01 queue manager
* Cluster sender channels should ONLY point to full repositories, so we are not going to point any cluster sender channel to our CLIENTDEV01 queue manager, which is a partial repository containing only the objects we define in it
* Cluster channels do not need to be started manually; they are started automatically

Done!

To test, you can now define a queue alias on the CLIENTDEV01 queue manager that points to a queue in the CLIENTS cluster on WORK01 and see if messages get through
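A minimal sketch of such a test, using hypothetical object names (TEST.QUEUE and TEST.ALIAS) and the amqsput/amqsget sample programs:

# On WORK01: a local queue shared in the CLIENTS cluster
runmqsc WORK01: DEFINE QLOCAL(TEST.QUEUE) CLUSTER(CLIENTS)
# On CLIENTDEV01: an alias that resolves to the clustered queue
runmqsc CLIENTDEV01: DEFINE QALIAS(TEST.ALIAS) TARGET(TEST.QUEUE)
# Put a message via the alias on CLIENTDEV01 and read it on WORK01
amqsput TEST.ALIAS CLIENTDEV01
amqsget TEST.QUEUE WORK01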

Troubleshooting tips:

# Ping remote queue manager through its cluster receiver channel
runmqsc: PING CHANNEL(<remote qmanager cluster receiver channel>)
# Display channel status on all channels - here you can see the status of the cluster sender/receiver channels
runmqsc: DIS CHSTATUS(*)
# Displays all cluster queue managers (full or partial) and their cluster names
runmqsc: DIS CLUSQMGR(*)

Tested on MQ 7.1.0.1 (RHEL 6.8) and MQ 9.0.5.0 (RHEL 7.5)