Friday, December 26, 2014

Using "Shared Channels" in a Queue Sharing Group

When IBM introduced the notion of "shared queues" in WebSphere MQ, it gave organizations the ability to build a more fault-tolerant, continuously available environment. Along with this comes the ability to have shared channels.  In this post, I will go over the set-up of shared "sender" as well as shared "receiver" channels.

Shared Sender Channels:

In order to have a shared sender channel, there are three items that need to be set up, plus an optional fourth for triggering.
       
         1. Define a transmission queue (XMITQ) that is shared within the QSG (Queue Sharing Group).

         2. Define a Group/Copy remote queue that uses the shared transmission queue defined in step 1.

         3. Define a Group/Copy Sender channel using the shared transmission queue defined in step 1.

         Optional: (For triggering)

         4. In the transmission queue definition, set the trigger type to FIRST and trigger to YES, set the
             Initiation Queue name to SYSTEM.CHANNEL.INITQ, and finally put the channel name in
             the Trigger Data.
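As a sketch, the four steps above might look like the following MQSC. The QSG name, CF structure name, channel name, and queue names here are all assumptions for illustration:

```
* 1. Shared transmission queue (with the optional triggering from step 4)
DEFINE QLOCAL(QSG1.TO.REMOTE.XMITQ) QSGDISP(SHARED) CFSTRUCT(APPL1) +
       USAGE(XMITQ) TRIGGER TRIGTYPE(FIRST) +
       INITQ(SYSTEM.CHANNEL.INITQ) TRIGDATA(QSG1.TO.REMOTE)

* 2. Group remote queue resolving through the shared XMITQ
DEFINE QREMOTE(REMOTE.TARGET.QUEUE) QSGDISP(GROUP) +
       RNAME(TARGET.QUEUE) RQMNAME(REMOTEQM) +
       XMITQ(QSG1.TO.REMOTE.XMITQ)

* 3. Group sender channel using the shared XMITQ
DEFINE CHANNEL(QSG1.TO.REMOTE) CHLTYPE(SDR) TRPTYPE(TCP) +
       QSGDISP(GROUP) CONNAME('remote.host.example(1414)') +
       XMITQ(QSG1.TO.REMOTE.XMITQ)
```

The QSGDISP(SHARED) on the transmission queue is what puts the messages in the coupling facility where any queue manager in the QSG can serve the channel.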

Now, when a message arrives on the transmission queue, this channel can start on any queue manager in the QSG that has a channel initiator running.  Message sequence numbers will be tracked and stored on SYSTEM.QSG.CHANNEL.SYNCQ, which is a shared queue in the QSG.

For sender channels, that is all that is needed to make them "shared".

Shared Receiver Channels:

Shared receiver channels require a little more set-up, as we will need a separate listener started with an Inbound Disposition (INDISP) of GROUP.  This listener will need to listen on a different port than the listener task started with INDISP(QMGR), the normal listener task for each queue manager.

So in your CHINIT task PROC, in the CSQINPX input data set, you will have two START LISTENER commands: one for the QMGR disposition and one for GROUP.

START LISTENER PORT(1414) TRPTYPE(TCP) INDISP(QMGR)
START LISTENER PORT(1415) TRPTYPE(TCP) INDISP(GROUP)

Now, since each queue manager in the QSG listens on a different IP address, you will need to set up a group IP address (a DVIPA) in Communications Server to be used by Sysplex Distributor, and have this group IP spray connections across the channel initiators listening on the GROUP port.

Sample Set-up: (These are all fictitious IP addresses)

       VIPADEFINE     MOVE IMMEDIATE 255.255.255.0 57.202.125.200
       VIPADISTRIBUTE DISTMETHOD SERVERWLM 57.202.125.200 PORT 1415
                      DESTIP 57.202.125.1 57.202.125.2 57.202.125.3 57.202.125.4
   
In the sample above, we have created a DVIPA (Dynamic VIPA) address of 57.202.125.200, and we have selected to have connections arriving on 57.202.125.200 port 1415 distributed, based on WLM rules, to the four IP addresses listed after the DESTIP tag.  These four IPs are where the GROUP listener of each queue manager's channel initiator is listening on port 1415.

For more specifics on the DVIPA options you can use, see the IBM Information Center.

When an incoming connection for a shared receiver channel arrives, it will be distributed to the least-used channel initiator, and since we have a shared channel, the SYNCQ used for message sequence number tracking will be SYSTEM.QSG.CHANNEL.SYNCQ.  This SYNCQ is also a shared queue and can be seen by all queue managers in the QSG.
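The receiver channel itself is defined like any other receiver, but with QSGDISP(GROUP) so that every queue manager in the QSG holds a copy and can run an instance of it.  A minimal sketch, with an assumed channel name:

```
DEFINE CHANNEL(REMOTE.TO.QSG1) CHLTYPE(RCVR) TRPTYPE(TCP) +
       QSGDISP(GROUP)
```

On the sending side, the partner's sender channel would point its CONNAME at the DVIPA and GROUP port, e.g. CONNAME('57.202.125.200(1415)'), so that Sysplex Distributor can pick the target channel initiator.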

That is all there is to setting up shared sender and receiver channels.

Happy messaging!!!


Tuesday, December 23, 2014

Spinning off your JES Log data

If you have z/OS queue managers that are up and running for a long time between recycles or system IPLs, you have probably seen the JES spool usage for these tasks grow quite large, especially if you have SVRCONN client connections connecting and disconnecting at a high rate.

Well, there is a way to have JES spin off the log at predetermined intervals, and it is really quite easy. This is not just for WebSphere MQ started tasks; it can be used for IMS, CICS, DB2, or virtually any started task that ends up producing a lot of JES messages and using up spool space.

Step one is to ensure that the data set containing the queue manager and channel initiator JCL is included in the master JCL IEFPDSI concatenation. This can be found in the SYS1.PARMLIB member MSTJCLxx.  If a change to this member needs to be made, an IPL will be required, so it might be easier to place the queue manager and CHINIT JCL in a data set already in the concatenation.
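For illustration, the IEFPDSI concatenation in MSTJCLxx might look something like this; the MQM.PROCLIB data set name is an assumption for this example:

```
//IEFPDSI  DD DSN=SYS1.PROCLIB,DISP=SHR
//         DD DSN=MQM.PROCLIB,DISP=SHR
```

If your queue manager JCL already lives in one of the data sets in this concatenation, no MSTJCLxx change (and no IPL) is needed.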

Once you have these PROCs in a master JCL library, you will need to edit them to add some entries that allow the spin-off. The spin-off can be triggered by a number of output lines, or set to occur at a specific time.

We will be changing the PROC statement to a JOB statement. Make sure your MSGCLASS is one that goes to a JHS or other storage software, so you will keep the JES output for possible troubleshooting or historical purposes.

Before:

//MQ1AMSTR PROC

After:

//MQ1AMSTR JOB JESLOG=(SPIN,1K),MSGCLASS=U,MSGLEVEL=1

If you are using any JCL variables in your PROC, these will need to be set with SET statements when changing to the JOB format.
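For example, a symbolic parameter on the PROC statement would move to a SET statement under the JOB format; the THLQ variable here is purely hypothetical:

```
//* Before: PROC with a symbolic parameter
//MQ1AMSTR PROC THLQ='MQM.V710'
//* After: JOB statement plus SET
//MQ1AMSTR JOB JESLOG=(SPIN,1K),MSGCLASS=U,MSGLEVEL=1
// SET THLQ='MQM.V710'
```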

Now, for the SPIN options, you can use the following formats:

 JESLOG=(SPIN,15:00) - SPIN off at 3:00 PM every day
 JESLOG=(SPIN,4000)  - SPIN off every 4,000 lines of output
 JESLOG=(SPIN,2K)    - SPIN off every 2,000 lines of output

You can decide, based on your organization's needs, what the appropriate line count would be, or maybe just spin every day at midnight.

I hope this helps some of you that have long running queue managers that eat up a lot of spool space.

Have fun!!!





Monday, September 22, 2014

Problem in MQ v7.1 using permanent dynamic queues

IBM just released a "Flash Alert" on page set zero chain corruption when using permanent dynamic queues.

After applying UI11858 or its superseding PTF UI13085, some users may experience chain corruption in PSID(0), causing a loop in a queue manager SRB.

Full details of the problem and its symptoms as well as abend details can be found here:

http://www-01.ibm.com/support/docview.wss?uid=swg21684118&myns=swgws&mynp=OCSSFKSJ&mync=E

Happy messaging.