Thursday, November 17, 2016

Weak or broken CipherSpecs are blocked - PTF UI29471 - MQ V8.0

Let's talk about CipherSpecs.

For MQ V8 on z/OS, a PTF (UI29471, released October 8, 2015) will, once installed, prohibit the use of weak or broken CipherSpecs as well as the SSLv3 protocol.

The following CipherSpecs are affected:

RC4_SHA_US                 (SSL 3.0)
RC4_MD5_US                 (SSL 3.0)
TRIPLE_DES_SHA_US          (SSL 3.0)
RC4_MD5_EXPORT             (SSL 3.0)
RC2_MD5_EXPORT             (SSL 3.0)
DES_SHA_EXPORT             (SSL 3.0)
NULL_SHA                   (SSL 3.0)
NULL_MD5                   (SSL 3.0)
TLS_RSA_WITH_DES_CBC_SHA   (TLS 1.0)

To check whether any channels in your z/OS environment are using these CipherSpecs, issue the following commands:

DISPLAY CHL(*) WHERE(SSLCIPH EQ RC4_SHA_US)
DISPLAY CHL(*) WHERE(SSLCIPH EQ RC4_MD5_US)
DISPLAY CHL(*) WHERE(SSLCIPH EQ TRIPLE_DES_SHA_US)
DISPLAY CHL(*) WHERE(SSLCIPH EQ RC4_MD5_EXPORT)
DISPLAY CHL(*) WHERE(SSLCIPH EQ RC2_MD5_EXPORT)
DISPLAY CHL(*) WHERE(SSLCIPH EQ DES_SHA_EXPORT)
DISPLAY CHL(*) WHERE(SSLCIPH EQ NULL_SHA)
DISPLAY CHL(*) WHERE(SSLCIPH EQ NULL_MD5)
DISPLAY CHL(*) WHERE(SSLCIPH EQ TLS_RSA_WITH_DES_CBC_SHA)
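
Alternatively, you can list the CipherSpec of every channel in one pass and scan the output yourself:

DISPLAY CHL(*) SSLCIPH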


If you have channels using one of these CipherSpecs and still want to install the current set of z/OS MQ maintenance, there is a workaround.

To allow weak CipherSpecs, add the following DD statement to your CHIN (channel initiator) address space PROC:

//CSQXWEAK  DD DUMMY

To allow SSLv3-based CipherSpecs, add the following DD statement to your CHIN address space PROC:

//CSQXSSL3  DD DUMMY

If you want to allow both weak and SSLv3-based CipherSpecs, add both DD statements to the CHIN PROC, as in the sketch below.
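
As a sketch, the relevant part of a CHIN PROC with both overrides in place might look like this (the PROC and step names here are placeholders for your own; keep your existing EXEC and STEPLIB statements as they are):

//MQQMCHIN PROC
//PROCSTEP EXEC PGM=CSQXJST,REGION=0M
//STEPLIB  DD DSN=MQM.SCSQANLE,DISP=SHR
//         DD DSN=MQM.SCSQAUTH,DISP=SHR
//CSQXWEAK DD DUMMY            <== allow weak/broken CipherSpecs
//CSQXSSL3 DD DUMMY            <== allow SSLv3-based CipherSpecs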

Once these have been added and your channel initiator has been restarted, you will see the following messages in the JES output:

CSQX691I +MQQM CSQXSSLI Cipher specifications based on the SSLv3 protocol are enabled

CSQX693I +MQQM CSQXSSLI Weak or broken SSL cipher specifications are enabled


Please remember: it is better to get all of your channels onto the current, stronger CipherSpecs, but in some cases with older versions of MQ, using these cannot be avoided.

Happy messaging!!

Wednesday, March 16, 2016

WebSphere MQ Version 8, z/OS 8-byte log RBA - Converting your bootstraps.

In previous versions of WebSphere MQ, the log RBA (relative byte address) was limited to 6 bytes, so enterprises had to monitor the RBA to avoid exhausting it and crashing the queue manager.  Resetting the RBA meant allocating new bootstrap and log data sets and then running the CSQUTIL program with the RESETPAGE FORCE option.

Now, with version 8, the RBA has been extended to 8 bytes, virtually eliminating the possibility of running out of RBA log range.  A 6-byte RBA gave us a 256-terabyte log range; with 8 bytes, we have 16 exabytes of log range, or 64K times more.  A queue manager logging at 100 MB per second would take over 5,000 years to run out of RBA log range.
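
For those who like to see the arithmetic behind those figures:

   6-byte RBA:  2**48 bytes = 256 TB of addressable log
   8-byte RBA:  2**64 bytes = 16 EB, i.e. 2**16 (64K) times more
   At 100 MB/sec: 2**64 bytes / 10**8 bytes/sec = approx. 1.8 x 10**11 seconds, or about 5,800 years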

So how do we convert to the 8-byte RBA?  For a standalone queue manager, you just need to be on MQ version 8 running in NEWFUNC OPMODE.  For a queue manager running in a Queue Sharing Group (QSG), all of the members of the QSG need to be running version 8 in NEWFUNC OPMODE.
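
For reference, OPMODE is set via the CSQ6SYSP macro in your system parameter (ZPARM) module; for V8 new function mode the fragment would look something like this (keep your other CSQ6SYSP parameters as they are):

         CSQ6SYSP OPMODE=(NEWFUNC,800)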

Steps to convert your bootstrap to version 2 (8-byte RBA):

1. Stop the queue manager cleanly.

2. Allocate a new bootstrap data set (or pair of data sets) using different names than those currently used by the queue manager.

3. Run the CSQJUCNV conversion utility using the correct PARM.

      Standalone queue manager:
           PARM=('NOQSG')
      Queue sharing group queue manager:
           PARM=('INQSG,QSGroupName,DataSharingGroupName,DB2Member')

4. Rename the current bootstraps to another name, e.g. a V1 name.

5. Rename the new V2 bootstraps to the names used by the queue manager.

6. Start the queue manager.

7. Verify that the new RBA range is shown in the JES output.

    CSQJ034I <MQM1 CSQJW007 END OF LOG RBA RANGE IS FFFFFFFFFFFFFFFF

    (Before the conversion, this message showed the end of the RBA range as 0000FFFFFFFFFFFF.)
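
If you want to inspect the contents of a BSDS before or after conversion, the print log map utility CSQJU004 can be run against it. A sketch, reusing the library and data set names from this post:

//PRINTMAP EXEC PGM=CSQJU004
//STEPLIB  DD DSN=MQM.SCSQANLE,DISP=SHR
//         DD DSN=MQM.SCSQAUTH,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=COM.MQM1.BSDS01,DISP=SHR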

Below is some sample JCL using just a single bootstrap dataset.

Step 1: Allocate a new bootstrap with a different name:

//STEP1   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 DELETE (COM.MQM1.V8.BSDS01)   <=== NOTICE V8, New name
    DEFINE CLUSTER                          -
           (NAME(COM.MQM1.V8.BSDS01)        -
            VOLUMES(DCPSMS)                 -
            UNIQUE                          -
            SHAREOPTIONS(2 3) )             -
        DATA                                -
           (NAME(COM.MQM1.V8.BSDS01.DATA)   -
            RECORDS(850 60)                 -
            RECORDSIZE(4089 4089)           -
            CONTROLINTERVALSIZE(4096)       -
            FREESPACE(0 20)                 -
            KEYS(4 0) )                     -
       INDEX                                -
          (NAME(COM.MQM1.V8.BSDS01.INDEX)   -
           RECORDS(5 5)                     -
           CONTROLINTERVALSIZE(1024) )

            If you have two bootstraps, you would also allocate a V8.BSDS02.

Step 2: Run the bootstrap conversion utility:

//*  For a queue manager in a QSG, use:
//*    PARM=('INQSG,++QSGNAME++,++DSGNAME++,++DB2SSID++')
//STEP2    EXEC PGM=CSQJUCNV,REGION=32M,
//             PARM=('NOQSG')
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=COM.MQM1.BSDS01,DISP=SHR        <== current BSDS01
//*SYSUT2  DD DSN=COM.MQM1.BSDS02,DISP=SHR        <== current BSDS02
//SYSUT3   DD DSN=COM.MQM1.V8.BSDS01,DISP=OLD     <== new BSDS01
//*SYSUT4  DD DSN=COM.MQM1.V8.BSDS02,DISP=OLD     <== new BSDS02


           In the above JCL, SYSUT2 and SYSUT4 are commented out
           since we are converting a queue manager with a single BSDS.
           If you are converting a queue manager running dual bootstraps,
           then all four SYSUT DD statements will be used.

Step 3: Rename the current bootstraps to a V71 name to move them out of the way:

//STEP3     EXEC PGM=IDCAMS
//SYSPRINT  DD   SYSOUT=*
//SYSIN     DD   *
  ALTER 'COM.MQM1.BSDS01'       NEWNAME('COM.MQM1.V71.BSDS01')
  ALTER 'COM.MQM1.BSDS01.DATA'  NEWNAME('COM.MQM1.V71.BSDS01.DATA')
  ALTER 'COM.MQM1.BSDS01.INDEX' NEWNAME('COM.MQM1.V71.BSDS01.INDEX')
//*

Step 4: Rename the new bootstraps (8 byte RBA) to the name the queue manager uses:

//STEP4     EXEC PGM=IDCAMS
//SYSPRINT  DD   SYSOUT=*
//SYSIN     DD   *
  ALTER 'COM.MQM1.V8.BSDS01'       NEWNAME('COM.MQM1.BSDS01')
  ALTER 'COM.MQM1.V8.BSDS01.DATA'  NEWNAME('COM.MQM1.BSDS01.DATA')
  ALTER 'COM.MQM1.V8.BSDS01.INDEX' NEWNAME('COM.MQM1.BSDS01.INDEX')
//*

Step 5: Start the queue manager and check for the new LOG RBA RANGE:

       CSQJ034I <MQM1 CSQJW007 END OF LOG RBA RANGE IS FFFFFFFFFFFFFFFF


That's it!!

Happy messaging.

Friday, December 26, 2014

Using "Shared Channels" in a Queue Sharing Group

When IBM introduced the notion of "shared queues" in WebSphere MQ, it gave organizations a more fault-tolerant, continuously available environment. Along with shared queues came the ability to have shared channels.  In this post, I will go over the set-up of shared "sender" and shared "receiver" channels.

Shared Sender Channels:

To set up a shared sender channel, there are three items to define (plus a fourth, optional one for triggering); an MQSC sketch follows the list.
       
         1. Define a transmission queue (XMITQ) that is shared within the QSG (Queue Sharing Group).

         2. Define a group/copy remote queue that uses the shared transmission queue defined in step 1.

         3. Define a group/copy sender channel using the shared transmission queue defined in step 1.

         Optional (for triggering):

         4. In the transmission queue definition, set the trigger type to FIRST and turn triggering on.
             Put SYSTEM.CHANNEL.INITQ in the initiation queue name and the channel name in the
             trigger data.
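
As a sketch, here are those definitions in MQSC. The queue, structure, channel, and remote-side names are all made up, so substitute your own:

DEFINE QLOCAL('QSG1.XMITQ') QSGDISP(SHARED) CFSTRUCT('APPSTR') +
       USAGE(XMITQ) TRIGGER TRIGTYPE(FIRST) +
       INITQ('SYSTEM.CHANNEL.INITQ') TRIGDATA('QSG1.TO.REMOTE')

DEFINE QREMOTE('REMOTE.APP.QUEUE') QSGDISP(GROUP) +
       RNAME('APP.QUEUE') RQMNAME('QMR1') XMITQ('QSG1.XMITQ')

DEFINE CHANNEL('QSG1.TO.REMOTE') CHLTYPE(SDR) TRPTYPE(TCP) +
       QSGDISP(GROUP) CONNAME('remote.host.com(1414)') +
       XMITQ('QSG1.XMITQ')

Defining the remote queue and channel with QSGDISP(GROUP) stores them in the shared repository and creates the local copies on each queue manager for you.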

Now, when a message arrives on the transmission queue, the channel can start on any queue manager in the QSG that has a channel initiator running.  Message sequence numbers are tracked and stored on SYSTEM.QSG.CHANNEL.SYNCQ, which is a shared queue in the QSG.

For sender channels, that is all that is needed to make them "shared".

Shared Receiver Channels:

Shared receiver channels require a little more set-up, as we need a separate listener started with an inbound disposition (INDISP) of GROUP.  This listener needs to listen on a different port than the listener started with INDISP(QMGR), the normal listener task for each queue manager.

So in your CHINIT task PROC, in the CSQINPX input data set, you will have two START LISTENER commands, one for QMGR and one for GROUP:

START LISTENER PORT(1414) TRPTYPE(TCP) INDISP(QMGR)
START LISTENER PORT(1415) TRPTYPE(TCP) INDISP(GROUP)

Now, since each queue manager in the QSG will be listening on a different IP address, you will need to set up a group IP (a DVIPA) in Communications Server for use by Sysplex Distributor, and have this group IP spray the connections across the channel initiators listening on the GROUP port.

Sample set-up (these are all fictitious IP addresses):

VIPADEFINE     MOVE IMMEDIATE 255.255.255.0 57.202.125.200
VIPADISTRIBUTE DISTMETHOD SERVERWLM
               57.202.125.200 PORT 1415
               DESTIP 57.202.125.1 57.202.125.2
                      57.202.125.3 57.202.125.4

In the sample above, we have created a DVIPA (dynamic VIPA) address of 57.202.125.200, and we have chosen to have connections arriving at 57.202.125.200 port 1415 distributed, based on WLM rules, to the four IP addresses listed after the DESTIP tag.  These four IPs are where the GROUP listener of each queue manager's channel initiator is listening on port 1415.

For more specifics on the DVIPA options you can use, see the IBM Information Center.

When an incoming connection for a shared receiver channel arrives, it gets distributed to the least-used channel initiator, and since this is a shared channel, the SYNCQ used for message sequence number tracking is SYSTEM.QSG.CHANNEL.SYNCQ.  This SYNCQ is also a shared queue and can be seen by all queue managers in the QSG.
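
Once traffic is flowing, you can confirm an inbound instance is running as a shared channel (the channel name here is made up):

DIS CHSTATUS(REMOTE.TO.QSG1) CHLDISP(SHARED)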

That is all there is to setting up shared sender and receiver channels.

Happy messaging!!!

Tuesday, December 23, 2014

Spinning off your JES Log data

If you have z/OS queue managers that are up and running for a long time between recycles or system IPLs, you have probably seen the JES spool usage for these tasks grow quite large, especially if you have SVRCONN client connections connecting and disconnecting at a high rate.

Well, there is a way to have your JES log spin off at predetermined intervals, and it is really quite easy. This is not just for WebSphere MQ started tasks; it can be used for IMS, CICS, DB2, or virtually any started task that produces a lot of JES messages and uses up spool space.

Step one is to ensure that the data set containing the queue manager and channel initiator JCL is included in the master JCL IEFPDSI concatenation. This is defined in the SYS1.PARMLIB member MSTJCLxx.  If a change to this member needs to be made, an IPL will be required, so it might be easier to place the queue manager and CHINIT JCL in a data set already in the concatenation.
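
For reference, the concatenation inside MSTJCLxx looks something like this (library names are illustrative):

//IEFPDSI  DD DSN=SYS1.PROCLIB,DISP=SHR
//         DD DSN=USER.PROCLIB,DISP=SHR       <== queue manager/CHINIT JCL must be findable here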

Once you have these PROCs in a master JCL library, you will need to edit them to add some entries to enable the spin-off. The spin-off can be triggered by a number of lines of output, or set to occur at a specific time.

We will be changing the PROC statement to a JOB statement. Make sure your MSGCLASS is one that goes to a JHS or other spool-archival software, so you still have the JES output for troubleshooting or historical purposes.

Before:

//MQ1AMSTR PROC

After:

//MQ1AMSTR JOB JESLOG=(SPIN,1K),MSGCLASS=U,MSGLEVEL=1

If you are using any JCL variables in your PROC, these will need to be handled by SET statements when changing to the JOB format.
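
For example, a symbolic that used to be defined on the PROC statement moves to a SET statement. A sketch, with a made-up high-level-qualifier symbolic:

//* Before, as a PROC:
//*  //MQ1AMSTR PROC THLQ=MQM
//* After, as a JOB:
//MQ1AMSTR JOB JESLOG=(SPIN,1K),MSGCLASS=U,MSGLEVEL=1
//         SET THLQ=MQM
//PROCSTEP EXEC PGM=CSQYASCP,REGION=0M
//CSQP0000 DD DISP=SHR,DSN=&THLQ..MQ1A.PSID00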

Now, for the SPIN options, you can have the following formats:

 JESLOG=(SPIN,15:00)  - spin off at 3:00 PM every day
 JESLOG=(SPIN,4000)   - spin off every 4,000 lines of output
 JESLOG=(SPIN,2K)     - spin off every 2,000 lines of output

You can decide, based on your organization's needs, what the appropriate line count would be, or maybe just spin every day at midnight.

I hope this helps some of you that have long running queue managers that eat up a lot of spool space.

Have fun!!!

Monday, September 22, 2014

Problem in MQ v7.1 using permanent dynamic queues

IBM just released a "Flash Alert" on page set zero chain corruption when using permanent dynamic queues.

After applying UI11858 or the superseding PTF UI13085, some users may experience chain corruption in PSID(0), causing a loop in a queue manager SRB.

Full details of the problem and its symptoms as well as abend details can be found here:

http://www-01.ibm.com/support/docview.wss?uid=swg21684118&myns=swgws&mynp=OCSSFKSJ&mync=E

Happy messaging.

Sunday, February 17, 2013

Extending Shared Queues using Shared Message Datasets


In this blog entry I will talk about what Shared Message Datasets (SMDS) are, what problems they solve, and how to implement and manage SMDS in your Queue Sharing Group (QSG).

What are Shared Message Datasets?

Currently, shared message queues and their message data are stored in structures built in the coupling facility (CF).  These structures are defined with a fixed size, so all message queues assigned to a particular structure are constrained by that size.  Once a CF structure is full, reason code 2192 is issued, indicating that the storage medium for that queue is full. In V7.1 new function mode, a new construct called a shared message data set (SMDS) was introduced to prevent CF structures from filling up.  An SMDS is a linear VSAM data set, much like a page set for non-shared queues today, but it is associated with a CF structure.

Implementing Shared Message Datasets

Before we can implement an SMDS architecture, all queue managers in the queue sharing group need to have OPMODE=(NEWFUNC,710) set in the ZPARM.  Once this has been set and the queue managers have been restarted, we can allocate the shared message data sets.

Each Queue Manager in the queue sharing group will have its own copy of the SMDS for a coupling facility structure.  This is designated by the use of an * in the DSGROUP parameter of the CFSTRUCT definition. 

See below:

                 Queue Manager                Queue Manager
                     QMA1                         QMB1
                        \                         /
                         \  Queue Sharing Group  /
                          \       (QM01)        /
                           CFSTRUCT(SHAREQ)
                  DSGROUP('MQVS.QM01.*.SHAREQ.SMDS')

     QMA1 uses 'MQVS.QM01.QMA1.SHAREQ.SMDS'
     QMB1 uses 'MQVS.QM01.QMB1.SHAREQ.SMDS'

You can see the DSGROUP attribute is MQVS.QM01.*.SHAREQ.SMDS. Each queue manager substitutes its own name for the * to determine which SMDS it is to use.

Now, let’s create and format the shared message datasets we want to use.  This is done using two utilities, IDCAMS and CSQJUFMT.

//CRESMDSA   JOB (ABCD,1234),'J LANG',CLASS=C,MSGCLASS=H
//*
//***********************************************************
//*   Allocate the SMDSs
//***********************************************************
//DEFINE   EXEC PGM=IDCAMS,REGION=4M
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
   DELETE 'MQVS.QM01.QMA1.SHAREQ.SMDS' ERASE CLUSTER
   SET MAXCC=0
   DEFINE CLUSTER                        -
          (NAME(MQVS.QM01.QMA1.SHAREQ.SMDS) -
           MEGABYTES(200 300)    -
           LINEAR                        -
           DATACLAS(DCXVSM)              -
           SHAREOPTIONS(2 3) )           -
       DATA                              -
          (NAME(MQVS.QM01.QMA1.SHAREQ.SMDS.DATA) )
/*
//***********************************************************
//*   Format the SMDS
//***********************************************************
//FORM     EXEC PGM=CSQJUFMT,COND=(0,NE),REGION=0M
//STEPLIB  DD  DSN=MQM.SCSQANLE,DISP=SHR
//         DD  DSN=MQM.SCSQAUTH,DISP=SHR
//SYSUT1   DD  DISP=OLD,DSN=MQVS.QM01.QMA1.SHAREQ.SMDS
//SYSPRINT DD  SYSOUT=*

The job above is for the QMA1 queue manager; you will also need to change QMA1 to QMB1 and run the job again. Also, you will notice the DATACLAS parameter above: this data class tells IDCAMS that this VSAM file has extended addressability and can be expanded past the 4 GB limit.  It can actually grow to 16 TB, given that you have that amount of DASD available.

So now we have our shared message data sets for each of our queue managers in the queue sharing group, and we also have each queue manager running with OPMODE=(NEWFUNC,710). We have two options: we can alter our current CFSTRUCT to CFLEVEL(5) and set the appropriate attributes, or we can create a new CFSTRUCT at CFLEVEL(5).  If we are creating a new structure, work will need to be done in the coupling facility policy to allocate it, and the new policy will need to be started to become active; for that option, get with your z/OS systems programmer and they will handle that portion.

For now, let’s look at altering a current CFSTRUCT.

It is easiest to do this in a batch job since several new attributes need to be set at the same time.


//SMDSJOBA  JOB (ABCD,1234),'J LANG',
//    CLASS=C,
//    MSGCLASS=H
//*
//EXTRACT EXEC PGM=CSQUTIL,PARM='QMA1'
//STEPLIB  DD DSN=MQM.SCSQANLE,DISP=SHR
//         DD DSN=MQMQ.QMA1.SCSQAUTH,DISP=SHR
//         DD DSN=MQM.SCSQAUTH,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
COMMAND DDNAME(INPUT)
/*
//*
//INPUT    DD *
ALTER                                                          -
 CFSTRUCT('SHAREQ')                                            -
 DESCR('Level 5 CFSTRUCT')                                     -
 CFLEVEL(5)                                                    -
 RECOVER(YES)                                                  -
 OFFLOAD(SMDS)                                                 -
 OFFLD1TH(70)                                                  -
 OFFLD1SZ(32K)                                                 -
 OFFLD2TH(80)                                                  -
 OFFLD2SZ(4K)                                                  -
 OFFLD3TH(90)                                                  -
 OFFLD3SZ(0K)                                                  -
 DSGROUP('MQVS.QM01.*.SHAREQ.SMDS')                            -
 DSBLOCK(256K)                                                 -
 DSBUFS(100)                                                   -
 DSEXPAND(YES)                                                 -
 RECAUTO(NO)                                                   -
 CFCONLOS(TERMINATE)
/*
//*

Note 1: When changing a CFSTRUCT to CFLEVEL(5), the OFFLOAD parameter must be set to DB2 or SMDS.  In our case, we are using shared message data sets due to the performance impact of using DB2 (a topic for another paper).

Note 2: There are three sets of offload rules that the queue manager uses. Each pairs a percentage-full threshold for the coupling facility structure with a message size; when the structure is that full, new messages larger than the size are offloaded.  In our listing above, we have the following:

            OFFLD1TH(70) – when the coupling facility reaches 70% full
            OFFLD1SZ(32K) – new messages greater than 32K go to the SMDS

            OFFLD2TH(80) – when the coupling facility reaches 80% full
            OFFLD2SZ(4K) – new messages greater than 4K go to the SMDS

            OFFLD3TH(90) – when the coupling facility reaches 90% full
            OFFLD3SZ(0K) – new messages greater than 0K (that is, all new messages) go to the SMDS

Special note: Some data is still stored in the coupling facility for a message that is offloaded to the SMDS, so the larger your messages are, the more benefit you will get from using SMDS.  The third rule says that once the CF is 90% full, all new messages go to the SMDS.  For messages under about 140 bytes, you will not benefit from this rule, and the CF will still become 100% full, because while it is moving, say, 100-byte messages to the SMDS, it is using more space than that for the pointers to those messages.

So now we have our CF structure at CFLEVEL(5), and we have our SMDS allocated.  You need to make sure that the queue manager started task IDs have UPDATE access to the SMDS VSAM data sets (RACF rules). The owning queue manager will have its SMDS open for UPDATE; the other queue managers in the group will have the other SMDSs open read-only.
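
A minimal RACF sketch, assuming generic data set profiles are active and that QMA1USR and QMB1USR are the (made-up) started task user IDs:

ADDSD  'MQVS.QM01.*.SHAREQ.SMDS' UACC(NONE)
PERMIT 'MQVS.QM01.*.SHAREQ.SMDS' ID(QMA1USR QMB1USR) ACCESS(UPDATE)
SETROPTS GENERIC(DATASET) REFRESH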

There are several new commands available to system programmers to aid in managing the SMDS. Below are a few of the most common (substitute your own queue manager name):

/-QMA1 DIS  SMDSCONN(QMA1) CFSTRUCT(SHAREQ)

Displays the status and availability of the SMDS connection for the queue manager.

Output:

CSQM201I -QMA1 CSQMDRTC  DIS SMDSCONN DETAILS
SMDSCONN(QMA1)
CFSTRUCT(SHAREQ)
OPENMODE(UPDATE)
STATUS(OPEN)
AVAIL(NORMAL)
EXPANDST(NORMAL)
 END SMDSCONN DETAILS

/-QMA1 START SMDSCONN(QMA1) CFSTRUCT(SHAREQ)

Start the connection to the SMDS from the queue manager. This will allocate and open
the SMDS if it is new; otherwise this happens automatically.

Output:

-QMA1 CSQMSSMD ' START SMDSCONN' NORMAL COMPLETION

You can see the effect of the START command by displaying the connection again.

While running a job to load the queue, you will see expansion messages in the queue manager job log if it is allocating more space for the SMDS; this is much like normal page set expansion.

CSQE213I -QMA1 CSQEDSS2 SMDS(QMA1) CFSTRUCT(SHAREQ) 
data set MQVS.QM01.QMA1.SHAREQ.SMDS is now 98% full

CSQE239I -QMA1 CSQEDSS2 SMDS(QMA1) CFSTRUCT(SHAREQ) 
data set MQVS.QM01.QMA1.SHAREQ.SMDS has become full so new large 
messages can no longer be stored in it

CSQE212I -QMA1 CSQEDSI1 Formatting is complete for 
SMDS(QMA1) CFSTRUCT(SHAREQ) data set MQVS.QM01.QMA1.SHAREQ.SMDS

CSQE217I -QMA1 CSQEDSI1 Expansion of SMDS(QMA1) 
CFSTRUCT(SHAREQ) data set MQVS.QM01.QMA1.SHAREQ.SMDS was successful
 76860 pages added, total pages 128160

While this is happening, your job or channels will be paused until the expansion completes.

Other SMDS commands (for a complete list of parameters, see the MQ Command Reference):

ALTER SMDS(qmgr|*) CFSTRUCT(cfname) DSBUFS(number)
     DSEXPAND(YES|NO)

DISPLAY SMDS(qmgr|*) CFSTRUCT(cfname) WHERE(filter)

DISPLAY SMDSCONN(qmgr|*) CFSTRUCT(cfname) WHERE(filter)

RESET SMDS(qmgr|*) CFSTRUCT(cfname) ACCESS(ENABLED|DISABLED) STATUS(FAILED|RECOVERED)

START SMDSCONN(qmgr|*) CFSTRUCT(cfname)
     CMDSCOPE(' '|qmgr|*)

STOP SMDSCONN(qmgr|*) CFSTRUCT(cfname)
     CMDSCOPE(' '|qmgr|*)

CMDSCOPE is an optional parameter on these commands.

Testing:

I had two CF structures established of the same size.  One was at CFLEVEL(4); one was at CFLEVEL(5) with an SMDS assigned to the CFSTRUCT.  I attempted to put 50,000 messages of 40,000 bytes each.  In the CFLEVEL(4) structure, I was able to load 4,710 messages onto the queue before RC 2192 was returned, stating that the CF was full.  In the CFLEVEL(5) structure with the SMDS assigned, once the CF reached 70% full the data started going to the SMDS, and I was able to put all 50,000 messages on the shared queue.
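
While a load like this runs, you can watch the offload happen from the console; for example, to see the SMDS space usage for the structure:

/-QMA1 DIS CFSTATUS(SHAREQ) TYPE(SMDS)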

Saturday, May 21, 2011

MQ Version 7 "gotcha" on z/OS

I have gone over the "WebSphere MQ V7.0 Features and Enhancements" book and the "Migration Guide", and this was not mentioned once. I did find a reference in SupportPac MP1G, but that was after the fact and I had already converted to V7.0.1.

In version 7.0.1 of MQ for z/OS, IBM changed the way messages are stored on the page set.  If a message is under 2K, it is stored one message per page.  This was huge for one of my applications, as it kept filling the page set and overflowing to the dead-letter queue, and it took a little while to figure out. Regardless of the design implications, this application receives data all day and then processes it at night, so it did not take long to fill the page set up to the 4 GB limit.  These messages are 153 bytes each, so we were using a 4K page to store 153 bytes.  We went from approximately 20 messages per page to one.

The solution was to allocate a new "extended" VSAM data set to be able to go over the 4 GB limit.  Once this was allocated and formatted, I dynamically added the page set to the queue manager:

-qmgr DEFINE PSID(9) BUFFPOOL(3) DSN('MQM.qmgr.PSID09') EXPAND(USER)
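
For reference, the allocation and formatting that came before that command might look like the following sketch; the size, and the data class carrying the extended-addressability attribute, are site-specific placeholders:

//DEFPS9   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(MQM.qmgr.PSID09)  -
         LINEAR MEGABYTES(3000 600)      -
         DATACLAS(DCEXTAD)               -
         SHAREOPTIONS(2 3) )
/*
//FORMAT   EXEC PGM=CSQUTIL,COND=(0,NE)
//STEPLIB  DD DSN=MQM.SCSQANLE,DISP=SHR
//         DD DSN=MQM.SCSQAUTH,DISP=SHR
//CSQP0009 DD DISP=OLD,DSN=MQM.qmgr.PSID09
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  FORMAT TYPE(NEW)
/*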

Then I created a new storage class called EXTENDED to use this page set. Finally, I had the application process all of the data, stopped all incoming channels so the queue stayed empty, and then reassigned the queue to the new storage class.

Remember, when you add a page set dynamically, you will still need to change the PROC to include the new DD statement for the page set, as well as update the CSQINP1 input data set to include the page set to buffer pool entry:

DEFINE PSID( 09 ) BUFFPOOL( 3 )

as well as the input for CSQINP2, where you assign the storage class:

DEFINE STGCLASS( 'EXTENDED') +
       PSID( 09 )
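
With the storage class defined, reassigning the (empty, closed) queue is a one-liner; the queue name here is made up:

ALTER QLOCAL('APP.DAILY.DATA') STGCLASS('EXTENDED')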

There is also another workaround, described in the MP1G performance SupportPac: having this statement in your CSQINP2 DD input:

REC QMGR (TUNE MAXSHORTMSGS 0)

This will have the queue manager store messages the pre-V7.0.1 way: more than one message per page.

It seems to me that this should have been pointed out in the migration guide, as well as mentioned in the Features and Enhancements book.  Having to search for this information after the fact was not really what I had in mind.