Cron Jobs
A solution is required that integrates AIX "cron" jobs with the
CONTROL-M scheduler. Instead of "cron" running jobs directly, it should
attempt to insert each job into CONTROL-M. If the insertion succeeds,
the job runs through CONTROL-M, which provides the monitoring functions
required by Operations and Production Control. If the insertion fails,
an error should be logged through the "Supervision" system and the job
should be executed locally. This series of steps ensures that the job
always runs when it is scheduled to run, and that it is either
monitored through CONTROL-M or generates a standardized error when it
cannot be inserted into CONTROL-M.
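The logic above can be sketched as a small shell function. The
"run_via_controlm" name is hypothetical, and "logger" stands in for the
site's "logmessage" Supervision function; the ctmcreate path and
parameters are assumptions taken from the sample script later in this
document.

```shell
# Minimal sketch: try to insert the job into CONTROL-M; on failure,
# log a standardized error and run the job locally anyway.
run_via_controlm() {
    typeset ctmcreate="$1"; shift
    typeset cmdline="$1"; shift

    # Attempt the CONTROL-M insertion; any remaining arguments are the
    # ctmcreate parameters (-GROUP, -APPLICATION, and so on).
    "${ctmcreate}" "$@" -CMDLINE "${cmdline}"
    typeset status=$?

    if [ "${status}" -ne 0 ]; then
        # Insertion failed: log through the monitoring system, then run
        # the job locally so it still executes when it is supposed to.
        logger -t cron-controlm \
            "ctmcreate failed (rc=${status}); running locally: ${cmdline}" \
            2>/dev/null || true
        eval "${cmdline}"
        return 1
    fi
    return 0
}
```

In either case the job is executed; the return code only distinguishes
whether CONTROL-M accepted it.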
Several items must be identified before jobs can be scheduled
through CONTROL-M. These items include:
- Datacenter
- Application
- Group
- Member Name
- Job Name
- Job Owner
- Node Group
- Job Description
- Job Command line
- Shout message in event of failure
The datacenter will normally be "FTW", since all
CONTROL-M datacenters are being consolidated into a
single server.
The "APPLICATION" will vary depending on the machine or application
for which each job runs. For instance, all jobs run on any of the OMS
machines should have an application name of "OMS"; likewise, jobs
running on the EXE machines should have an application name of "EXE",
and so on.
The "GROUP" should be descriptive of a job classification for each
job. For instance, on the OMS machines there will be System Health
Check jobs, Connect:Direct jobs, OMS jobs, product-supply-center-specific
jobs, etc. The GROUP name should reflect this division of job types.
The "MEMNAME" should be a job-specific identifier and be unique
enterprise-wide. As an example, the MEMNAME of the weekly "mksysb" job
on the "ftwoms01" machine would be "FTWOMS01.MKSYSB".
The "JOBNAME" should also be a job-specific identifier and be unique
enterprise-wide. This is a more cryptic name and is limited to 10
characters, so it may need to be centrally controlled and assigned on a
job-by-job basis.
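Because the 10-character limit is easy to violate, a wrapper could
validate the JOBNAME before attempting an insertion. This helper and
its name are illustrative only, not part of CONTROL-M itself.

```shell
# Hypothetical sanity check: reject JOBNAME values longer than the
# 10-character CONTROL-M limit before calling ctmcreate.
check_jobname() {
    typeset name="$1"
    if [ "${#name}" -gt 10 ]; then
        echo "JOBNAME '${name}' exceeds 10 characters" >&2
        return 1
    fi
    return 0
}
```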
The "JOBOWNER" will be the user ID under which each job should be
executed on each machine. This may be an individual user or a generic
user such as "mqm" or "dcoms01".
The "NODEGROUP" will be the DNS name of the machine on which each job
will run.
The "DESCRIPTION" will be a textual description of each job, where it
runs, and what it is supposed to do.
The "CMDLINE" is a full path command line of commands, options and
arguments which will be run on each machine for each job.
The "SHOUT" is a textual message which will be sent to the Operations
console in the event that a job does not complete successfully.
The following are example settings for these parameters for a system
health check job running on the machine ftwoms01:
DATACENTER="FTW"
APPLICATION="OMS"
GROUP="SystemHealth"
MEMNAME="FTWOMS01_CHKDISKS"
JOBNAME="CHKDISK000"
OWNER="root"
NODEGRP="ftwoms01"
DESCRIPTION="Check the file system free space and report any problems"
CMDLINE="/home/bin/ecsrun /home/bin/chkdisks"
SHOUT="There is a file system free space problem on ftwoms01"
The following is a sample script that performs the function
described in this document.
#!/usr/dt/bin/dtksh
################################################################
export FPATH="/$( uname -n )/korn"
export CONTROLM="/ftwdqa02/bmc/ctmagent/ctm"
################################################################
# The ctmcreate parameters are defined here
CTMCREATE="/ftwdqa02/bmc/ctmagent/ctm/exe_AIX/ctmcreate"
MEMNAME="tq2file_trigger"
GROUP="TESTHUB"
APPLICATION="MQSeries"
DATACENTER="FTW"
OWNER="mqm"
JOBNAME="MQcontrolm"
NODEGRP="ftwdqa02"
DESCRIPTION="MQSeries/CONTROL-M integration testing"
CMDLINE="/home/bin/ecsrun /home/mjohn00/tq2file/tq2file /home/mjohn00/tq2file/tq2file.cfg"
SHOUT="THIS IS ONLY A TEST. NO ACTION IS NECESSARY."
EXITCODE="0"
################################################################
# The ctmcreate command is executed here
${CTMCREATE} \
    -TASKTYPE COMMAND \
    -GROUP "${GROUP}" \
    -APPLICATION "${APPLICATION}" \
    -NODEGRP "${NODEGRP}" \
    -MEMNAME "${MEMNAME}" \
    -JOBNAME "${JOBNAME}" \
    -OWNER "${OWNER}" \
    -DESCRIPTION "${DESCRIPTION}" \
    -SHOUT NOTOK ECS R "${SHOUT}" \
    -CMDLINE "${CMDLINE}"
STATUS="${?}"
################################################################
# If the ctmcreate command fails to insert a job into
# CONTROL-M, error messages are sent to the "supervision"
# monitoring system using the "logmessage" function.
if [[ "_${STATUS}" = "_" ]]
then
    logmessage -s "$(uname -n)" -w -p -n "opensys" "THIS IS ONLY A TEST, NO ACTION IS NECESSARY. MQSeries/CONTROL-M Integration testing script failed to provide a valid return code while inserting a job into CONTROL-M."
    EXITCODE="1"
fi
if [[ "_${STATUS}" != "_" && ${STATUS} -ne 0 ]]
then
    logmessage -s "$(uname -n)" -w -p -n "opensys" "THIS IS ONLY A TEST, NO ACTION IS NECESSARY. MQSeries/CONTROL-M Integration testing script failed to insert a job into CONTROL-M."
    EXITCODE="2"
fi
exit ${EXITCODE}
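With a wrapper script like the sample above in place, the crontab entry
changes from invoking the job directly to invoking the wrapper. The
path, script name, and schedule below are illustrative only.

```shell
# Hypothetical crontab entry: cron runs the wrapper at 02:00 each
# Sunday; the wrapper inserts the job into CONTROL-M, or runs it
# locally and logs an error if the insertion fails.
0 2 * * 0 /home/bin/ctm_submit >/dev/null 2>&1
```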