TSM Customer Onboarding Template — Bluechip

REUSABLE TEMPLATE v2 · shared pool
Template Variables — Substitute For Each New Customer
Variable          Value
CUSTOMER          BLUECHIP
DOMAIN            BLUECHIP_DOM
PREFIX            BC0
LPAR_LIST         BC01, BC02, BC03
FILE_RETENTION    30 days
MFULL_RETENTION   180 days (6 months)
YFULL_RETENTION   2555 days (7 years)
PRIMARY_POOL      DCPOOL_PRIMARY
DAILY_START       22:00
MFULL_START       20:00 (1st of month)
YFULL_START       18:00 (1st January)
Every BLUECHIP / BC0X / BC0X_* token in the commands below is a substitution point. To onboard ACME with LPARs AC01/AC02/AC03: bulk-replace BLUECHIP → ACME and BC0 → AC0, then adjust retentions in Phase 3 if they differ. The shared storage pool and tiering rule need no change; the schedule definitions carry over as-is, since the customer tokens inside them are covered by the same bulk replace.
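The bulk replace can be scripted; a minimal sketch with sed, assuming the template is saved as plain text in onboard_bluechip.txt (filename illustrative). The longer token BLUECHIP is replaced before the prefix BC0 so nothing is half-substituted:

```shell
# Hypothetical one-shot rebase of this template for ACME (LPARs AC01-03)
sed -e 's/BLUECHIP/ACME/g' \
    -e 's/Bluechip/Acme/g' \
    -e 's/BC0/AC0/g' \
    onboard_bluechip.txt > onboard_acme.txt

# spot-check: zero leftover source tokens expected
grep -ciE 'bluechip|bc0' onboard_acme.txt
```

Review the output by eye before running anything: a substring like BC0 can in principle appear inside unrelated text, which is exactly why the grep spot-check is there.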
Customer:       Bluechip
Domain:         BLUECHIP_DOM
LPARs:          3 × AIX (BC01/02/03)
Nodes:          9 (3 per LPAR)
Mgmt classes:   3 (FILE / MFULL / YFULL)
Storage pool:   DCPOOL_PRIMARY (shared)
Tiering:        Auto via STGRULE
Retentions:     30d / 6m / 7y
Phase 1 Pre-flight — Naming Collision Check
01 Confirm no existing objects clash with the customer namespace PREFLIGHT
# All queries below MUST return zero rows / "object not found" before proceeding
QUERY DOMAIN BLUECHIP_DOM
QUERY NODE BC0*
QUERY SCHEDULE BLUECHIP_DOM *
QUERY CLOPTSET BLUECHIP_*

# Confirm shared pool exists and is healthy (it should already exist from architecture build)
QUERY STGPOOL DCPOOL_PRIMARY F=D | grep -iE "status|util"

Why this matters: TSM/SP object names are global within their type. A clash on a node name with another customer would fail the define, or worse, attach the new client to an existing node's filespace — extremely difficult to unwind cleanly. This 5-second check has saved hours of cleanup more than once.

Phase 2 Define Policy Domain
02 Create the customer policy domain BUILD
DEFINE DOMAIN BLUECHIP_DOM \
  DESCRIPTION="Bluechip - 3 AIX LPARs - daily incr / monthly full / yearly full" \
  BACKRETENTION=30 \
  ARCHRETENTION=365

# Verify
QUERY DOMAIN BLUECHIP_DOM F=D

BACKRETENTION / ARCHRETENTION are domain-level fallback retentions used only when objects cannot bind to a management class (e.g. an MC is deleted while objects still reference it). The real retention controls live on the copy groups in Phase 3 — these are belt-and-braces.

Phase 3 Build Policy Set, Management Classes, Activate
03 Policy set + 3 management classes + copy groups + activate RETENTION
# Policy set (always called STANDARD by convention)
DEFINE POLICYSET BLUECHIP_DOM STANDARD \
  DESCRIPTION="Bluechip policy v1"

# MC #1 — File-level incremental, 30-day retention
DEFINE MGMTCLASS BLUECHIP_DOM STANDARD MC_FILE_30D \
  DESCRIPTION="Daily file-level incremental — 30-day retention"
DEFINE COPYGROUP BLUECHIP_DOM STANDARD MC_FILE_30D STANDARD \
  TYPE=BACKUP \
  DESTINATION=DCPOOL_PRIMARY \
  VEREXISTS=NOLIMIT \
  VERDELETED=1 \
  RETEXTRA=30 \
  RETONLY=30 \
  MODE=MODIFIED \
  SERIALIZATION=SHRSTATIC

# MC #2 — Monthly full, 6-month retention (180d / 6 versions)
DEFINE MGMTCLASS BLUECHIP_DOM STANDARD MC_MFULL_6M \
  DESCRIPTION="Monthly full — 180-day retention (6 versions)"
DEFINE COPYGROUP BLUECHIP_DOM STANDARD MC_MFULL_6M STANDARD \
  TYPE=BACKUP \
  DESTINATION=DCPOOL_PRIMARY \
  VEREXISTS=6 \
  VERDELETED=2 \
  RETEXTRA=180 \
  RETONLY=180 \
  MODE=ABSOLUTE \
  SERIALIZATION=SHRSTATIC

# MC #3 — Yearly full, 7-year retention (2555d / 7 versions)
DEFINE MGMTCLASS BLUECHIP_DOM STANDARD MC_YFULL_7Y \
  DESCRIPTION="Yearly full — 2555-day retention (7 versions)"
DEFINE COPYGROUP BLUECHIP_DOM STANDARD MC_YFULL_7Y STANDARD \
  TYPE=BACKUP \
  DESTINATION=DCPOOL_PRIMARY \
  VEREXISTS=7 \
  VERDELETED=2 \
  RETEXTRA=2555 \
  RETONLY=2555 \
  MODE=ABSOLUTE \
  SERIALIZATION=SHRSTATIC

# Set the default management class
ASSIGN DEFMGMTCLASS BLUECHIP_DOM STANDARD MC_FILE_30D

# Validate before activating — fail-fast on policy errors
VALIDATE POLICYSET BLUECHIP_DOM STANDARD
# Expect: "ANR1515I Policy set STANDARD validated"

# Activate
ACTIVATE POLICYSET BLUECHIP_DOM STANDARD

# Confirm active policy is what we expect
QUERY POLICYSET BLUECHIP_DOM ACTIVE
QUERY MGMTCLASS BLUECHIP_DOM ACTIVE
QUERY COPYGROUP BLUECHIP_DOM ACTIVE STANDARD F=D
Retention parameters at a glance:
Mgmt class    VEREXISTS  VERDELETED  RETEXTRA  RETONLY  MODE
MC_FILE_30D   NOLIMIT    1           30        30       MODIFIED
MC_MFULL_6M   6          2           180       180      ABSOLUTE
MC_YFULL_7Y   7          2           2555      2555     ABSOLUTE

Reading the parameters: VEREXISTS = how many versions to keep while the file still exists. VERDELETED = how many versions to keep after the file is deleted. RETEXTRA = days to keep a version after it's been superseded. RETONLY = days to keep the last remaining version after the file is deleted. MODE=ABSOLUTE on full schedules forces unchanged files to re-send each cycle so they get re-bound to the long-retention MC.

7 yearly versions is intentional. With VEREXISTS=7 and RETEXTRA=2555, after 7 years of yearly fulls you'll hold all 7 simultaneously. The 8th yearly will start expiring the oldest. If your retention requirement is strictly 7 years total (not "the most recent 7 yearlies"), set VEREXISTS=7 and rely on RETEXTRA for time-based aging — the configuration above does exactly this.
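The interaction of VEREXISTS and RETEXTRA above can be sanity-checked with a quick model — a hedged sketch in plain awk, not real server behavior: it assumes exactly one full per year and approximates RETEXTRA=2555 days as ~7 years.

```shell
# Model: how many yearly versions are held after each yearly full,
# under VEREXISTS=7 / RETEXTRA=2555 from MC_YFULL_7Y
awk 'BEGIN {
  verexists = 7
  retextra_years = 2555 / 365          # ~7 years
  for (year = 1; year <= 10; year++) {
    held = 0
    for (v = 1; v <= year; v++) {
      newer = year - v                 # newer fulls = years since v was superseded
      if (v == year || (newer < verexists && newer < retextra_years))
        held++
    }
    printf "after yearly #%d: %d versions held\n", year, held
  }
}'
# steady state: 7 versions held from yearly #7 onward
```

The model shows both limits biting at the same point, which is why the paragraph above can say the configuration satisfies either reading of "7 years".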

Activate is one-way without rollback. Once ACTIVATE POLICYSET runs, the active policy replaces what was active before. Always VALIDATE first. To revert, you re-edit the inactive policy set and re-activate.

Phase 4 Client Option Sets — Bind Data to Correct MC
04 Define server-pushed include rules per backup purpose BIND
# Customer-prefixed cloptsets — keeps namespace clean across customers
DEFINE CLOPTSET BLUECHIP_COPT_FILE \
  DESCRIPTION="Bluechip — bind data to MC_FILE_30D"
DEFINE CLIENTOPT BLUECHIP_COPT_FILE INCLEXCL "INCLUDE /.../* MC_FILE_30D"

DEFINE CLOPTSET BLUECHIP_COPT_MFULL \
  DESCRIPTION="Bluechip — bind data to MC_MFULL_6M"
DEFINE CLIENTOPT BLUECHIP_COPT_MFULL INCLEXCL "INCLUDE /.../* MC_MFULL_6M"

DEFINE CLOPTSET BLUECHIP_COPT_YFULL \
  DESCRIPTION="Bluechip — bind data to MC_YFULL_7Y"
DEFINE CLIENTOPT BLUECHIP_COPT_YFULL INCLEXCL "INCLUDE /.../* MC_YFULL_7Y"

# Verify
QUERY CLOPTSET BLUECHIP_COPT_*
QUERY CLIENTOPT BLUECHIP_COPT_FILE
QUERY CLIENTOPT BLUECHIP_COPT_MFULL
QUERY CLIENTOPT BLUECHIP_COPT_YFULL

Why customer-prefixed cloptsets: cloptset names are global. Without the customer prefix, two customers with similar requirements would end up sharing a cloptset, which makes per-customer changes risky. Prefix them and each customer gets their own isolated set even though they bind to identically-named MCs (which are scoped to the domain, so naming clashes are fine across domains).

Phase 5 Node Registration
05 Register all 9 nodes — bound to domain and cloptset REGISTER
# File-level nodes (3) — daily incremental → MC_FILE_30D
REGISTER NODE BC01_FILE <PASSWORD> \
  DOMAIN=BLUECHIP_DOM CLOPTSET=BLUECHIP_COPT_FILE \
  COMPRESSION=NO MAXNUMMP=4 \
  CONTACT="Bluechip BC01 — daily file-level incremental"
REGISTER NODE BC02_FILE <PASSWORD> \
  DOMAIN=BLUECHIP_DOM CLOPTSET=BLUECHIP_COPT_FILE \
  COMPRESSION=NO MAXNUMMP=4 \
  CONTACT="Bluechip BC02 — daily file-level incremental"
REGISTER NODE BC03_FILE <PASSWORD> \
  DOMAIN=BLUECHIP_DOM CLOPTSET=BLUECHIP_COPT_FILE \
  COMPRESSION=NO MAXNUMMP=4 \
  CONTACT="Bluechip BC03 — daily file-level incremental"

# Monthly-full nodes (3) → MC_MFULL_6M
REGISTER NODE BC01_MFULL <PASSWORD> \
  DOMAIN=BLUECHIP_DOM CLOPTSET=BLUECHIP_COPT_MFULL \
  COMPRESSION=NO MAXNUMMP=4 \
  CONTACT="Bluechip BC01 — monthly full"
REGISTER NODE BC02_MFULL <PASSWORD> \
  DOMAIN=BLUECHIP_DOM CLOPTSET=BLUECHIP_COPT_MFULL \
  COMPRESSION=NO MAXNUMMP=4 \
  CONTACT="Bluechip BC02 — monthly full"
REGISTER NODE BC03_MFULL <PASSWORD> \
  DOMAIN=BLUECHIP_DOM CLOPTSET=BLUECHIP_COPT_MFULL \
  COMPRESSION=NO MAXNUMMP=4 \
  CONTACT="Bluechip BC03 — monthly full"

# Yearly-full nodes (3) → MC_YFULL_7Y
REGISTER NODE BC01_YFULL <PASSWORD> \
  DOMAIN=BLUECHIP_DOM CLOPTSET=BLUECHIP_COPT_YFULL \
  COMPRESSION=NO MAXNUMMP=4 \
  CONTACT="Bluechip BC01 — yearly full"
REGISTER NODE BC02_YFULL <PASSWORD> \
  DOMAIN=BLUECHIP_DOM CLOPTSET=BLUECHIP_COPT_YFULL \
  COMPRESSION=NO MAXNUMMP=4 \
  CONTACT="Bluechip BC02 — yearly full"
REGISTER NODE BC03_YFULL <PASSWORD> \
  DOMAIN=BLUECHIP_DOM CLOPTSET=BLUECHIP_COPT_YFULL \
  COMPRESSION=NO MAXNUMMP=4 \
  CONTACT="Bluechip BC03 — yearly full"

# Verify all 9 nodes
QUERY NODE BC0* F=D
SELECT node_name,domain_name,cloptset_name FROM nodes WHERE node_name LIKE 'BC0%'

COMPRESSION=NO at register: the shared DCPOOL_PRIMARY is a directory-container pool with inline compression. Client-side compression on top of that is wasted CPU on the AIX LPAR and shreds dedup matching (compressed bytes don't dedup against uncompressed equivalents from another customer). Always set COMPRESSION=NO when the destination is a container pool with inline compression enabled.

MAXNUMMP=4: allows up to 4 concurrent mount points per node, paired with client-side RESOURCEUTILIZATION for multi-stream backups. <PASSWORD> placeholder is set per node at register; on first client login with passwordaccess generate in dsm.sys, the client rotates it and writes to the local TSM password store.

Phase 6 Schedules & Node Associations
06 Define 3 schedules and associate the right nodes to each SCHEDULE
# Daily incremental — every day, 22:00
DEFINE SCHEDULE BLUECHIP_DOM SCH_DAILY_INCR \
  DESCRIPTION="Bluechip - daily file-level incremental" \
  ACTION=INCREMENTAL \
  OPTIONS="-domain=all-local" \
  STARTDATE=TODAY+1 STARTTIME=22:00 \
  SCHEDSTYLE=CLASSIC \
  PERIOD=1 PERUNITS=DAYS \
  DURATION=2 DURUNITS=HOURS \
  PRIORITY=5

# Monthly full — 1st of every month, 20:00, mode=absolute
DEFINE SCHEDULE BLUECHIP_DOM SCH_MONTHLY_FULL \
  DESCRIPTION="Bluechip - monthly full (mode=absolute)" \
  ACTION=INCREMENTAL \
  OPTIONS="-mode=absolute -domain=all-local" \
  SCHEDSTYLE=ENHANCED \
  MONTH=ANY DAYOFMONTH=1 \
  STARTTIME=20:00 \
  DURATION=4 DURUNITS=HOURS \
  PRIORITY=3

# Yearly full — 1st January, 18:00, mode=absolute
DEFINE SCHEDULE BLUECHIP_DOM SCH_YEARLY_FULL \
  DESCRIPTION="Bluechip - yearly full (mode=absolute)" \
  ACTION=INCREMENTAL \
  OPTIONS="-mode=absolute -domain=all-local" \
  SCHEDSTYLE=ENHANCED \
  MONTH=JANUARY DAYOFMONTH=1 \
  STARTTIME=18:00 \
  DURATION=6 DURUNITS=HOURS \
  PRIORITY=2

# Associate nodes with schedules
DEFINE ASSOCIATION BLUECHIP_DOM SCH_DAILY_INCR \
  BC01_FILE,BC02_FILE,BC03_FILE
DEFINE ASSOCIATION BLUECHIP_DOM SCH_MONTHLY_FULL \
  BC01_MFULL,BC02_MFULL,BC03_MFULL
DEFINE ASSOCIATION BLUECHIP_DOM SCH_YEARLY_FULL \
  BC01_YFULL,BC02_YFULL,BC03_YFULL

# Verify schedules and associations
QUERY SCHEDULE BLUECHIP_DOM F=D
QUERY ASSOCIATION BLUECHIP_DOM

1st January overlap: on Jan 1 each year, all three schedules fire — daily at 22:00, monthly at 20:00 (1st of month), yearly at 18:00 (1st Jan). Three sequential ingest waves on the same LPAR within 4 hours. If network/throughput is constrained, offset the monthly to DAYOFMONTH=2 to give yearly clear air. Document the choice either way.

Phase 7 End-to-End Verification
07 Walk the full object graph and confirm everything ties together VERIFY
# Policy layer
QUERY DOMAIN BLUECHIP_DOM F=D
QUERY POLICYSET BLUECHIP_DOM ACTIVE F=D
QUERY MGMTCLASS BLUECHIP_DOM ACTIVE F=D
QUERY COPYGROUP BLUECHIP_DOM ACTIVE STANDARD F=D

# Cloptsets & nodes — confirm all 9 nodes bound correctly
QUERY CLOPTSET BLUECHIP_COPT_*
SELECT node_name,domain_name,cloptset_name,maxmp,contact \
  FROM nodes WHERE node_name LIKE 'BC0%' ORDER BY node_name

# Schedules and associations
QUERY SCHEDULE BLUECHIP_DOM F=D
QUERY ASSOCIATION BLUECHIP_DOM

# End-to-end count check
SELECT COUNT(*) AS "Bluechip nodes" FROM nodes WHERE domain_name='BLUECHIP_DOM'
SELECT COUNT(*) AS "Bluechip schedules" FROM client_schedules WHERE domain_name='BLUECHIP_DOM'
SELECT COUNT(*) AS "Bluechip MCs" FROM mgmtclasses WHERE domain_name='BLUECHIP_DOM' AND set_name='ACTIVE'
# Expected: 9 nodes, 3 schedules, 3 MCs

Server-side build complete. The 3 LPARs cannot back up yet — the AIX clients still need dsm.sys / dsm.opt with TCPSERVERADDRESS, NODENAME, PASSWORDACCESS, and the dsmcad scheduler started. Three nodes per LPAR means three stanzas in dsm.sys and three dsmcad instances (or three named-server invocations).
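The three-stanza dsm.sys layout might look like the sketch below — illustrative only: the server address, port, log paths, and exact option spellings are placeholders to adapt to your environment and client level, not values from this build.

```
* dsm.sys on BC01 — one stanza per node
* (third stanza, for BC01_YFULL, follows the same pattern)
SErvername        bluechip_file
  TCPServeraddress  sp-server.example.com
  TCPPort           1500
  NODename          BC01_FILE
  PASSWORDAccess    generate
  MANAGEDServices   schedule
  RESOURceutilization 4
  SCHEDLOGname      /var/tsm/sched_file.log

SErvername        bluechip_mfull
  TCPServeraddress  sp-server.example.com
  TCPPort           1500
  NODename          BC01_MFULL
  PASSWORDAccess    generate
  MANAGEDServices   schedule
  RESOURceutilization 4
  SCHEDLOGname      /var/tsm/sched_mfull.log
```

Each dsmcad instance is then pointed at its own dsm.opt, whose SErvername line selects one stanza — that is how three scheduler daemons on one LPAR stay bound to three different nodes.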

Phase 8 SQL Reporting — Backup Job Status
08 Daily backup result query — what failed, what missed, what completed REPORT
# events table — schedule status codes you'll see
#   Completed — schedule ran and succeeded
#   Failed    — client reported a non-zero RC
#   Missed    — schedule window passed and the client never connected
#   Severed   — connection dropped mid-backup
#   Started   — currently running
#   Future    — schedule defined for a future window, not yet due

# Query 1 — last 24h schedule outcomes for this customer
SELECT \
  schedule_name AS "Schedule", \
  node_name AS "Node", \
  status AS "Status", \
  actual_start AS "Started", \
  completed AS "Finished", \
  result AS "RC" \
FROM events \
WHERE domain_name = 'BLUECHIP_DOM' \
  AND scheduled_start > (current_timestamp - 24 hours) \
ORDER BY scheduled_start DESC, node_name

# Query 2 — only failures & missed in the last 24h (the alert query)
SELECT \
  node_name AS "Node", \
  schedule_name AS "Schedule", \
  status AS "Status", \
  result AS "RC", \
  reason AS "Reason" \
FROM events \
WHERE domain_name = 'BLUECHIP_DOM' \
  AND status IN ('Failed','Missed','Severed') \
  AND scheduled_start > (current_timestamp - 24 hours) \
ORDER BY scheduled_start DESC

# Query 3 — bytes transferred + duration per node (last 7 days)
# summary table holds completed-activity stats; entity = node, activity = type
SELECT \
  entity AS "Node", \
  activity AS "Type", \
  DATE(start_time) AS "Date", \
  bytes/1024/1024/1024 AS "GB", \
  affected AS "Files", \
  (end_time-start_time) SECONDS AS "Runtime", \
  successful AS "OK?" \
FROM summary \
WHERE entity LIKE 'BC0%' \
  AND activity = 'BACKUP' \
  AND start_time > (current_timestamp - 7 days) \
ORDER BY entity, start_time DESC

# Query 4 — last successful backup per node (gaps = stale data!)
SELECT \
  node_name AS "Node", \
  MAX(actual_start) AS "Last Run", \
  (current_timestamp - MAX(actual_start)) DAYS AS "Days Ago" \
FROM events \
WHERE domain_name = 'BLUECHIP_DOM' \
  AND status = 'Completed' \
GROUP BY node_name \
ORDER BY node_name

# Query 5 — per-node occupancy (logical vs stored = dedup ratio)
SELECT \
  node_name AS "Node", \
  filespace_name AS "Filespace", \
  num_files AS "Files", \
  logical_mb/1024 AS "Logical GB", \
  reporting_mb/1024 AS "Stored GB" \
FROM occupancy \
WHERE node_name LIKE 'BC0%' \
ORDER BY node_name, filespace_name
Status code reference for the events table:
Status       Meaning                                    Action
Completed    Schedule succeeded                         Nothing
Failed       Client started but reported non-zero RC    Check client dsmerror.log
Missed       Window expired without client connecting   Check dsmcad service / network
Severed      Connection dropped mid-backup              Likely network or client crash
Started      Currently running                          Wait or check QUERY SESS
Future       Schedule not yet due                       Nothing
Pending      In window, client not yet connected        Watch — may transition
InProgress   Client connected, scheduling phase         Watch
09 Cross-customer dashboard query (multi-tenant view) REPORT
# Per-customer success/failure summary, last 24h
SELECT \
  domain_name AS "Customer", \
  SUM(CASE WHEN status='Completed' THEN 1 ELSE 0 END) AS "OK", \
  SUM(CASE WHEN status='Failed' THEN 1 ELSE 0 END) AS "Failed", \
  SUM(CASE WHEN status='Missed' THEN 1 ELSE 0 END) AS "Missed", \
  SUM(CASE WHEN status='Severed' THEN 1 ELSE 0 END) AS "Severed", \
  COUNT(*) AS "Total" \
FROM events \
WHERE scheduled_start > (current_timestamp - 24 hours) \
  AND status <> 'Future' \
GROUP BY domain_name \
ORDER BY domain_name

# Per-customer occupancy & growth (logical vs stored)
SELECT \
  SUBSTR(node_name,1,4) AS "Customer Pfx", \
  COUNT(DISTINCT node_name) AS "Nodes", \
  SUM(num_files) AS "Files", \
  SUM(logical_mb)/1024 AS "Logical GB", \
  SUM(reporting_mb)/1024 AS "Stored GB" \
FROM occupancy \
GROUP BY SUBSTR(node_name,1,4) \
ORDER BY 1

The "Stored GB" column reflects per-node attribution after dedup. Sum of all customers' "Stored GB" should be close to the actual pool occupancy. Discrepancies are usually cross-customer dedup savings — files that exist on multiple customer nodes are stored once but logically attributed to each.
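To turn that occupancy output into a per-customer ratio, the -comma export can be post-processed with awk. A sketch with made-up numbers — the column order (prefix, nodes, files, logical GB, stored GB) is assumed to match the SELECT above, not taken from real server output:

```shell
# Illustrative sample mimicking the per-customer occupancy query's -comma output
cat > /tmp/occ.csv <<'EOF'
BC01,9,120000,5400,1800
AC01,6,80000,2100,900
EOF

# logical ÷ stored = effective dedup+compression ratio per customer
awk -F, '{ printf "%s dedup ratio %.1f:1\n", $1, $4 / $5 }' /tmp/occ.csv
```

On the sample rows this prints a 3.0:1 ratio for the BC01 prefix and 2.3:1 for AC01 — the kind of number worth tracking month over month, since a falling ratio can signal client-side compression creeping back in.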

10 Wrap dsmadmc in a non-interactive shell call (for cron / monitoring) AUTOMATE
# Pattern — dsmadmc one-shot SQL with stable output
dsmadmc -id=admin -password=<pwd> \
  -dataonly=yes -displaymode=table \
  "SELECT node_name, schedule_name, status, result" \
  "FROM events WHERE domain_name='BLUECHIP_DOM'" \
  "AND scheduled_start > (current_timestamp - 24 hours)" \
  "AND status IN ('Failed','Missed','Severed')"

# Common flags for scripted use
#   -dataonly=yes         suppress headers & banners
#   -displaymode=table    preserve column alignment
#   -displaymode=list     one field per line — easier for awk/grep parsing
#   -outfile=/tmp/x.txt   write to file rather than stdout
#   -comma                comma-separated output for CSV ingest
#   -tab                  tab-separated output

# CSV-friendly variant — pipe straight into anything
dsmadmc -id=admin -password=<pwd> -dataonly=yes -comma \
  "SELECT node_name,status,result,actual_start FROM events" \
  "WHERE domain_name='BLUECHIP_DOM' AND scheduled_start > (current_timestamp - 24 hours)" \
  > /tmp/bluechip_24h.csv

# Exit-code-driven alerting (for cron + email)
#!/bin/ksh
# Runs at 06:00 daily — alerts on any failure/miss in last 24h
# (note: variable renamed from PWD — PWD is the shell's working-directory variable)
PW="$(cat /etc/sp/admin.pw)"
OUT=$(dsmadmc -id=admin -password=$PW -dataonly=yes -comma \
  "SELECT COUNT(*) FROM events WHERE domain_name='BLUECHIP_DOM' AND scheduled_start > (current_timestamp - 24 hours) AND status IN ('Failed','Missed','Severed')")
if [ "$OUT" -gt 0 ]; then
  dsmadmc -id=admin -password=$PW -dataonly=yes \
    "SELECT node_name,schedule_name,status,result FROM events WHERE domain_name='BLUECHIP_DOM' AND scheduled_start > (current_timestamp - 24 hours) AND status IN ('Failed','Missed','Severed')" \
    | mailx -s "Bluechip SP failures: $OUT" backups@example.com
fi

Don't put the admin password in cron-readable scripts. Either use a dsmadmc wrapper that pulls from a 600-permissioned file (as above), or — better — generate an admin password file with dsmadmc -id=admin -password=<pw> -genpwfile and reference it with -passwordaccess=generate. For scripted operations, register a dedicated read-only admin (e.g. SP_REPORT) with SYSTEM privilege restricted to QUERY/SELECT via SP's command authority controls.

Output format hierarchy for parsing ease: -comma > -tab > -displaymode=table > -displaymode=list. Use -comma for anything you'll parse with awk/perl/python; use -displaymode=table for human-readable email reports.
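A sketch of the awk side, using made-up rows in the same column order as the CSV variant above (node, status, rc, start) — the data is illustrative, not real dsmadmc output:

```shell
# Sample of what the -comma export might contain
cat > /tmp/bluechip_24h.csv <<'EOF'
BC01_FILE,Completed,0,2024-01-02 22:00:13
BC02_FILE,Failed,12,2024-01-02 22:00:15
BC03_MFULL,Missed,,2024-01-01 20:00:00
EOF

# print everything that needs attention, with its return code
awk -F, '$2 != "Completed" { print $1, $2, "rc=" $3 }' /tmp/bluechip_24h.csv
```

On the sample this emits the Failed and Missed rows only — the Missed row has an empty rc field, which is expected since the client never connected.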

Quick Reference Step Summary
Command                               Purpose                             Notes
QUERY DOMAIN/NODE/SCHEDULE/CLOPTSET   Pre-flight collision check          All must return zero rows
DEFINE DOMAIN BLUECHIP_DOM            Customer-scoped policy domain       Holds 1 policy set, 3 MCs
DEFINE POLICYSET STANDARD             Inactive policy set under domain    Always called STANDARD
DEFINE MGMTCLASS × 3                  FILE_30D / MFULL_6M / YFULL_7Y      One per backup purpose
DEFINE COPYGROUP × 3                  Retention parameters per MC         Destination = DCPOOL_PRIMARY
ASSIGN DEFMGMTCLASS … MC_FILE_30D     Default MC for unbound data         Required before activate
VALIDATE/ACTIVATE POLICYSET           Validate then promote to ACTIVE     Activate is one-way
DEFINE CLOPTSET × 3 + CLIENTOPT       Server-pushed include rules         Customer-prefixed names
REGISTER NODE × 9                     BC0{1,2,3}_{FILE,MFULL,YFULL}       COMPRESSION=NO for container pool
DEFINE SCHEDULE × 3                   Daily / monthly / yearly            mode=absolute on full schedules
DEFINE ASSOCIATION × 3                Bind nodes to schedules             By purpose suffix
SELECT … FROM events WHERE status…    Daily backup result reporting       Phase 8 SQL queries
SELECT … FROM summary WHERE entity…   Bytes/files transferred per node    7-day window typical
SELECT … FROM occupancy               Per-node logical vs stored          Cross-customer dedup view
⚠ Key Notes