Worked example: All commands assume lpar01 as the client (already a defined NIM machine object on dr_nim_01), hdisk0 = live 7.1 rootvg, hdisk1 = unassigned spare, lpp_source lpp_aix72tl5sp10, SPOT spot_aix72tl5sp10, working VG nimadmvg, media path /export/nim/media/AIX_7.2_TL5_SP10/. Sub in your own values throughout.
Out of scope: new installations / "preservation" installs · PowerHA / HACMP-clustered nodes (cluster must be quiesced; migrate one node at a time per IBM's PowerHA migration procedure) · LPAR migrations (LPM is unrelated to nimadm). For 7.1 → 7.3 the procedure is identical — use a 7.3 NIM master and target a 7.3 lpp_source.
SP version drift: SP versions evolve. Confirm the latest 7.2 TL5 SP available on IBM Entitled Software Support (ESS) and target that — there is no benefit to migrating to an older SP and then patching forward.
oslevel -s on master must be ≥ 7200-05-10. Third-party software compatibility matters more for a TL migration than for a service pack update. Verify against AIX 7.2 TL5:
| Component | What to check |
|---|---|
| TSM / IBM Spectrum Protect client | Per IBM support matrix for AIX 7.2 TL5 SPx supported client versions. Older clients may need upgrading post-migration. |
| PowerHA / HACMP | Hard version dependency. Do not migrate clustered nodes ad-hoc. |
| Oracle, DB2, SAP | Each has its own AIX 7.2 certification matrix. |
| Kernel extensions | genkex shows the loaded list — anything 3rd-party must be re-validated. |
| Java, OpenSSL, OpenSSH | Ship at newer baselines on 7.2. |
| Monitoring / AV agents | ITM, Nagios, Tanium, etc. — confirm with vendor. |
Skip-ahead: If lpp_source and SPOT already exist for another client at the same target level, jump to step 06.
Hard rule: NIM master must be at or above the target client level. A 7.1 master cannot migrate a client to 7.2 — there's no "upgrade" path mid-flight. If the master is below target, upgrade the master first.
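A minimal sketch of the gate check on the master, using the worked-example target level:

```shell
# On the NIM master: the master's own level must meet or exceed the
# target client level (7200-05-10 in the worked example).
oslevel -s                       # expect 7200-05-10-XXXX or later
lslpp -L bos.sysmgt.nim.master   # confirm the NIM master fileset is installed
```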
The 7.2 install media from ESS comes as one or more ISOs (typically AIX_7200-05-10_DVD_1_of_2.iso and _2_of_2.iso).
✓ Verify: SHA256 from csum matches the value listed on the ESS download page for each ISO. A bad ISO becomes a corrupt lpp_source — catch it at the doorstep.
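The checksum step, sketched with the worked-example media path and ISO names:

```shell
# On the NIM master, after download: compute SHA-256 per ISO and compare
# against the values listed on the ESS download page.
cd /export/nim/media/AIX_7.2_TL5_SP10
csum -h SHA256 AIX_7200-05-10_DVD_1_of_2.iso
csum -h SHA256 AIX_7200-05-10_DVD_2_of_2.iso
```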
The lpp_source is the fileset repository — a directory of .bff files extracted from the install media. ISO 1 builds the initial lpp_source; ISO 2's contents are layered in via bffcreate.
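A sketch of the two-ISO build, assuming /mnt/iso1, /mnt/iso2, and /export/nim/lpp_source/lpp_aix72tl5sp10 as mount points and lpp_source location (adjust to your layout):

```shell
# Loopback-mount ISO 1 and define the initial lpp_source from it.
loopmount -i AIX_7200-05-10_DVD_1_of_2.iso -o "-V cdrfs -o ro" -m /mnt/iso1
nim -o define -t lpp_source -a server=master \
    -a location=/export/nim/lpp_source/lpp_aix72tl5sp10 \
    -a source=/mnt/iso1 lpp_aix72tl5sp10

# Layer ISO 2's filesets into the same tree, rebuild the .toc, re-validate.
loopmount -i AIX_7200-05-10_DVD_2_of_2.iso -o "-V cdrfs -o ro" -m /mnt/iso2
bffcreate -d /mnt/iso2 -t /export/nim/lpp_source/lpp_aix72tl5sp10 all
inutoc /export/nim/lpp_source/lpp_aix72tl5sp10
nim -o check lpp_aix72tl5sp10
lsnim -l lpp_aix72tl5sp10 | grep simages   # expect: simages = yes
```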
simages = yes is non-negotiable. Without it, nimadm refuses to use this lpp_source for migration. If the attribute is absent, the lpp_source is incomplete — confirm both ISOs were processed and that inutoc ran successfully against the merged tree.
The SPOT (Shared Product Object Tree) is a bootable mini-image used by NIM operations. Built from the lpp_source.
Tip: Keep the old 7.1 SPOT around until burn-in is complete — you'll need it for the worst-case bos_inst rollback (see RB.3).
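A sketch of the SPOT build from the lpp_source above; the /export/nim/spot location is an assumption:

```shell
# On the NIM master: build the SPOT from the 7.2 lpp_source (15-30 min).
nim -o define -t spot -a server=master \
    -a location=/export/nim/spot \
    -a source=lpp_aix72tl5sp10 \
    spot_aix72tl5sp10
# Review the define output for warnings, then re-validate:
nim -o check spot_aix72tl5sp10
lsnim -l spot_aix72tl5sp10 | grep Rstate   # expect: Rstate = ready for use
```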
nimadm needs scratch space on the NIM master to NFS-export and manipulate the client's cloned rootvg. Must be a separate VG (not rootvg) with at least 2× the client's rootvg size in free PPs.
Sizing: Free GB = FREE PPs × PP SIZE / 1024. Need ≥ 2× the client rootvg used capacity. Skimping here causes phase 7 failures partway through fileset migration — and phase 7 is the longest phase.
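The sizing rule above as plain shell arithmetic, with hypothetical numbers (1170 free PPs at 128 MB per PP, 60 GB of used client rootvg) in place of your lsvg output:

```shell
# Plug in the FREE PPs and PP SIZE from `lsvg nimadmvg` and the client's
# rootvg used capacity; numbers below are illustrative only.
free_pps=1170
pp_size_mb=128
client_rootvg_used_gb=60

free_gb=$(( free_pps * pp_size_mb / 1024 ))
needed_gb=$(( client_rootvg_used_gb * 2 ))

echo "free=${free_gb}GB needed=${needed_gb}GB"
if [ "$free_gb" -ge "$needed_gb" ]; then
    echo "nimadmvg OK"
else
    echo "nimadmvg TOO SMALL"
fi
```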
✓ Verify: nim -o lslpp lpar01 returns data from the client. If it errors with 0042-006 or similar, fix bidirectional name resolution and NIM port reachability before continuing.
Capture everything: Wrap the whole pre-flight in script /tmp/preflight_$(date +%Y%m%d).log on the client.
Stop condition: If lppchk -v reports issues, remediate before migration. A migration that inherits a broken 7.1 fileset state will not produce a clean 7.2 — the breakage is carried into the new build, sometimes amplified.
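The stop-condition check, sketched on the client:

```shell
# On the client: fileset consistency must be clean before migration.
lppchk -v          # no output = clean
lppchk -v -m3      # same check with verbose messages if -v alone is unclear
```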
Anything matched in the second command is worth confirming with the application team before migration. perfagent.tools in particular has historically caused post-migration noise.
| Filesystem | Min free | Why |
|---|---|---|
| / | 100 MB | Bootloader, ODM |
| /usr | 3 GB | 7.2 fileset footprint is materially larger than 7.1 |
| /var | 1 GB | nimadm and migration logs |
| /tmp | 1 GB | installp scratch |
| /opt | 500 MB | Optional product trees |
✓ Verify: hdisk1 shows None in the VG column · bootinfo -s hdisk1 ≥ bootinfo -s hdisk0 · all LVs in lsvg -l rootvg are open/syncd.
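The three disk checks above, sketched on the client:

```shell
lspv                 # hdisk1's VG column must read None
bootinfo -s hdisk0   # size in MB of the live rootvg disk
bootinfo -s hdisk1   # must be >= the hdisk0 figure
lsvg -l rootvg       # every LV open/syncd, no stale partitions
```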
Diff baseline post-migration. Kernel extensions matter especially — anything 3rd-party that doesn't reload on 7.2 is the migration's most common silent-failure mode.
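A sketch of the kernel-extension baseline; /tmp/genkex_pre.txt matches the path used in the command reference table:

```shell
# On the client, pre-migration: snapshot the loaded kernel extensions.
genkex > /tmp/genkex_pre.txt
# Post-migration, diff the live list against the baseline:
#   diff /tmp/genkex_pre.txt <(genkex)
```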
A NIM-resident mksysb gives you the option to recover the LPAR via nim -o bos_inst if both the live rootvg and the migrated clone become unbootable.
Pull, not push: This pulls a mksysb from the client to the master in one operation. Runtime depends on rootvg size and network speed (allow 30-90 min).
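A sketch of the pull, assuming mksysb_lpar01_pre72 as the resource name and /export/nim/mksysb/ as the location (both illustrative):

```shell
# On the NIM master: define-and-create in one step. mk_image=yes makes
# NIM pull the mksysb from the running client.
nim -o define -t mksysb -a server=master \
    -a location=/export/nim/mksysb/lpar01_pre72.mksysb \
    -a source=lpar01 -a mk_image=yes \
    mksysb_lpar01_pre72
lsnim -l mksysb_lpar01_pre72   # confirm the resource exists and is ready
```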
If a NIM-resident mksysb is impractical, take a local mksysb to NFS.
Stale VG metadata: If hdisk1 shows old_rootvg, altinst_rootvg, or another stale VG from a previous operation, clean it first:
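A sketch of the cleanup, assuming the stale VG is named altinst_rootvg (substitute whatever lspv shows):

```shell
# On the client: -X removes only the ODM definition of the alternate VG,
# not the live rootvg data.
alt_rootvg_op -X altinst_rootvg
lspv | grep hdisk1   # VG column should now read None
```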
Do not pre-clone with alt_disk_copy. nimadm creates its own clone of rootvg on hdisk1 — it is the clone operation. Taking alt_disk_copy on top would just consume another disk for no benefit.
✓ Acceptance: lpp_source has simages = yes · SPOT exists with no errors · client object resolves · nimadmvg has ≥ 2× client rootvg in free PPs.
nimadm runs in 12 phases (see step 20). You can preview without committing — phase 1 is the validation phase: it checks all prerequisites without modifying anything on the client. Useful for catching configuration errors before the real run.
-P N: Runs only phase N. Cheap to do, catches misconfigured master/client/lpp_source before you sink hours into a real run.
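The dry run with the worked-example names:

```shell
# On the NIM master: phase-1-only validation; nothing on the client changes.
nimadm -c lpar01 -s spot_aix72tl5sp10 -l lpp_aix72tl5sp10 \
       -j nimadmvg -d hdisk1 -Y -P 1
```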
| Flag | Purpose |
|---|---|
| -c | Client (NIM machine object name) |
| -s | SPOT |
| -l | lpp_source |
| -j | VG on NIM master for working storage |
| -d | Destination disk on the client |
| -Y | Accept software licenses non-interactively |
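The full run with the worked-example names; wrapping it in nohup is a suggestion so a dropped SSH session doesn't kill a multi-hour migration, and the log path matches the one referenced in the rollback notes:

```shell
# On the NIM master: kick off the migration and follow the log.
nohup nimadm -c lpar01 -s spot_aix72tl5sp10 -l lpp_aix72tl5sp10 \
      -j nimadmvg -d hdisk1 -Y \
      > /var/log/nim/nimadm_lpar01_$(date +%Y%m%d).log 2>&1 &
tail -f /var/log/nim/nimadm_lpar01_*.log
```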
Total runtime: 90-180 min depending on rootvg size, network speed, and NIM master CPU.
Common failure modes: insufficient space (phase 7), broken filesets inherited from the client (phase 1 or 7), NFS export problems (phase 4 — check firewalls and /etc/exports on the master). The error at the top of the failure usually states which phase to restart from with -r or -P N.
In a separate session on the client, watch for the alt rootvg appearing and the NFS mount lifecycle.
The nimadm log ends with:
State at this point: the client is still running 7.1 from hdisk0, but is configured to boot 7.2 from hdisk1 on next reboot. The cutover is the reboot in Phase 6.
Follow your application shutdown runbook. Database engines, JVMs, and middleware should be stopped cleanly before reboot — orphan transactions on a 7.1 socket that comes up under 7.2 are exactly the kind of subtle bug that takes weeks to chase.
Have an HMC console open and visible during the reboot — first boot off a migrated rootvg is the moment things go sideways and you want to see LED codes immediately, not via the SSH timeout.
The first boot off the migrated 7.2 rootvg may:
- Prompt for TERM before login — enter vt100 or xterm and continue. Subsequent boots will not prompt.
- Log errpt entries during initial config — review, but expect some noise.

Don't panic: Any of the above on the first boot is expected. Repeated occurrences on the second boot are not.
✓ Acceptance: oslevel at target · lppchk clean · ML found · no inoperative subsystems · bootlist on hdisk1 · hdisk0 shows as old_rootvg · kernel extensions reloaded with no surprising drops.
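The acceptance checklist above, sketched as commands on the client (the genkex diff assumes a shell with process substitution, per the command reference table):

```shell
oslevel -s                       # expect 7200-05-10-XXXX
lppchk -v                        # no output = clean
instfix -i | grep ML             # all maintenance levels should report as found
lssrc -a | grep -i inoperative   # investigate anything unexpected
bootlist -m normal -o            # expect hdisk1
lspv                             # hdisk0 should show old_rootvg
diff /tmp/genkex_pre.txt <(genkex)   # no surprising kernel-extension drops
```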
Hand off to application owners for service validation. Treat the change as in-progress until they sign off. Keep the rollback warm — don't run any of the cleanup steps in Phase 8 yet.
Trigger conditions: client fails to boot off the migrated rootvg · lppchk -v reports broken filesets post-migration · new error classes in errpt linked to the migration · application owners report a regression that cannot be remediated in the maintenance window.
- Select hdisk0 (the original disk) as the boot device → "2. Normal Mode Boot".
- The system comes up on hdisk0 (the original 7.1 rootvg).
- Then follow the RB.1 verification to confirm oslevel -s reports the pre-migration level and the bootlist sticks.
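A sketch of making the rollback stick from a running client:

```shell
# On the client: point the firmware back at the original 7.1 rootvg
# and confirm before rebooting.
bootlist -m normal hdisk0
bootlist -m normal -o   # confirm hdisk0 is now listed first
```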
If neither disk boots:
This requires a 7.1 SPOT on the NIM master in addition to the 7.2 one created in step 04 — keep the old 7.1 SPOT around until burn-in is complete.
Preserve for IBM support:
- /var/log/nim/nimadm_lpar01_*.log on the master
- /var/adm/ras/nim.installp on the client
- errpt -a output from the failed boot attempts

Only after the migrated system has run cleanly and application owners have signed off.
Mirroring is the high-value cleanup option: after burn-in, mirror the new 7.2 rootvg onto hdisk0 so you have redundancy on both disks again. Skip if your storage layer already provides redundancy.
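A sketch of the re-mirror, only after sign-off; the exact sequence may vary with your LVM layout:

```shell
# On the client: free hdisk0, then mirror the live 7.2 rootvg onto it.
alt_rootvg_op -X old_rootvg        # drop the old_rootvg definition
extendvg rootvg hdisk0
mirrorvg rootvg hdisk0             # syncvg runs; wait for syncd state
bosboot -ad /dev/hdisk0            # make the second copy bootable
bootlist -m normal hdisk1 hdisk0
```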
Update the change ticket with completion timestamp and close. You earned it.
| Phase | Common failure | Remediation |
|---|---|---|
| 1 | Client unreachable | Check network, ping, rpcinfo |
| 1 | lpp_source missing simages=yes | Re-add ISO 2 contents, inutoc, nim -o check |
| 1 | Client lppchk -v not clean | Fix broken filesets on 7.1 first |
| 3 | Spare disk not free / wrong size | Confirm hdisk1 has VG=None and size ≥ hdisk0 |
| 4 | NFS mount fails | Firewall between client and master, /etc/exports |
| 7 | "Not enough space in /usr" (alt rootvg) | Increase rootvg size or use larger target disk |
| 7 | Fileset prereq missing in lpp_source | Add missing fileset to lpp_source, inutoc, restart |
| 9 | bosboot fails | Usually disk error; check errpt on client |
| 11 | Cannot wake alt_disk_install | Manual alt_rootvg_op -W -d hdisk1 on client |
Restart options: If nimadm fails mid-run, it can usually be restarted from a specific phase using -r (resume) or -P N (run from phase N). Read the error message at the top of the failure for the suggested restart.
Do not follow this blindly — values (level numbers, hostnames, hdisk names, dates) will change. This is a sequence reference, not a copy-paste recipe.
| Command | Purpose | Notes |
|---|---|---|
| oslevel -s (master) | Confirm NIM master at ≥ target level | Hard gate — 7.1 master can't migrate to 7.2 |
| nim -o define -t lpp_source | Register the AIX 7.2 fileset repository | Phase 2 step 03 — needs simages=yes |
| bffcreate -d ... -t ... | Layer ISO 2 filesets into the lpp_source | After ISO 1 has been defined |
| inutoc <dir> | Build .toc for the lpp_source | Re-run after every directory change |
| nim -o check <resource> | Re-validate an lpp_source or SPOT | Required after inutoc |
| nim -o define -t spot | Build the bootable SPOT mini-image | Phase 2 step 04 — 15-30 min |
| nim -o define -t mksysb -a mk_image=yes | Pull mksysb from client to master | Phase 4 step 14 — preferred backup |
| nimadm -c <client> -P 1 ... | Phase-1-only validation dry run | Cheap insurance before the real run |
| nimadm -c ... -s ... -l ... -j ... -d ... -Y | Run the full migration | Phase 5 step 19 — 90-180 min |
| nimadm -B ... | Do not update the bootlist after the migration completes | Cutover becomes a manual bootlist + reboot |
| bootlist -m normal -o | Confirm bootlist after migration | Should show hdisk1 post-Phase 12 |
| oslevel -s (client, post-reboot) | Confirm migration succeeded | Expect 7200-05-10-XXXX |
| lppchk -v | Verify fileset consistency | MUST be clean pre and post |
| diff /tmp/genkex_pre.txt <(genkex) | Confirm kernel extensions reloaded | Phase 6 step 27 — silent failure mode |
| bootlist -m normal hdisk0 | Roll back to original 7.1 rootvg | RB.1 — old_rootvg becomes live again |
| nim -o bos_inst -a source=mksysb ... | Worst-case mksysb restore | RB.3 — needs 7.1 SPOT preserved |
| alt_rootvg_op -X old_rootvg | Free hdisk0 after burn-in | Phase 8 step 29 — only after sign-off |
| nim -o remove <object> | Delete NIM objects post-cleanup | Phase 8 step 30 |
| nim -o reset <client> | Clear stuck NIM state on a client | Use if a failed nimadm leaves the client wedged |
- simages = yes on the lpp_source is non-negotiable. Without it, nimadm refuses to use the lpp_source. If absent, the lpp_source is incomplete — re-add ISO 2's filesets via bffcreate, run inutoc, and nim -o check.
- Run nimadm -P 1 as a dry run before the real migration. Phase 1 is the validation phase — it catches misconfigured master/client/lpp_source without modifying the client.
- Do not pre-clone with alt_disk_copy. nimadm is the clone operation; layering alt_disk_copy on top just consumes another disk for no benefit.
- The worst-case rollback (bos_inst restore) needs a 7.1 SPOT in addition to the 7.2 one you built.
- Diff genkex pre and post — anything that doesn't reload on 7.2 needs vendor remediation before you sign off.
- Expect a TERM prompt and some errpt entries. Expected on the first boot, not on the second.
- Until application sign-off, the rollback path (old_rootvg on hdisk0) needs to stay warm.