AIX 7.1 → 7.2 Migration · nimadm

NIMADM AIX 7.1 → 7.2
Runbook ID: RB-PATCH-AIX-002
Operation: Version migration · 7.1 → 7.2
Method: nimadm (NIM Alt-Disk Migration)
Source level: 7100-05-12-2336
Target level: 7200-05-10
NIM master: dr_nim_01
Client LPAR: lpar01
Working VG: nimadmvg · ≥ 2× rootvg
Spare disk: hdisk1 · ≥ rootvg
Window: 3-4 hr
nimadm runtime: 90-180 min
Cutover: single reboot
NIM Master: dr_nim_01 · ≥ 7.2 TL5 (must be at or above target) · hosts the lpp_source, SPOT, and nimadmvg working store
    ──▶ nimadm -c
Client LPAR: lpar01 · 7.1 TL5 SP12 → 7.2 TL5 SP10 · rootvg cloned to hdisk1, live system untouched

Worked example: All commands assume lpar01 as the client (already a defined NIM machine object on dr_nim_01), hdisk0 = live 7.1 rootvg, hdisk1 = unassigned spare, lpp_source lpp_aix72tl5sp10, SPOT spot_aix72tl5sp10, working VG nimadmvg, media path /export/nim/media/AIX_7.2_TL5_SP10/. Sub in your own values throughout.

Out of scope: new installations / "preservation" installs · PowerHA / HACMP-clustered nodes (cluster must be quiesced; migrate one node at a time per IBM's PowerHA migration procedure) · LPAR migrations (LPM is unrelated to nimadm). For 7.1 → 7.3 the procedure is identical — use a 7.3 NIM master and target a 7.3 lpp_source.

SP version drift: SP versions evolve. Confirm the latest 7.2 TL5 SP available on IBM Entitled Software Support (ESS) and target that — there is no benefit to migrating to an older SP and then patching forward.

Phase 1 Pre-change planning · T-7 to T-2 days

Third-party software compatibility matters more here than for an SP. Verify against AIX 7.2 TL5:

Component · What to check
TSM / IBM Spectrum Protect client · IBM support matrix for client versions supported on AIX 7.2 TL5 SPx; older clients may need upgrading post-migration
PowerHA / HACMP · Hard version dependency. Do not migrate clustered nodes ad hoc
Oracle, DB2, SAP · Each has its own AIX 7.2 certification matrix
Kernel extensions · genkex shows the loaded list; anything third-party must be re-validated
Java, OpenSSL, OpenSSH · Ship at newer baselines on 7.2
Monitoring / AV agents · ITM, Nagios, Tanium, etc. — confirm with the vendor
Phase 2 NIM master preparation · T-7 to T-1 days

Skip-ahead: If lpp_source and SPOT already exist for another client at the same target level, jump to step 06.

01 Verify NIM master state and level CRITICAL dr_nim_01
oslevel -s          # Must be >= 7200-05-10 (the target client level).
                    # A 7.1 NIM master CANNOT migrate clients to 7.2.
lsnim -l master     # Confirm the master is initialized.
lssrc -ls nimesis   # NIM service must be active.

Hard rule: NIM master must be at or above the target client level. A 7.1 master cannot migrate a client to 7.2 — there's no "upgrade" path mid-flight. If the master is below target, upgrade the master first.
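The hard rule can be made mechanical. A minimal sketch of the gate, using the worked-example levels: `oslevel -s` strings are zero-padded, so a plain text sort orders them chronologically. The master value below is a stand-in so the logic reads anywhere; on dr_nim_01 you would use `master=$(oslevel -s)`.

```shell
#!/bin/sh
# Gate-check sketch -- the stand-in value replaces $(oslevel -s)
# so the comparison logic can be read (and run) anywhere.
target="7200-05-10"
master="7200-05-10-2220"    # stand-in; on the master: master=$(oslevel -s)

# Zero-padded oslevel strings sort chronologically as plain text; if the
# target sorts first (or they are equal), the master is at/above target.
older=$(printf '%s\n%s\n' "$target" "$master" | sort | head -1)
if [ "$older" = "$target" ]; then
    echo "OK: master ($master) is at or above target ($target)"
else
    echo "ABORT: master ($master) is below target ($target) -- upgrade it first"
fi
```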

02 Stage the AIX 7.2 base media on the master CONFIG dr_nim_01

The 7.2 install media from ESS comes as one or more ISOs (typically AIX_7200-05-10_DVD_1_of_2.iso and _2_of_2.iso).

mkdir -p /export/nim/media/AIX_7.2_TL5_SP10
cd /export/nim/media/AIX_7.2_TL5_SP10

# Transfer the ISOs here (scp, sftp, or NFS from your media share)
ls -l
# Expect:
#   AIX_7200-05-10_DVD_1_of_2.iso
#   AIX_7200-05-10_DVD_2_of_2.iso

# Verify checksums against the ESS download page
csum -h SHA256 *.iso

✓ Verify: SHA256 from csum matches the value listed on the ESS download page for each ISO. A bad ISO becomes a corrupt lpp_source — catch it at the doorstep.

03 Define the lpp_source from both ISOs CRITICAL dr_nim_01

The lpp_source is the fileset repository — a directory of .bff files extracted from the install media. ISO 1 builds the initial lpp_source; ISO 2's contents are layered in via bffcreate.

# Create the target directory for the lpp_source
mkdir -p /export/nim/lpp_source/lpp_aix72tl5sp10

# Define the lpp_source from the first ISO
nim -o define -t lpp_source \
    -a server=master \
    -a location=/export/nim/lpp_source/lpp_aix72tl5sp10 \
    -a source=/export/nim/media/AIX_7.2_TL5_SP10/AIX_7200-05-10_DVD_1_of_2.iso \
    -a packages=all \
    lpp_aix72tl5sp10

# Add the contents of ISO 2 to the same lpp_source
mkdir -p /mnt/iso2
loopmount -i /export/nim/media/AIX_7.2_TL5_SP10/AIX_7200-05-10_DVD_2_of_2.iso \
    -o "-V cdrfs -o ro" \
    -m /mnt/iso2
bffcreate -d /mnt/iso2/installp/ppc \
    -t /export/nim/lpp_source/lpp_aix72tl5sp10/installp/ppc \
    all
loopumount -i /export/nim/media/AIX_7.2_TL5_SP10/AIX_7200-05-10_DVD_2_of_2.iso \
    -m /mnt/iso2

# Rebuild the .toc so NIM sees the new filesets
cd /export/nim/lpp_source/lpp_aix72tl5sp10/installp/ppc
inutoc .

# Tell NIM to re-check the lpp_source
nim -o check lpp_aix72tl5sp10
lsnim -l lpp_aix72tl5sp10
# Look for: simages = yes (the lpp_source contains a complete BOS image)

simages = yes is non-negotiable. Without it, nimadm refuses to use this lpp_source for migration. If the attribute is absent, the lpp_source is incomplete — confirm both ISOs were processed and that inutoc ran successfully against the merged tree.

04 Define the SPOT CONFIG dr_nim_01

The SPOT (Shared Product Object Tree) is a bootable mini-image used by NIM operations. Built from the lpp_source.

nim -o define -t spot \
    -a server=master \
    -a location=/export/nim/spot \
    -a source=lpp_aix72tl5sp10 \
    spot_aix72tl5sp10
# SPOT creation takes 15-30 minutes -- it builds a bootable image

lsnim -l spot_aix72tl5sp10
nim -o check spot_aix72tl5sp10
# Expect no errors.

Tip: Keep the old 7.1 SPOT around until burn-in is complete — you'll need it for the worst-case bos_inst rollback (see RB.3).

05 Verify or create the working VG (nimadmvg) CONFIG dr_nim_01

nimadm needs scratch space on the NIM master to NFS-export and manipulate the client's cloned rootvg. Must be a separate VG (not rootvg) with at least 2× the client's rootvg size in free PPs.

lsvg
# If 'nimadmvg' does not exist, create it on a dedicated disk:
#   mkvg -y nimadmvg -s 64 hdiskN

lsvg nimadmvg
# Note FREE PPs and PP SIZE; multiply for free GB.

Sizing: Free GB = FREE PPs × PP SIZE / 1024. Need ≥ 2× the client rootvg used capacity. Skimping here causes phase 7 failures partway through fileset migration — and phase 7 is the longest phase.
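The sizing arithmetic can be done straight off the lsvg output. A sketch, assuming the usual `PP SIZE:` / `FREE PPs:` field layout; the sample text below stands in for piping `lsvg nimadmvg` directly into the awk, so verify the field positions against your own output.

```shell
#!/bin/sh
# Sketch: compute free GB from lsvg's FREE PPs and PP SIZE fields.
# The sample text stands in for: lsvg nimadmvg
lsvg_sample='PP SIZE:        64 megabyte(s)
FREE PPs:       3200 (204800 megabytes)'

echo "$lsvg_sample" | awk '
    /PP SIZE/  { pp_mb   = $3 }   # PP size in MB
    /FREE PPs/ { free_pp = $3 }   # free physical partitions
    END {
        free_gb = free_pp * pp_mb / 1024
        printf "free: %.0f GB\n", free_gb   # compare against 2x client rootvg
    }'
# prints: free: 200 GB
```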

06 Verify or define the client NIM machine object CONFIG dr_nim_01
lsnim -l lpar01
# If "0042-053 lsnim: there is no NIM object named 'lpar01'"
# then define it:
nim -o define -t standalone \
    -a platform=chrp \
    -a netboot_kernel=64 \
    -a if1="find_net lpar01 0" \
    -a cable_type1=N/A \
    lpar01

# Confirm the client is reachable
nim -o lslpp lpar01 | head
# If this returns lpp data, client communication is working.

✓ Verify: nim -o lslpp lpar01 returns data from the client. If it errors with 0042-006 or similar, fix bidirectional name resolution and NIM port reachability before continuing.

Phase 3 Pre-flight checks on the client LPAR · T-0

Capture everything: Wrap the whole pre-flight in script /tmp/preflight_$(date +%Y%m%d).log on the client.

07 Capture current OS level and identity BASELINE lpar01
oslevel -s              # expect: 7100-05-12-2336
oslevel -r
instfix -i | grep ML
uname -a
prtconf | head -30
08 Verify fileset health CRITICAL lpar01
lppchk -v               # MUST return clean
lppchk -c               # checksum verify (optional)

Stop condition: If lppchk -v reports issues, remediate before migration. A migration that inherits a broken 7.1 fileset state will not produce a clean 7.2 — the breakage is carried into the new build, sometimes amplified.

09 Identify obsolete or problem filesets VALIDATE lpar01
lslpp -L | grep -i obsolete
# Any filesets here will be removed by the migration --
# confirm nothing application-critical depends on them.

# Check for filesets known not to migrate cleanly
lslpp -L bos.adt.libm bos.compat.libs perfagent.tools 2>/dev/null

Anything matched in the second command is worth confirming with the application team before migration. perfagent.tools in particular has historically caused post-migration noise.

10 Confirm disk space VALIDATE lpar01
df -g / /usr /var /opt /tmp /home
Filesystem · Min free · Why
/ · 100 MB · Bootloader, ODM
/usr · 3 GB · 7.2 fileset footprint is materially larger than 7.1
/var · 1 GB · nimadm and migration logs
/tmp · 1 GB · installp scratch
/opt · 500 MB · Optional product trees
11 Verify rootvg health and target disk CRITICAL lpar01
lsvg rootvg
lsvg -p rootvg
lsvg -l rootvg
lspv
# Identify the spare disk (e.g. hdisk1) -- must show "None" in
# the VG column and be >= the size of hdisk0.
bootinfo -s hdisk0      # size in MB
bootinfo -s hdisk1      # must be >= hdisk0

✓ Verify: hdisk1 shows None in the VG column · bootinfo -s hdisk1 ≥ bootinfo -s hdisk0 · all LVs in lsvg -l rootvg are open/syncd.
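The disk-eligibility conditions reduce to one scriptable test. A sketch with stand-in values: on the real client the sizes come from `bootinfo -s` and the VG name from the lspv column, and the awk field position noted in the comment is an assumption to check against your lspv layout.

```shell
#!/bin/sh
# Spare-disk eligibility sketch. Stand-in values replace the real AIX
# commands noted in the comments so the logic runs anywhere.
src_mb=140013      # stand-in for: bootinfo -s hdisk0
dst_mb=140013      # stand-in for: bootinfo -s hdisk1
dst_vg="None"      # stand-in for: lspv | awk '$1 == "hdisk1" {print $3}'

if [ "$dst_vg" = "None" ] && [ "$dst_mb" -ge "$src_mb" ]; then
    echo "hdisk1 eligible as nimadm target"
else
    echo "STOP: hdisk1 not eligible (VG=$dst_vg, ${dst_mb}MB vs ${src_mb}MB)"
fi
```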

12 Snapshot bootlist and error log BASELINE lpar01
bootlist -m normal -o  > /tmp/bootlist_pre.txt
bootlist -m service -o >> /tmp/bootlist_pre.txt
errpt | head -30
errpt -a > /tmp/errpt_pre.txt
13 Snapshot running state for post-migration diff BASELINE lpar01
lssrc -a        > /tmp/lssrc_pre.txt
netstat -rn     > /tmp/routing_pre.txt
df -g           > /tmp/df_pre.txt
no -a           > /tmp/no_pre.txt
lsattr -El sys0 > /tmp/sys0_pre.txt
genkex          > /tmp/genkex_pre.txt   # loaded kernel extensions

Diff against these baselines post-migration. Kernel extensions matter especially — any third-party extension that doesn't reload on 7.2 is the migration's most common silent-failure mode.
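The post-migration comparison can be wrapped in a small helper so every snapshot gets the same treatment. A sketch: the helper itself is generic shell, and the commented calls at the bottom are the real pairs from step 13, which only make sense on the client.

```shell
#!/bin/sh
# diff_baseline: capture a command's live output and diff it against the
# pre-migration snapshot file, reporting OK or DIFF per baseline.
diff_baseline() {
    pre=$1; shift
    "$@" > /tmp/_live.$$ 2>/dev/null
    if diff "$pre" /tmp/_live.$$ > /dev/null 2>&1; then
        echo "OK   $pre"
    else
        echo "DIFF $pre (review the full diff manually)"
    fi
    rm -f /tmp/_live.$$
}

# On the client after the 7.2 reboot:
#   diff_baseline /tmp/lssrc_pre.txt   lssrc -a
#   diff_baseline /tmp/no_pre.txt      no -a
#   diff_baseline /tmp/genkex_pre.txt  genkex
```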

Phase 4 Backups
14 Take a NIM-resident mksysb (preferred) CRITICAL dr_nim_01

A NIM-resident mksysb gives you the option to recover the LPAR via nim -o bos_inst if both the live rootvg and the migrated clone become unbootable.

nim -o define -t mksysb \
    -a server=master \
    -a location=/export/nim/mksysb/lpar01_pre72_$(date +%Y%m%d).mksysb \
    -a source=lpar01 \
    -a mk_image=yes \
    lpar01_pre72_$(date +%Y%m%d)

Pull, not push: This pulls a mksysb from the client to the master in one operation. Runtime depends on rootvg size and network speed (allow 30 — 90 min).

15 Take a local mksysb (fallback) FALLBACK lpar01

If a NIM-resident mksysb is impractical, take a local mksysb to NFS.

df -g /backup
mksysb -ipX /backup/$(hostname)_pre72_$(date +%Y%m%d_%H%M).mksysb
16 Verify the spare disk is ready for nimadm VALIDATE lpar01
lspv | grep hdisk1
# Expect: hdisk1  00fXXXXXXXXXXXXX  None

Stale VG metadata: If hdisk1 shows old_rootvg, altinst_rootvg, or another stale VG from a previous operation, clean it first:

# Only if necessary -- destructive to whatever is on hdisk1
alt_rootvg_op -X old_rootvg   # for old_rootvg
# OR
chpv -C hdisk1                # clears the VGDA; USE WITH CARE

Do not pre-clone with alt_disk_copy. nimadm creates its own clone of rootvg on hdisk1 — it is the clone operation. Taking alt_disk_copy on top would just consume another disk for no benefit.

Phase 5 Run the migration
17 Final pre-execution checks on the NIM master VALIDATE dr_nim_01
# Confirm all resources are ready
lsnim -l lpp_aix72tl5sp10 | grep simages
lsnim -l spot_aix72tl5sp10
lsnim -l lpar01
lsvg nimadmvg | grep "FREE PPs"

✓ Acceptance: lpp_source has simages = yes · SPOT exists with no errors · client object resolves · nimadmvg has ≥ 2× client rootvg in free PPs.

18 Optional: nimadm phase 1 dry run PREVIEW dr_nim_01

nimadm runs in 12 phases (see step 20). You can preview without committing: phase 1 is the validation phase and checks all prerequisites without modifying anything on the client. Useful for catching configuration errors before the real run.

nimadm -c lpar01 \
    -s spot_aix72tl5sp10 \
    -l lpp_aix72tl5sp10 \
    -j nimadmvg \
    -d hdisk1 \
    -P 1          # run only phase 1

-P N: Runs only phase N. Cheap to do, catches misconfigured master/client/lpp_source before you sink hours into a real run.

19 Execute the full migration CRITICAL dr_nim_01
# In a captured session
script /var/log/nim/nimadm_lpar01_$(date +%Y%m%d_%H%M).log
nimadm -c lpar01 \
    -s spot_aix72tl5sp10 \
    -l lpp_aix72tl5sp10 \
    -j nimadmvg \
    -d hdisk1 \
    -Y
# Exit the script session when complete
exit
Flag · Purpose
-c · Client (NIM machine object name)
-s · SPOT
-l · lpp_source
-j · VG on the NIM master for working storage
-d · Destination disk on the client
-Y · Accept software licenses non-interactively

Total runtime: 90 — 180 min depending on rootvg size, network speed, and NIM master CPU.

20 nimadm phases — what to expect in the log REFERENCE dr_nim_01
Phase 1
Initialization · prereq check
Phase 2
Client alt_disk_install setup
Phase 3
Clone rootvg hdisk0 → hdisk1 on client
Phase 4
Export client's hdisk1 alt rootvg via NFS to master
Phase 5
Mount alt rootvg on master, configure for migration
Phase 6
Run pre-migration scripts
Phase 7 · longest
Migrate filesets · new BOS installed here
Phase 8
Run post-migration scripts
Phase 9
Bosboot the alt rootvg
Phase 10
Unmount and clean up NFS export
Phase 11
Wake up alt_disk on client · re-import VG metadata
Phase 12
Set client bootlist to alt disk · finalize

Common failure modes: insufficient space (phase 7), broken filesets inherited from the client (phase 1 or 7), NFS export problems (phase 4 — check firewalls and /etc/exports on the master). The error at the top of the failure usually states which phase to restart from with -r or -P N.

21 Monitor progress from the client side MONITOR lpar01

In a separate session on the client, watch for the alt rootvg appearing and the NFS mount lifecycle.

# Watch the alt rootvg appear on hdisk1
while true; do lspv | grep hdisk1; sleep 30; done

# When phase 4 begins, hdisk1 is NFS-mounted FROM the master
df -g | grep nfs

# When phase 12 completes, the bootlist will have been updated
bootlist -m normal -o   # Expect: hdisk1
22 Confirm completion VALIDATE dr_nim_01

The nimadm log ends with:

Bootlist is set to boot from disk hdisk1.
nimadm: Migration completed successfully.

State at this point: the client is still running 7.1 from hdisk0, but is configured to boot 7.2 from hdisk1 on next reboot. The cutover is the reboot in Phase 6.

Phase 6 Cutover and verification
23 Final pre-reboot checks on the client VALIDATE lpar01
# Confirm the bootlist points at the migrated disk
bootlist -m normal -o   # Expect: hdisk1

# Confirm services are still up (we haven't rebooted yet)
lssrc -a | grep -i inoperative
24 Application shutdown CAUTION lpar01

Follow your application shutdown runbook. Database engines, JVMs, and middleware should be stopped cleanly before reboot — orphan transactions on a 7.1 socket that comes up under 7.2 are exactly the kind of subtle bug that takes weeks to chase.

25 Reboot onto the migrated rootvg REBOOT lpar01
sync; sync; sync
shutdown -Fr now

Have an HMC console open and visible during the reboot — first boot off a migrated rootvg is the moment things go sideways and you want to see LED codes immediately, not via the SSH timeout.

26 First boot considerations EXPECTED lpar01

The first boot off the migrated 7.2 rootvg may:

  • Take longer than usual while the kernel re-syncs the ODM and rebuilds device configuration.
  • Drop to a console TERM prompt before login — enter vt100 or xterm and continue. Subsequent boots will not prompt.
  • Generate informational entries in errpt during initial config — review, but expect some noise.

Don't panic: Any of the above on the first boot is expected. Repeated occurrences on the second boot are not.

27 Post-reboot verification VALIDATE lpar01
oslevel -s                 # Expect: 7200-05-10-XXXX (your chosen target SP)
oslevel -r                 # Expect: 7200-05
instfix -i | grep ML       # Expect: All filesets for 7200-05_AIX_ML were found.
lppchk -v                  # MUST return clean
errpt | head -30           # Compare against /tmp/errpt_pre.txt
df -g                      # /usr will have grown notably
diff /tmp/df_pre.txt <(df -g) | head
lssrc -a | grep -i inoperative    # Anything that should be active but isn't?
bootlist -m normal -o      # Expect: hdisk1 (the migrated disk is now your live rootvg)
lspv                       # hdisk0 will show as 'old_rootvg' -- this is your rollback

# Check kernel extensions reloaded
diff /tmp/genkex_pre.txt <(genkex) | head

✓ Acceptance: oslevel at target · lppchk clean · ML found · no inoperative subsystems · bootlist on hdisk1 · hdisk0 shows as old_rootvg · kernel extensions reloaded with no surprising drops.
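The acceptance list lends itself to a single pass/fail sweep. A sketch built around a generic helper: the commented calls map to the step 27 checks and only run on the client, while the demo line at the bottom uses echo as a stand-in so the helper itself can be exercised anywhere.

```shell
#!/bin/sh
# run_check: run a command and report PASS/FAIL on whether its output
# contains the required pattern.
run_check() {
    label=$1; pattern=$2; shift 2
    if "$@" 2>/dev/null | grep -q "$pattern"; then
        echo "PASS $label"
    else
        echo "FAIL $label"
    fi
}

# On the client after reboot, for example:
#   run_check oslevel  '^7200-05-10'  oslevel -s
#   run_check bootlist 'hdisk1'       bootlist -m normal -o
#   run_check ml       'were found'   instfix -i -k 7200-05_AIX_ML
run_check demo 'hello' echo 'hello world'   # prints: PASS demo
```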

28 Application validation handoff HANDOFF lpar01

Hand off to application owners for service validation. Treat the change as in-progress until they sign off. Keep the rollback warm — don't run any of the cleanup steps in Phase 8 yet.

Phase 7 Rollback procedure

Trigger conditions: client fails to boot off the migrated rootvg · lppchk -v reports broken filesets post-migration · new error classes in errpt linked to the migration · application owners report a regression that cannot be remediated in the maintenance window.

RB.1 Rollback while booted on the migrated 7.2 rootvg ROLLBACK lpar01
# Repoint the bootlist to the original 7.1 disk (now 'old_rootvg')
bootlist -m normal hdisk0
bootlist -m normal -o      # verify
shutdown -Fr now

# After reboot
oslevel -s                 # expect: 7100-05-12-2336
lspv                       # rootvg is back on hdisk0; the migrated 7.2 will
                           # show as altinst_rootvg or similar on hdisk1
RB.2 Rollback if the migrated system will not boot ROLLBACK HMC
  1. Open a vterm to the LPAR.
  2. Power off the LPAR (Operations → Shut Down → Immediate).
  3. Activate the LPAR with profile, in SMS mode.
  4. SMS Main Menu → "5. Select Boot Options" → "1. Select Install/Boot Device" → "5. Hard Drive" → select hdisk0 (the original disk) → "2. Normal Mode Boot".
  5. System boots from hdisk0 (original 7.1 rootvg).

Then follow the RB.1 verification to confirm oslevel -s reports the pre-migration level and the bootlist sticks.

RB.3 Worst case: restore from mksysb via NIM DISASTER dr_nim_01

If neither disk boots:

nim -o bos_inst \
    -a source=mksysb \
    -a mksysb=lpar01_pre72_<date> \
    -a spot=spot_aix71tl5sp12 \
    -a no_client_boot=no \
    lpar01

This requires a 7.1 SPOT on the NIM master in addition to the 7.2 one created in step 04 — keep the old 7.1 SPOT around until burn-in is complete.

RB.4 Post-rollback cleanup CLEANUP lpar01
# Once stable on the rolled-back rootvg
alt_rootvg_op -X altinst_rootvg   # removes the failed 7.2 alt rootvg

Preserve for IBM support:

  • /var/log/nim/nimadm_lpar01_*.log on the master
  • /var/adm/ras/nim.installp on the client
  • Any errpt -a output from the failed boot attempts
Phase 8 Cleanup · T+5 to T+14 days, after burn-in

Only after the migrated system has run cleanly and application owners have signed off.

29 Client cleanup — free hdisk0 and archive logs CLEANUP lpar01
# Remove the old 7.1 rootvg clone, freeing hdisk0
alt_rootvg_op -X old_rootvg
lspv    # hdisk0 should now show "None" in the VG column.

# Optional: extend rootvg onto hdisk0 for mirroring or growth
#   extendvg rootvg hdisk0
#   mirrorvg rootvg hdisk0
#   bosboot -ad /dev/hdisk0
#   bootlist -m normal hdisk1 hdisk0

# Archive change logs for audit
tar -cvf /backup/change_$(hostname)_$(date +%Y%m%d).tar \
    /tmp/preflight_*.log \
    /tmp/errpt_pre.txt /tmp/lssrc_pre.txt /tmp/df_pre.txt \
    /tmp/no_pre.txt /tmp/sys0_pre.txt /tmp/genkex_pre.txt

Mirroring is the high-value cleanup option: after burn-in, mirror the new 7.2 rootvg onto hdisk0 so you have redundancy on both disks again. Skip if your storage layer already provides redundancy.

30 NIM master cleanup CLEANUP dr_nim_01
# Remove the pre-migration mksysb (after the retention period)
nim -o remove lpar01_pre72_<date>
rm /export/nim/mksysb/lpar01_pre72_<date>.mksysb

# Keep the lpp_source / SPOT if more clients are pending migration.
# Otherwise:
#   nim -o remove spot_aix72tl5sp10
#   nim -o remove lpp_aix72tl5sp10
#   rm -rf /export/nim/lpp_source/lpp_aix72tl5sp10
#   rm -rf /export/nim/spot/spot_aix72tl5sp10

# Working storage: nimadmvg is cleaned automatically by nimadm on success

Update the change ticket with completion timestamp and close. You earned it.

Appendix A NIM resource quick reference
A.1 NIM commands cheatsheet REFERENCE dr_nim_01
lsnim                         # List all NIM objects
lsnim -l <object>             # Show all attributes of an object
lsnim -t lpp_source           # List objects of type lpp_source
lsnim -t spot                 # List SPOTs
lsnim -t standalone           # List client (standalone) machines
nim -o check <resource>       # Re-validate a resource
nim -o remove <object>        # Delete a NIM object
nim -o reset <client>         # Clear stuck NIM state on a client
nim -o deallocate -a subclass=all <client>
                              # Free all resources allocated to a client
nimadm -c <client> -P 1 ...   # Run only phase 1 (validation)
nimadm -c <client> ...        # Full migration
nimadm -B -c <client> ...     # "BOS only" mode -- skip non-BOS filesets
A.2 nimadm phase failure quick reference REFERENCE dr_nim_01
Phase · Common failure · Remediation
1 · Client unreachable · Check network, ping, rpcinfo
1 · lpp_source missing simages=yes · Re-add ISO 2 contents, inutoc, nim -o check
1 · Client lppchk -v not clean · Fix broken filesets on 7.1 first
3 · Spare disk not free / wrong size · Confirm hdisk1 has VG=None and size ≥ hdisk0
4 · NFS mount fails · Check the firewall between client and master, /etc/exports
7 · "Not enough space in /usr" (alt rootvg) · Increase rootvg size or use a larger target disk
7 · Fileset prereq missing in lpp_source · Add the missing fileset to the lpp_source, inutoc, restart
9 · bosboot fails · Usually a disk error; check errpt on the client
11 · Cannot wake alt_disk_install · Manual alt_rootvg_op -W -d hdisk1 on the client

Restart options: If nimadm fails mid-run, it can usually be restarted from a specific phase using -r (resume) or -P N (run from phase N). Read the error message at the top of the failure for the suggested restart.

A.3 Worked example — single command sequence REFERENCE dr_nim_01

Do not follow this blindly — values (level numbers, hostnames, hdisk names, dates) will change. This is a sequence reference, not a copy-paste recipe.

# On the NIM master · one-time setup
mkdir -p /export/nim/media/AIX_7.2_TL5_SP10
# ... transfer ISOs here ...
nim -o define -t lpp_source \
    -a server=master \
    -a location=/export/nim/lpp_source/lpp_aix72tl5sp10 \
    -a source=/export/nim/media/AIX_7.2_TL5_SP10/AIX_7200-05-10_DVD_1_of_2.iso \
    -a packages=all \
    lpp_aix72tl5sp10
# ... add ISO 2 contents via bffcreate, then inutoc and nim -o check ...
nim -o define -t spot \
    -a server=master \
    -a location=/export/nim/spot \
    -a source=lpp_aix72tl5sp10 \
    spot_aix72tl5sp10

# On the client · pre-flight
oslevel -s
lppchk -v
lspv
bootlist -m normal -o > /tmp/bootlist_pre.txt
errpt -a > /tmp/errpt_pre.txt

# On the NIM master · take the pre-migration mksysb
nim -o define -t mksysb \
    -a server=master \
    -a location=/export/nim/mksysb/lpar01_pre72_$(date +%Y%m%d).mksysb \
    -a source=lpar01 \
    -a mk_image=yes \
    lpar01_pre72_$(date +%Y%m%d)

# On the NIM master · run the migration
script /var/log/nim/nimadm_lpar01_$(date +%Y%m%d_%H%M).log
nimadm -c lpar01 -s spot_aix72tl5sp10 -l lpp_aix72tl5sp10 \
    -j nimadmvg -d hdisk1 -Y
exit

# On the client · reboot and verify
shutdown -Fr now
# ... wait for reboot ...
oslevel -s              # expect 7200-05-10-XXXX
lppchk -v
instfix -i | grep ML
errpt | head

# On the client · cleanup after burn-in
alt_rootvg_op -X old_rootvg
Quick Reference Command Summary
Command · Purpose · Notes
oslevel -s (master) · Confirm NIM master at ≥ target level · Hard gate — 7.1 master can't migrate to 7.2
nim -o define -t lpp_source · Register the AIX 7.2 fileset repository · Phase 2 step 03 — needs simages=yes
bffcreate -d ... -t ... · Layer ISO 2 filesets into the lpp_source · After ISO 1 has been defined
inutoc <dir> · Build .toc for the lpp_source · Re-run after every directory change
nim -o check <resource> · Re-validate an lpp_source or SPOT · Required after inutoc
nim -o define -t spot · Build the bootable SPOT mini-image · Phase 2 step 04 — 15-30 min
nim -o define -t mksysb -a mk_image=yes · Pull mksysb from client to master · Phase 4 step 14 — preferred backup
nimadm -c <client> -P 1 ... · Phase-1-only validation dry run · Cheap insurance before the real run
nimadm -c ... -s ... -l ... -j ... -d ... -Y · Run the full migration · Phase 5 step 19 — 90-180 min
nimadm -B ... · "BOS only" mode — skip non-BOS filesets · Faster but skips third-party filesets
bootlist -m normal -o · Confirm bootlist after migration · Should show hdisk1 post-Phase 12
oslevel -s (client, post-reboot) · Confirm migration succeeded · Expect 7200-05-10-XXXX
lppchk -v · Verify fileset consistency · MUST be clean pre and post
diff /tmp/genkex_pre.txt <(genkex) · Confirm kernel extensions reloaded · Phase 6 step 27 — silent failure mode
bootlist -m normal hdisk0 · Roll back to original 7.1 rootvg · RB.1 — old_rootvg becomes live again
nim -o bos_inst -a source=mksysb ... · Worst-case mksysb restore · RB.3 — needs the 7.1 SPOT preserved
alt_rootvg_op -X old_rootvg · Free hdisk0 after burn-in · Phase 8 step 29 — only after sign-off
nim -o remove <object> · Delete NIM objects post-cleanup · Phase 8 step 30
nim -o reset <client> · Clear stuck NIM state on a client · Use if a failed nimadm leaves the client wedged
⚠ Key Notes