AIX LVM Basics — VG / LV / Filesystem Provisioning

PROVISIONING datavg
VG Name     | datavg
VG Type     | Scalable (-S)
PP Size     | 256 MB
Disk        | hdiskX (~1 TB)
LVs         | test1_lv, test2_lv, test3_lv
LV Size     | 1000 PPs (~250 GB ea)
Filesystems | /test1, /test2, /test3
FS Type     | JFS2 / INLINE log
Phase 1 Identify & Verify the Physical Volume
01 Identify candidate disk and confirm no VG association DISCOVER
# List all known disks and their VG associations
lspv
# Confirm hdiskX is present and recognised by the kernel
lsdev -Cc disk
# Inspect attributes of the candidate disk
lsattr -El hdiskX

Expected state: the candidate disk should appear in lspv with None in the VG column. If it shows an existing VG name, stop — the disk is in use. If it has a stale PVID but no VG (e.g. recycled from a removed system), continue to Step 03.

02 Verify disk size and check for residual VGDA VERIFY
# Disk size in MB — confirm ~1,048,576 (1 TB) or as expected
bootinfo -s hdiskX
# Inspect first sectors for a residual VGDA / LVM signature
# Output of all zeros = clean disk; non-zero = stale LVM metadata
lquerypv -h /dev/hdiskX 80 10
# Optional — confirm the disk is not part of a concurrent or imported VG
lspv hdiskX 2>/dev/null
lqueryvg -p hdiskX -At 2>/dev/null

Sizing math: with PP size 256 MB on a ~1 TB disk, 1,048,576 MB ÷ 256 MB gives 4,096 raw PPs, of which approximately 3,996 are usable (the remainder is reserved for VGDA/VGSA metadata). See the concept primer below for a full breakdown of how PP, LP, and FS sizes relate.
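
A quick sanity check, as a sketch: compute the raw PP count directly from bootinfo output (bootinfo -s reports the size in MB). Usable PPs will land slightly lower once the VGDA/VGSA reservation is taken out.

DISK=hdiskX
PP_MB=256
SIZE_MB=$(bootinfo -s $DISK)
echo "Raw PPs at ${PP_MB} MB: $(( SIZE_MB / PP_MB ))"   # ~4096 on a 1 TB disk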

03 Clear stale PVID if disk is recycled (conditional) CONDITIONAL
# Only run if Step 01 showed a stale PVID with no VG, or Step 02
# lquerypv output had non-zero LVM signature

# Clear the existing PVID
chdev -l hdiskX -a pv=clear
# Re-assign a fresh PVID
chdev -l hdiskX -a pv=yes
# Confirm new PVID is set and disk is unassigned
lspv | grep hdiskX

Skip this step on a brand-new disk. mkvg will assign a fresh PVID automatically when the VG is created. Only clear the PVID when you are certain the disk previously belonged to a system that is no longer running and the data is genuinely abandoned — there is no recovery path after pv=clear.

Concept Primer — Physical Partitions, Logical Partitions & Filesystem Sizing

AIX LVM presents three layers of abstraction between the raw disk and the filesystem. Understanding how they relate is essential to sizing LVs correctly and reading the output of lsvg / lslv.

Term | Description
PV   | Physical Volume — a disk presented to AIX (hdiskX). Belongs to exactly one VG when in use.
PP   | Physical Partition — fixed-size chunk a PV is divided into when it joins a VG. Set by mkvg -s in MB. Cannot be changed after VG creation. All PVs in a VG share the same PP size.
LP   | Logical Partition — the unit an LV is built from. An LP is a logical reference that maps to one or more PPs depending on mirroring.
LV   | Logical Volume — a collection of LPs that presents itself as a block device under /dev/<lvname>. The filesystem (or raw consumer) sits on top of this.

The LP-to-PP mapping is where the mirroring story lives. In an unmirrored VG, each LP maps to one PP — they are 1:1. In a 2-copy mirrored VG, each LP maps to two PPs (one per copy). In a 3-copy mirror, each LP maps to three PPs. This is why mirroring "doubles" or "triples" the PP cost of an LV without changing its usable size.

Unmirrored: LV size = LP count × PP size  (1 LP = 1 PP)
2-copy:     LV size = LP count × PP size  (1 LP = 2 PPs — doubles VG consumption)
3-copy:     LV size = LP count × PP size  (1 LP = 3 PPs — triples VG consumption)

Worked example — this runbook:
1000 LPs × 256 MB PP size = 256,000 MB = 250 GB
Three LVs at 1000 LPs (unmirrored) = 3000 PPs allocated
~3996 PPs available on 1 TB disk = ~996 PPs free (~249 GB)

Filesystem size vs LV size. A JFS2 filesystem on a 250 GB LV will not show 250 GB free in df -g. A small portion is consumed by the JFS2 superblock and (with INLINE log) the transaction log. Expect a few hundred MB of overhead per filesystem — usually negligible at this scale, but worth knowing about when you size LVs near the edge of available VG capacity.
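
To see the overhead concretely, a sketch (assumes this runbook's 256 MB PP size and the standard lslv label layout; verify the field positions on your AIX level):

LPS=$(lslv test1_lv | awk '$1 == "LPs:" { print $2 }')
echo "Raw LV capacity: $(( LPS * 256 )) MB"
df -m /test1    # the total will come in slightly under the raw figure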

Reading lsvg output: TOTAL PPs is the count after VGDA/VGSA reservation; USED PPs is what your LVs have allocated; FREE PPs is what's left for new LVs or expansion. The (megabytes) figure next to each counter is simply that PP count × PP size.
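
A sketch that turns those counters into a free-GB figure, assuming the standard lsvg output layout (check the labels on your AIX level before scripting against them):

FREE_PPS=$(lsvg datavg | sed -n 's/.*FREE PPs: *\([0-9][0-9]*\).*/\1/p')
PP_MB=$(lsvg datavg | sed -n 's/.*PP SIZE: *\([0-9][0-9]*\).*/\1/p')
echo "Free space: $(( FREE_PPS * PP_MB / 1024 )) GB"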

Phase 2 Create the Volume Group
04 Create datavg as a Scalable VG with 256 MB PP size CREATE
Create the VG
# -S     : Scalable VG (modern default; supports up to 1024 PVs and far
#          higher LV/PP maxima than the Original or Big formats)
# -s 256 : Physical Partition size in MB
# -f     : Force creation even if disk looks like it belonged to another VG
# -y     : VG name
mkvg -S -s 256 -f -y datavg hdiskX

Confirm the VG was created and is varied on
# datavg should appear (lsvg -o lists only the varied-on VGs)
lsvg
# VG attributes — TOTAL PPs, FREE PPs, PP SIZE, MAX LVs, etc.
lsvg datavg
# Physical-disk membership of the VG
lsvg -p datavg
# Confirm hdiskX now shows datavg in lspv output
lspv | grep hdiskX

Why Scalable (-S): Scalable VGs supersede the older Original (default) and Big (-B) formats. They support far higher PV/LV/PP counts out of the box and avoid the need for later chvg -B / chvg -G conversions if the VG grows. For any new VG on AIX 6.1 or later, Scalable is the right default unless there is a specific compatibility reason otherwise.

PP size tradeoff: 256 MB is a balanced choice for a 1 TB disk. Smaller PPs (e.g. 32 MB) give finer LV sizing granularity but exhaust the per-PV PP limit faster on large disks. Larger PPs (e.g. 1024 MB) waste space on small LVs. Once set at mkvg time, PP size cannot be changed — the VG must be recreated.
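
To make the tradeoff concrete, a quick loop over candidate PP sizes for a 1,048,576 MB (1 TB) disk. These are raw counts, before metadata reservation:

for pp in 32 64 128 256 512 1024; do
  echo "PP size ${pp} MB -> $(( 1048576 / pp )) raw PPs"
done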

Phase 3 Create the Logical Volumes
05 Create test1_lv, test2_lv, test3_lv — 1000 PPs each CREATE
Create the three LVs
# -t jfs2 : LV type — pairs with the JFS2 filesystem in Phase 4
# -y NAME : LV name
# datavg  : target VG
# 1000    : number of LPs (each 256 MB) -> 250 GB
mklv -t jfs2 -y test1_lv datavg 1000
mklv -t jfs2 -y test2_lv datavg 1000
mklv -t jfs2 -y test3_lv datavg 1000

See the concept primer above for the LP-to-PP relationship. In this unmirrored VG, 1000 LPs at 256 MB = 250 GB and consumes exactly 1000 PPs from the VG's free pool. If this VG were 2-copy mirrored, the same mklv ... 1000 would consume 2000 PPs.

06 Verify LV creation and VG consumption VERIFY
# All LVs in datavg — should list all three at 1000 LPs / 1000 PPs
lsvg -l datavg
# Per-LV detail — type, LP/PP counts, copies, mirror policy
lslv test1_lv
lslv test2_lv
lslv test3_lv
# Confirm VG free PP count has dropped by ~3000
lsvg datavg | grep -E 'TOTAL PPs|FREE PPs|USED PPs'

Expected post-create state: TOTAL PPs ≈ 3996, USED PPs = 3000, FREE PPs ≈ 996. The remaining ~249 GB of free space is what makes the expansion in Phase 6 possible without adding more disks.
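
A scriptable version of that check, as a sketch assuming the standard lsvg output layout:

USED=$(lsvg datavg | sed -n 's/.*USED PPs: *\([0-9][0-9]*\).*/\1/p')
if [ "$USED" -eq 3000 ]; then
  echo "PP consumption as expected (3 x 1000)"
else
  echo "WARNING: USED PPs = $USED, expected 3000"
fi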

Phase 4 Create the JFS2 Filesystems
07 crfs — JFS2 with INLINE log on each LV CREATE
Create the three filesystems
# -v jfs2 : filesystem type
# -d LV   : backing logical volume
# -m /mountpoint : mountpoint (created automatically if missing)
# -A yes  : auto-mount at boot (entry added to /etc/filesystems)
# -a logname=INLINE : inline log inside the LV (no separate jfs2log LV)
crfs -v jfs2 -d test1_lv -m /test1 -A yes -a logname=INLINE
crfs -v jfs2 -d test2_lv -m /test2 -A yes -a logname=INLINE
crfs -v jfs2 -d test3_lv -m /test3 -A yes -a logname=INLINE

Verify /etc/filesystems entries were created
lsfs /test1 /test2 /test3
# Or inspect the stanza directly (AIX grep -p prints the whole paragraph)
grep -p '^/test1:' /etc/filesystems

INLINE log vs separate jfs2log: INLINE places the JFS2 transaction log inside the data LV — simpler to manage, no separate logform step, and the log grows with the filesystem on chfs. A dedicated jfs2log LV (shared across multiple filesystems) can offer marginal performance gains on heavy-write workloads, but it is a shared single point of failure that must be mirrored and managed separately. INLINE is the modern default for general-purpose filesystems.
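
To confirm which log a given filesystem actually uses, a sketch (an INLINE-log stanza carries log = INLINE, while a separate jfs2log shows a /dev/loglvNN device; lsfs -q output details vary by AIX level):

grep -p '^/test1:' /etc/filesystems
lsfs -q /test1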

Phase 5 Mount and Verify
08 Mount the filesystems and confirm available space MOUNT
Mount each filesystem
mount /test1
mount /test2
mount /test3

Verify mounted state and free space
# All three should show mounted with ~250 GB available
# (slightly less than 250 GB due to JFS2 superblock + INLINE log overhead)
df -g /test1 /test2 /test3
# Confirm mount table
mount | grep -E '/test[123]'
# Confirm permissions are crfs default — root:system 755
ls -ld /test1 /test2 /test3

Reboot-persistence sanity check (no reboot required)
# Validate that /etc/filesystems entries are syntactically correct
# and would auto-mount cleanly. lsfs reads /etc/filesystems directly.
lsfs -q /test1 /test2 /test3

If mount fails: common causes are (1) the mountpoint already contains data — ls -la /testN before mounting; (2) /etc/filesystems stanza was malformed — check with lsfs; (3) the LV is being held by another process. Do not reboot to clear a mount issue — fix the stanza in /etc/filesystems first or the host may fail to come up cleanly.
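
A quick triage sequence for those three causes, as a sketch using /test1 as the example mountpoint:

ls -la /test1          # (1) pre-existing content under the mountpoint?
lsfs /test1            # (2) stanza missing or malformed?
fuser /dev/test1_lv    # (3) anything holding the LV device open?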

Phase 6 Bonus — Filesystem Expansion Reference
09 chfs — grow an LV and its filesystem in one command REFERENCE
Confirm there is free space in the VG before growing
# 'FREE PPs' alone matches both the PP count and its (megabytes) figure
lsvg datavg | grep 'FREE PPs'

Grow /test1 by +10 GB (LV + FS in one step)
# size=+10G expands the filesystem and underlying LV by 10 GB.
# size=10G (no +) would set absolute size to 10 GB — which would shrink it.
# Always use the + prefix when growing.
chfs -a size=+10G /test1

Verify new size
df -g /test1
lslv test1_lv | grep -E 'LPs|PPs'

Alternative — grow LV only (manual, two-step)
# Use this if you want to extend the LV without immediately expanding the FS
extendlv test1_lv 40        # add 40 LPs (~10 GB)
chfs -a size=+10G /test1    # grow FS to match

Shrinking JFS2 (online)
# JFS2 supports online shrink — JFS (legacy) does not.
# size=-5G reduces the filesystem by 5 GB. Check lslv afterwards to see
# whether the LP count dropped on your AIX level before counting on the
# reclaimed PPs.
chfs -a size=-5G /test1

Watch the + sign. chfs -a size=10G sets absolute size to 10 GB. On a 250 GB filesystem, this is a destructive shrink operation and chfs will refuse only if data would be lost — but on a near-empty filesystem it will proceed silently. Always use size=+N when growing and size=-N when shrinking. Never use bare size=N without thinking about it.
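
One way to make the + sign non-optional is a tiny wrapper; grow_fs below is hypothetical (not an AIX command), shown as a sketch:

grow_fs () {
  fs=$1; delta=$2
  case $delta in
    +*) chfs -a size=$delta "$fs" ;;
    *)  echo "grow_fs: refusing '$delta': growth must be written as +N" >&2
        return 1 ;;
  esac
}
grow_fs /test1 +10G    # OK
grow_fs /test1 10G     # refused: would be an absolute (shrink) size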

Why chfs -a size=+N "just works": JFS2 with INLINE log makes filesystem expansion a single atomic operation — chfs calls extendlv on the underlying LV, then resizes the filesystem and INLINE log in place. With a separate jfs2log LV, only the data LV grows; the log remains its original size and may need separate management.

Phase 7 VG State Management — varyoffvg / varyonvg
10 Take a VG offline and bring it back online STATE
When to varyoff a VG
# - Maintenance work on the underlying disk(s)
# - Before unmapping shared / SAN storage
# - Required step before exportvg (Phase 8)
# - Releasing reserves on shared storage in HA scenarios

Pre-flight — all filesystems must be unmounted
# varyoffvg will refuse if any LV in the VG has an active mount or is open.
# Identify all mounted filesystems on the VG:
lsvgfs datavg
# Unmount each one
umount /test1
umount /test2
umount /test3
# Confirm nothing is still holding an LV open
fuser -c /test1 /test2 /test3    # expect no output

Take the VG offline
varyoffvg datavg
# Confirm VG is no longer active
lsvg -o    # lists only varied-on VGs
lsvg       # lists all known VGs (including offline)

Bring the VG back online
varyonvg datavg
# Confirm active state
lsvg -o | grep datavg

Re-mount the filesystems
mount /test1
mount /test2
mount /test3
df -g /test1 /test2 /test3

varyoffvg is fail-safe by default. If any LV is open or any filesystem is mounted, varyoffvg will refuse and report the offending LV. Don't be tempted to use forceful workarounds — fix the open handle (umount the FS, kill the holding process) and retry cleanly.

varyonvg -f exists but is for recovery scenarios — quorum loss, partial VGDA availability after disk failure. Don't use it routinely; if a normal varyonvg fails, investigate why before reaching for -f.

Phase 8 VG Portability — exportvg / importvg
11 Move a VG between hosts (e.g. SAN reassignment, LPAR migration) PORTABILITY
When to export / import a VG
# - Migrating data disks between LPARs / hosts
# - Reassigning SAN LUNs to a different host
# - Recovering data from a disk lifted out of a decommissioned system
#
# exportvg removes the VG definition from the SOURCE host's ODM.
# It does NOT touch any data on disk. The VGDA on the disk itself
# remains intact and is what importvg reads on the target host.

════ ON SOURCE HOST ════

# 1. Unmount all filesystems in the VG (same as Phase 7)
umount /test1 /test2 /test3
# 2. Vary off the VG
varyoffvg datavg
# 3. Capture VG metadata for reference (filesystem stanzas, LV layout)
#    Useful if anything goes wrong on the target side.
lsvg -l datavg > /tmp/datavg-lv-layout.txt
grep -p '^/test' /etc/filesystems > /tmp/datavg-fs-stanzas.txt
# 4. Export the VG (removes ODM entries on source only)
exportvg datavg
# 5. Confirm VG is gone from this host's view
lsvg                  # datavg should NOT appear
lspv | grep hdiskX    # PVID present, VG column = "None"
# 6. (If SAN-attached) — unmap LUN from this host, remap to target
#    (If physical)     — physically move the disk
rmdev -dl hdiskX      # cleanly remove device after unmap

════ ON TARGET HOST ════

# 1. Discover the new disk
cfgmgr -v
# 2. Identify the disk by its PVID (NOT by hdisk number — numbering
#    differs between hosts). Source-host hdiskX may be hdiskY here.
lspv    # find the PVID matching the source-host record
# 3. Import the VG — importvg reads the VGDA from disk and recreates
#    the ODM entries plus /etc/filesystems stanzas for any FSes.
#    -y NAME : VG name on this host (usually keep the same)
importvg -y datavg hdiskY
# 4. Confirm import succeeded
lsvg | grep datavg    # datavg listed
lsvg -l datavg        # LVs and FSes recreated
grep -p '^/test' /etc/filesystems
# 5. importvg auto-varies the VG on by default. Confirm:
lsvg -o | grep datavg
# 6. Mount the filesystems
mount /test1
mount /test2
mount /test3
df -g

importvg uses PVID, not hdisk number. The source-host hdiskX may show up as hdiskY (or any other number) on the target. Always verify by PVID, not device name. importvg only needs one PV from a multi-PV VG — it will find the others automatically via the VGDA.
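
A sketch of the PVID lookup on the target host. The PVID value here is a placeholder; substitute the one recorded from the source host's lspv output:

PVID=00cXXXXXXXXXXXXX    # placeholder: copy the real PVID from the source host
lspv | awk -v p="$PVID" '$2 == p { print $1 }'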

Mountpoint collisions on import: if /test1 already exists on the target host with content, the mount will overlay it (hiding the existing files until unmount). If the target has its own filesystem at the same path, you'll need to plan a rename — either alter the source-host stanzas before export, or edit /etc/filesystems on the target after import and before mounting.
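
A pre-mount collision check, as a sketch over this runbook's mountpoints:

for mp in /test1 /test2 /test3; do
  if [ -d "$mp" ] && [ -n "$(ls -A "$mp" 2>/dev/null)" ]; then
    echo "WARNING: $mp already exists and is non-empty on this host"
  fi
done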

exportvg does NOT delete data. It removes the ODM entries only. The VGDA on disk is untouched. This means: (1) you can re-import on the same host with importvg if export was done in error; (2) never use exportvg as a "delete everything" command — use reducevg + chpv -C + disk wipe for that.

Quick Reference — Command Summary
Command | Purpose | Notes
lspv | List PVs and VG associations | Candidate disk should show "None"
bootinfo -s hdiskX | Disk size in MB | Sanity-check size before mkvg
lquerypv -h /dev/hdiskX 80 10 | Inspect for residual VGDA | All zeros = clean
chdev -l hdiskX -a pv=clear | Clear stale PVID | Recycled disks only — destructive
mkvg -S -s 256 -f -y datavg hdiskX | Create Scalable VG, 256 MB PPs | PP size cannot be changed later
lsvg datavg / lsvg -p datavg | VG attributes / PV membership | Confirm TOTAL/FREE PPs
mklv -t jfs2 -y NAME datavg 1000 | Create 250 GB JFS2 LV | 1000 LPs × 256 MB
lsvg -l datavg | List all LVs in VG | Verify post-create state
crfs -v jfs2 -d LV -m /mp -A yes -a logname=INLINE | Create JFS2 FS with inline log | Auto-mount stanza added to /etc/filesystems
lsfs /mp | Inspect FS stanza | Confirms /etc/filesystems is correct
mount /mp | Mount filesystem | Reads /etc/filesystems
df -g /mp | Verify free space in GB | Slight overhead vs raw LP size
chfs -a size=+10G /mp | Grow LV + FS together | Always use + prefix
extendlv NAME 40 | Grow LV only (40 LPs) | Pair with chfs to grow FS
lsvgfs datavg | List filesystems on a VG | Useful pre-varyoff check
varyoffvg datavg | Take VG offline | All FSes must be unmounted first
varyonvg datavg | Bring VG online | -f for recovery only
lsvg -o | List only varied-on VGs | Quick state check
exportvg datavg | Remove VG ODM entries (source) | Does NOT delete data on disk
importvg -y datavg hdiskY | Import VG on target host | Auto-varies on; uses PVID, not hdisk#
⚠ Key Notes