
Storage Management

Configure and manage storage in Proxmox VE for optimal performance and reliability

Proxmox VE supports various storage types and configurations. Proper storage management is essential for performance, reliability, and data protection.

Storage Types Overview

Choose storage types based on your performance requirements, budget, and use case scenarios.

Local Storage Types

  • Directory: Simple file-based storage on local filesystem
  • LVM: Logical Volume Manager for block devices
  • LVM-Thin: Thin provisioning with LVM
  • ZFS: Advanced filesystem with built-in features

Network Storage Types

  • NFS: Network File System for shared storage
  • CIFS/SMB: Windows-compatible network shares
  • iSCSI: Block-level network storage
  • Ceph: Distributed storage cluster
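Whichever backends you choose, the `pvesm` tool shows what is currently configured on a node; a quick sketch (run on a PVE host):

```shell
# List all configured storage backends with status and usage
pvesm status

# Restrict the list to storage that can hold VM disk images
pvesm status --content images
```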

Local Storage Configuration

Directory Storage

  1. Datacenter → Storage → Add → Directory
  2. Configure settings:
    • ID: local-backup
    • Directory: /mnt/backup
    • Content: Select appropriate types
    • Nodes: Select target nodes

Edit /etc/pve/storage.cfg:

dir: local-backup
    path /mnt/backup
    content backup,iso,vztmpl
    nodes proxmox-node1
    prune-backups keep-last=3

LVM Configuration

Create LVM storage for high-performance VM disks:

# Create physical volume
pvcreate /dev/sdb

# Create volume group
vgcreate vm-storage /dev/sdb

# Add to Proxmox storage configuration
pvesm add lvm vm-storage --vgname vm-storage --content images

ZFS Configuration

ZFS requires adequate RAM (minimum 8GB recommended) and benefits from SSD caching for optimal performance.
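Because the ZFS ARC cache uses up to half of RAM by default, capping it is a common tweak on virtualization hosts. A minimal sketch, assuming an 8 GiB cap (the `zfs_arc_max` module option and modprobe path are standard on PVE hosts; the 8 GiB figure is an example to adjust):

```shell
# Compute an 8 GiB ARC limit in bytes
ARC_MAX=$((8 * 1024 * 1024 * 1024))

# This is the line to place in /etc/modprobe.d/zfs.conf
echo "options zfs zfs_arc_max=${ARC_MAX}"

# After editing the file, apply with:
#   update-initramfs -u   (then reboot)
```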

# Create a single-disk ZFS pool
zpool create -f vm-pool /dev/sdb

# Or create a mirrored pool for redundancy
zpool create -f vm-pool mirror /dev/sdb /dev/sdc

# Or create a RAIDZ pool (minimum 3 disks)
zpool create -f vm-pool raidz /dev/sdb /dev/sdc /dev/sdd

# Add the pool to Proxmox
pvesm add zfspool vm-pool --pool vm-pool --content images,rootdir

# Enable compression (cheap, almost always worthwhile)
zfs set compression=lz4 vm-pool

# Deduplication is RAM-hungry; enable only with ample memory
zfs set dedup=on vm-pool

# Optimize for VM workloads
zfs set recordsize=64K vm-pool
zfs set primarycache=metadata vm-pool

Network Storage Configuration

NFS Storage

On NFS server:

# Install NFS server
apt update && apt install nfs-kernel-server

# Create export directory
mkdir -p /srv/nfs/proxmox

# Configure exports
echo '/srv/nfs/proxmox 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports

# Apply configuration
exportfs -ra
systemctl restart nfs-kernel-server
# Add NFS storage to Proxmox
pvesm add nfs nfs-storage --server 192.168.1.200 --export /srv/nfs/proxmox --content backup,iso,vztmpl,images

Or via web interface:

  1. Datacenter → Storage → Add → NFS
  2. Configure NFS server details

Ceph Configuration

Ceph provides distributed storage with high availability and scalability, ideal for cluster environments.

Install Ceph

# Install Ceph packages
pveceph install --version quincy

# Initialize Ceph cluster
pveceph init --network 192.168.1.0/24

# Create monitors (on each node)
pveceph mon create

# Create manager
pveceph mgr create

Configure OSDs

# Create OSD on each storage disk
pveceph osd create /dev/sdb

# Check cluster status
ceph status

Add Ceph Storage

# Create RBD pool
pveceph pool create vm-pool --size 3 --min_size 2

# Add to Proxmox storage
pvesm add rbd ceph-storage --pool vm-pool --content images,rootdir

Storage Performance Optimization

Disk Scheduling

# Set scheduler for SSDs (modern multi-queue kernels use 'none' instead of 'noop')
echo none > /sys/block/sda/queue/scheduler

# Permanent configuration
echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"' > /etc/udev/rules.d/60-ssd-scheduler.rules
# Set scheduler for HDDs ('mq-deadline' on modern kernels)
echo mq-deadline > /sys/block/sdb/queue/scheduler

# Permanent configuration
echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="mq-deadline"' > /etc/udev/rules.d/60-hdd-scheduler.rules

VM Disk Configuration

Optimize VM disk performance:

  • High performance: cache=writeback with iothread enabled (fastest; recent writes may be lost on a host crash)
  • Balanced: cache=none (Proxmox default; safe with good performance)
  • Safety first: cache=directsync or writethrough (every write reaches disk before it is acknowledged)
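These profiles map to disk options set with `qm set`; a hedged sketch (VMID 100 and the volume name are placeholders for your setup, and `iothread=1` requires the VirtIO SCSI single controller):

```shell
# High performance: writeback cache plus a dedicated I/O thread
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 vm-storage:vm-100-disk-0,cache=writeback,iothread=1

# Balanced: bypass the host page cache (Proxmox default)
qm set 100 --scsi0 vm-storage:vm-100-disk-0,cache=none,iothread=1

# Safety first: synchronous writes all the way to disk
qm set 100 --scsi0 vm-storage:vm-100-disk-0,cache=directsync
```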

Backup Strategies

Proxmox Backup Server (PBS)

PBS provides deduplication, encryption, and incremental backups for optimal storage efficiency.

# Add PBS storage
pvesm add pbs pbs-storage --server pbs.example.com --username backup@pbs --password secret --datastore main

Traditional Backup Methods

# Backup single VM
vzdump 100 --storage local-backup --mode snapshot

# Backup all VMs
vzdump --all --storage nfs-backup --compress gzip

Create backup jobs via web interface:

  1. Datacenter → Backup
  2. Add backup job
  3. Configure schedule and retention
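The same job can be expressed on the command line with an explicit retention policy; a sketch (the storage name `local-backup` comes from the directory example earlier):

```shell
# Back up all guests, compress with zstd, and prune old archives
vzdump --all --storage local-backup --compress zstd \
    --prune-backups keep-daily=7,keep-weekly=4,keep-monthly=3
```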

Storage Monitoring

Check Storage Usage

# Overall storage usage
df -h

# ZFS pool status
zpool status
zfs list

# LVM information
vgs
lvs
pvs

# Ceph cluster health
ceph df
ceph osd df

Performance Monitoring

# I/O statistics
iostat -x 1

# Real-time disk usage
iotop

# Storage performance testing
fio --name=test --ioengine=libaio --rw=randrw --bs=4k --numjobs=4 --size=1G --runtime=60 --group_reporting

Troubleshooting Common Issues

Storage Full

# Clean old backups
find /var/lib/vz/dump -name "*.vma*" -mtime +30 -delete

# Rescan and register unreferenced disk images as "unused" entries
qm rescan --vmid 100

# ZFS cleanup
zfs destroy vm-pool/vm-100-disk-0@snapshot-name

Performance Issues

Always test storage changes in a non-production environment first.

  1. Check disk health: smartctl -a /dev/sda
  2. Monitor I/O wait: top (look for high %wa)
  3. Verify network storage connectivity: ping nfs-server
  4. Check filesystem errors: dmesg | grep -i error

Best Practices

  • Separate storage types: OS, VMs, backups on different storage
  • Regular monitoring: Set up alerts for storage usage
  • Backup testing: Regularly test backup restoration
  • Performance baselines: Establish and monitor performance metrics
  • Capacity planning: Monitor growth trends and plan accordingly
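The monitoring point above can be automated with a small cron-friendly script; a sketch (the 80% threshold and the mount list are assumptions to adjust for your environment):

```shell
#!/bin/sh
# Warn when a mount point exceeds a usage threshold (cron-friendly sketch).
# THRESHOLD and MOUNTS are example values; adjust for your environment.
THRESHOLD=80
MOUNTS="/ /var/lib/vz"

check_mount() {
    # df -P gives stable one-line-per-filesystem output; strip the '%' sign
    pct=$(df -P "$1" | awk 'NR==2 {gsub(/%/,"",$5); print $5}')
    if [ "$pct" -ge "$THRESHOLD" ]; then
        echo "WARNING: $1 is ${pct}% full"
    fi
}

for m in $MOUNTS; do
    [ -d "$m" ] && check_mount "$m"
done
```

Hook it into cron (e.g. hourly) and pipe the output to mail or your alerting system.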

Proper storage configuration significantly impacts overall Proxmox VE performance and reliability.