Storage Management
Configure and manage storage in Proxmox VE for optimal performance and reliability
Proxmox VE supports various storage types and configurations. Proper storage management is essential for performance, reliability, and data protection.
Storage Types Overview
Choose storage types based on your performance requirements, budget, and use case scenarios.
Local Storage Types
- Directory: Simple file-based storage on local filesystem
- LVM: Logical Volume Manager for block devices
- LVM-Thin: Thin provisioning with LVM
- ZFS: Advanced filesystem with built-in features
Network Storage Types
- NFS: Network File System for shared storage
- CIFS/SMB: Windows-compatible network shares
- iSCSI: Block-level network storage (a brief pvesm sketch for both follows this list)
- Ceph: Distributed storage cluster
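NFS and Ceph are configured in detail below; CIFS/SMB and iSCSI can be attached with pvesm in much the same way. A minimal sketch, using hypothetical server addresses, share names, credentials, and target IQNs:
# Add a CIFS/SMB share (server, share, and credentials are placeholders)
pvesm add cifs smb-storage --server 192.168.1.201 --share proxmox --username backup --password secret --content backup,iso
# Add an iSCSI target (portal and IQN are placeholders)
pvesm add iscsi iscsi-storage --portal 192.168.1.202 --target iqn.2024-01.com.example:storage --content images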
Local Storage Configuration
Directory Storage
- Datacenter → Storage → Add → Directory
- Configure settings:
  - ID: local-backup
  - Directory: /mnt/backup
  - Content: Select appropriate types
  - Nodes: Select target nodes
Or edit /etc/pve/storage.cfg directly:
dir: local-backup
path /mnt/backup
content backup,iso,vztmpl
nodes proxmox-node1
prune-backups keep-last=3
LVM Configuration
Create LVM storage for high-performance VM disks:
# Create physical volume
pvcreate /dev/sdb
# Create volume group
vgcreate vm-storage /dev/sdb
# Add to Proxmox storage configuration
pvesm add lvm vm-storage --vgname vm-storage --content images
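LVM-Thin (listed under local storage types above) adds thin provisioning on top of a volume group. A minimal sketch, assuming the vm-storage volume group from the previous step; the pool name "data" and the size are placeholders:
# Carve a thin pool out of the volume group
lvcreate -L 100G --thinpool data vm-storage
# Register it with Proxmox as thin-provisioned storage
pvesm add lvmthin vm-thin --vgname vm-storage --thinpool data --content images,rootdir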
ZFS Configuration
ZFS requires adequate RAM (a minimum of 8 GB is commonly recommended) and benefits from SSD caching for optimal performance.
# Create ZFS pool
zpool create -f vm-pool /dev/sdb
# Add to Proxmox
pvesm add zfspool vm-pool --pool vm-pool --content images,rootdir
# Create mirrored ZFS pool
zpool create -f vm-pool mirror /dev/sdb /dev/sdc
# Enable lz4 compression (low overhead, generally recommended)
zfs set compression=lz4 vm-pool
# Enable deduplication (caution: dedup is very RAM-intensive; leave it off unless you have measured a benefit)
zfs set dedup=on vm-pool
# Create RAIDZ pool (minimum 3 disks)
zpool create -f vm-pool raidz /dev/sdb /dev/sdc /dev/sdd
# Optional tuning for VM workloads (benchmark before and after; the defaults are often fine)
zfs set recordsize=64K vm-pool
zfs set primarycache=metadata vm-pool
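The SSD caching mentioned above can be added to an existing pool. A minimal sketch with placeholder NVMe device names:
# Add an SSD as an L2ARC read cache
zpool add vm-pool cache /dev/nvme0n1
# Add a dedicated log device (SLOG) to accelerate synchronous writes
zpool add vm-pool log /dev/nvme0n2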
Network Storage Configuration
NFS Storage
On NFS server:
# Install NFS server
apt update && apt install nfs-kernel-server
# Create export directory
mkdir -p /srv/nfs/proxmox
# Configure exports
echo '/srv/nfs/proxmox 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
# Apply configuration
exportfs -ra
systemctl restart nfs-kernel-server
# Add NFS storage to Proxmox
pvesm add nfs nfs-storage --server 192.168.1.200 --export /srv/nfs/proxmox --content backup,iso,vztmpl,images
Or via web interface:
- Datacenter → Storage → Add → NFS
- Configure NFS server details
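Whichever method you use, confirm the new storage is active before relying on it; for example:
# Show status of all configured storages
pvesm status
# List the contents of the new NFS storage
pvesm list nfs-storage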
Ceph Configuration
Ceph provides distributed storage with high availability and scalability, ideal for cluster environments.
Install Ceph
# Install Ceph packages
pveceph install --version quincy
# Initialize Ceph cluster
pveceph init --network 192.168.1.0/24
# Create monitors (on each node)
pveceph mon create
# Create manager
pveceph mgr create
Configure OSDs
# Create OSD on each storage disk
pveceph osd create /dev/sdb
# Check cluster status
ceph status
Add Ceph Storage
# Create RBD pool
pveceph pool create vm-pool --size 3 --min_size 2
# Add to Proxmox storage
pvesm add rbd ceph-storage --pool vm-pool --content images,rootdir
Storage Performance Optimization
Disk Scheduling
# Set scheduler for SSDs (modern multi-queue kernels use "none"; the legacy name was "noop")
echo none > /sys/block/sda/queue/scheduler
# Permanent configuration
echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"' > /etc/udev/rules.d/60-ssd-scheduler.rules
# Set scheduler for HDDs ("mq-deadline" replaces the legacy "deadline")
echo mq-deadline > /sys/block/sdb/queue/scheduler
# Permanent configuration
echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="mq-deadline"' > /etc/udev/rules.d/60-hdd-scheduler.rules
VM Disk Configuration
Optimize VM disk performance:
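A minimal sketch of common per-VM disk tuning via qm; VM ID 100 and the volume name are placeholders, and each option should be checked against your Proxmox VE version:
# Use the VirtIO SCSI single controller so each disk gets its own I/O thread queue
qm set 100 --scsihw virtio-scsi-single
# Attach the disk with an I/O thread, no host page cache, and discard for thin storage
qm set 100 --scsi0 vm-pool:vm-100-disk-0,iothread=1,cache=none,discard=on,ssd=1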
Backup Strategies
Proxmox Backup Server (PBS)
PBS provides deduplication, encryption, and incremental backups for optimal storage efficiency.
# Add PBS storage
pvesm add pbs pbs-storage --server pbs.example.com --username backup@pbs --password secret --datastore main
Traditional Backup Methods
# Backup single VM
vzdump 100 --storage local-backup --mode snapshot
# Backup all VMs
vzdump --all --storage nfs-backup --compress gzip
Create backup jobs via web interface:
- Datacenter → Backup
- Add backup job
- Configure schedule and retention
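Retention can also be set per storage from the CLI; a sketch using the local-backup storage defined earlier, with placeholder keep values:
# Keep the last 7 backups plus weekly and monthly history
pvesm set local-backup --prune-backups keep-last=7,keep-weekly=4,keep-monthly=6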
Storage Monitoring
Check Storage Usage
# Overall storage usage
df -h
# ZFS pool status
zpool status
zfs list
# LVM information
vgs
lvs
pvs
# Ceph cluster health
ceph df
ceph osd df
Performance Monitoring
# I/O statistics
iostat -x 1
# Real-time disk usage
iotop
# Storage performance testing
fio --name=test --ioengine=libaio --rw=randrw --bs=4k --numjobs=4 --size=1G --runtime=60 --group_reporting
Troubleshooting Common Issues
Storage Full
# Clean old backups
find /var/lib/vz/dump -name "*.vma*" -mtime +30 -delete
# Find orphaned disk images and register them in the VM config as "unused" (they can then be removed via the GUI)
qm rescan --vmid 100
# ZFS cleanup: remove an old snapshot
zfs destroy vm-pool/vm-100-disk-0@snapshot-name
Performance Issues
Always test storage changes in a non-production environment first.
- Check disk health: smartctl -a /dev/sda
- Monitor I/O wait: top (look for high %wa)
- Verify network storage connectivity: ping nfs-server
- Check filesystem errors: dmesg | grep -i error
Best Practices
- Separate storage types: OS, VMs, backups on different storage
- Regular monitoring: Set up alerts for storage usage
- Backup testing: Regularly test backup restoration (see the sketch after this list)
- Performance baselines: Establish and monitor performance metrics
- Capacity planning: Monitor growth trends and plan accordingly
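For the backup-testing point above, a restore to a spare VM ID can be scripted; a sketch with a placeholder archive name and VM ID 999:
# Restore a backup archive to an unused VM ID for verification
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst 999 --storage vm-pool
# Remove the test VM once verified
qm destroy 999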
Proper storage configuration significantly impacts overall Proxmox VE performance and reliability.