Overview
This environment consists of three nodes running Proxmox VE 9.1.4 with shared block storage provided by a Synology DS414 over iSCSI. All networking is 1Gb Ethernet with VLAN-based traffic segmentation.
The design goals were not maximum benchmark performance, but rather:
- Logical traffic isolation
- Shared storage for migration
- Predictable behaviour under load
- Operational simplicity
All nodes and storage are protected by UPS power.
Hardware Platform
Proxmox Nodes
Each node is an HP EliteDesk 800 Mini PC with:
- Mixed RAM capacities
- SATA SSD and/or M.2 NVMe storage (boot and local storage)
- Single 1Gb NIC
- UPS-backed power
These systems are compact, quiet, power-efficient, and widely available. For lab purposes they offer an excellent performance-per-watt ratio.
Storage
- Synology DS414
- iSCSI LUN presented to cluster
- Separate NFS export for backups
- 1Gb Ethernet connectivity
VLAN Design
Network segmentation is implemented as follows:
| VLAN | Purpose | Subnet |
|---|---|---|
| VLAN10 | Proxmox host management | 192.168.10.0/24 |
| VLAN30 | VM and service network | 192.168.30.0/24 |
| VLAN100 | iSCSI storage traffic | Dedicated storage subnet |
| VLAN110 | NFS backup traffic | Dedicated backup subnet |
| VLAN200 | HTTP DMZ | UniFi DMZ for external HTTP web server |
Because each node has only one NIC:
- Storage, management, VM, backup, and DMZ traffic share the same physical link
- Isolation is logical rather than physical
- Total throughput per node is still limited to 1Gb
Despite this limitation, VLAN separation improves:
- Traffic predictability
- Broadcast containment
- Troubleshooting clarity
- Policy enforcement via firewall or switch
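As an illustration of how all of these VLANs ride on one physical port, the sketch below shows a VLAN-aware Linux bridge on a single node. The interface name, bridge name, and all addresses are assumptions for illustration only; the storage and backup subnets are not published above.

```text
# /etc/network/interfaces (sketch; interface, bridge and addresses are assumed)
auto eno1
iface eno1 inet manual

# VLAN-aware bridge: container traffic is tagged per interface (e.g. tag=30 or tag=200)
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# Host management address on VLAN10
auto vmbr0.10
iface vmbr0.10 inet static
        address 192.168.10.11/24
        gateway 192.168.10.1

# Host-side interfaces for iSCSI (VLAN100) and NFS backups (VLAN110); subnets assumed
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.100.11/24

auto vmbr0.110
iface vmbr0.110 inet static
        address 192.168.110.11/24
```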
Storage Architecture
The Synology presents a shared iSCSI LUN over VLAN100.
On Proxmox:
- The iSCSI target is added as a storage backend
- An LVM volume group is layered on top of the LUN
- Container disks reside on the shared LVM storage
This enables:
- Container migration between nodes without copying disk data
- Shared disk visibility
- Centralised storage management
The NAS becomes a single storage dependency, but it simplifies cluster design significantly compared to distributed storage solutions.
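A minimal sketch of what this layering can look like in /etc/pve/storage.cfg, assuming the volume group has already been created on the iSCSI-backed device; the storage IDs, portal address, target IQN, and volume group name are placeholders rather than the cluster's real values.

```text
# /etc/pve/storage.cfg (sketch; IDs, portal, IQN and VG name are placeholders)
iscsi: synology-iscsi
        portal 192.168.100.10
        target iqn.2000-01.com.synology:ds414.target-1
        content none

lvm: shared-lvm
        vgname vg_cluster
        shared 1
        content rootdir,images
```

With `shared 1`, every node sees the same volume group over VLAN100, which is what makes cross-node migration and HA recovery possible without bulk disk copies.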
Workload Profile: LXC-Only Deployment
To date, the cluster runs exclusively Linux containers (LXC) rather than full virtual machines. There are currently 18 active containers.
Why LXC Matters Here
Compared to full VMs:
- Containers share the host kernel
- Memory overhead is significantly lower
- Disk usage per workload is reduced
- I/O patterns are typically lighter
- CPU overhead is minimal
As a result:
- 1Gb storage networking is less of a constraint
- iSCSI-backed LVM performs adequately
- Node resource utilisation remains efficient
For infrastructure services (DNS, reverse proxy, monitoring, automation tooling, etc.), containers provide sufficient isolation without the overhead of hardware virtualization.
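As a sketch of how one of these infrastructure containers might be created on the shared storage and attached to VLAN30 (the VMID, template name, storage ID, and addresses are placeholders, not taken from the actual cluster):

```bash
# Small unprivileged container with its root disk on the shared LVM storage
# and a tagged interface on the VM/service network (VLAN30).
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname dns1 \
    --cores 1 --memory 512 --swap 512 \
    --rootfs shared-lvm:8 \
    --net0 name=eth0,bridge=vmbr0,tag=30,ip=192.168.30.53/24,gw=192.168.30.1 \
    --unprivileged 1 \
    --start 1
```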
Why Not Ceph?
Distributed storage (Ceph) was considered but rejected due to:
- 1Gb networking limitations
- Additional complexity
- Storage overhead requirements
- Hardware constraints
Centralised iSCSI is simpler and easier to maintain. The architecture intentionally favours stability and reduced operational overhead.
Backup Design
Backups are written via NFS to the same Synology NAS, but over VLAN110.
Separating backup traffic from iSCSI traffic ensures:
- Backup jobs do not compete with block storage I/O
- Storage operations remain predictable during scheduled backups
- Bandwidth analysis is easier when congestion occurs
Although backups and primary storage reside on the same physical NAS, traffic separation reduces contention at the network level.
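A sketch of the backup side, assuming an NFS export on the Synology reached over VLAN110 and a vzdump job writing to it; the storage ID, server address, export path, and retention settings are placeholders.

```text
# /etc/pve/storage.cfg (sketch; server, export and retention are placeholders)
nfs: synology-backup
        server 192.168.110.10
        export /volume1/proxmox-backups
        path /mnt/pve/synology-backup
        content backup
        prune-backups keep-last=3,keep-weekly=2
```

```bash
# Back up all containers to the NFS target over VLAN110
vzdump --all --storage synology-backup --mode snapshot --compress zstd
```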
High Availability for Critical Services
Key services in the cluster are configured for High Availability (HA), ensuring continuity even if a node fails. These services include:
- HTTP web server (VLAN200)
- Mail server
- UniFi Controller
HA is implemented at the cluster level, independent of VLANs. VLAN200 carries only the external web server's traffic, while HA allows critical services to fail over between nodes regardless of VLAN segmentation.
How HA Works in This Cluster
- Proxmox HA monitors selected containers and automatically restarts them on another node in case of failure
- Shared iSCSI storage ensures container data is available on all nodes
- Quorum is maintained across the three-node cluster to prevent split-brain scenarios
This setup keeps critical services available through single-node failures and maintenance events, with only a brief interruption while a container restarts on another node.
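A sketch of how these resources can be registered with the Proxmox HA stack; the group name, node names, and container IDs below are placeholders.

```bash
# HA group spanning all three nodes (names are placeholders)
ha-manager groupadd critical --nodes "node1,node2,node3"

# Register the critical containers as HA resources
ha-manager add ct:110 --group critical --state started   # HTTP web server (VLAN200)
ha-manager add ct:111 --group critical --state started   # Mail server
ha-manager add ct:112 --group critical --state started   # UniFi Controller

# Verify resource state and cluster quorum
ha-manager status
pvecm status
```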
Single NIC Considerations
Each node has only one 1Gb interface.
Implications:
- No physical separation of storage traffic
- No multipath iSCSI
- No NIC-level redundancy
- Maximum aggregate throughput capped at 1Gb per node
However:
- UPS protection reduces abrupt shutdown risk
- The workload profile is moderate
- For lab and infrastructure services, this remains acceptable
This is a deliberate tradeoff between hardware cost and architectural cleanliness.
Performance Observations
With 1Gb networking:
- Maximum theoretical throughput is ~125 MB/s (1 Gbit/s ÷ 8)
- Practical sustained transfers are lower, typically 100-115 MB/s once Ethernet, TCP, and iSCSI overhead is accounted for
- Concurrent container disk activity can saturate the link
In practice:
- Infrastructure services perform reliably
- Container migration times are acceptable
- Backups complete within predictable windows
The system is constrained but stable.
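One way to confirm that the 1Gb link, rather than the NAS or a container, is the limiting factor is a simple sequential I/O test from inside a container whose root disk lives on the shared storage; the sketch below uses fio with a placeholder file name and size.

```bash
# Sequential write test; sustained results near ~110 MB/s point to the
# 1Gb network as the bottleneck rather than the disks.
fio --name=seqwrite --rw=write --bs=1M --size=2G \
    --ioengine=libaio --direct=1 --filename=/root/fio-test.bin
rm /root/fio-test.bin
```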
Power Protection
All nodes and storage are on UPS power supplies.
This reduces:
- Risk of LUN corruption during outages
- Cluster quorum issues due to sudden node loss
- NAS filesystem inconsistencies
Clean shutdown capability is critical for shared block storage environments.
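The post does not name a UPS monitoring stack; as one hedged example, Network UPS Tools (NUT) can shut nodes down cleanly before the batteries run out. The UPS name, driver, and credentials below are placeholders.

```text
# /etc/nut/ups.conf (sketch; UPS name and driver are placeholders)
[lab-ups]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsmon.conf (sketch; credentials are placeholders)
MONITOR lab-ups@localhost 1 upsmon secret master
SHUTDOWNCMD "/sbin/shutdown -h +0"
```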
Network & Storage Diagram
Web Server"] Switch["Switch
1Gb uplink"] VLAN10["VLAN10
Proxmox Management"] VLAN100["VLAN100
iSCSI Storage"] VLAN110["VLAN110
NFS Backup"] Node1["Node1
LXC Containers"] Node2["Node2
LXC Containers"] Node3["Node3
LXC Containers"] NAS_iSCSI["NAS iSCSI LUN"] NAS_NFS["NAS NFS Backup"] UPS["UPS Protected"] %% Connections Internet --> Firewall --> VLAN200 --> Switch Switch --> VLAN10 --> Node1 Switch --> VLAN10 --> Node2 Switch --> VLAN10 --> Node3 Node1 --> VLAN100 --> NAS_iSCSI Node2 --> VLAN100 --> NAS_iSCSI Node3 --> VLAN100 --> NAS_iSCSI Node1 --> VLAN110 --> NAS_NFS Node2 --> VLAN110 --> NAS_NFS Node3 --> VLAN110 --> NAS_NFS Node1 --> UPS Node2 --> UPS Node3 --> UPS NAS_iSCSI --> UPS NAS_NFS --> UPS
Tradeoffs
Advantages
- Clean VLAN segmentation
- Shared storage for migration
- Low power consumption
- Affordable hardware platform
- Operational simplicity
- HA for critical services (web, mail, UniFi Controller)
Limitations
- Single storage dependency (NAS)
- 1Gb network bottleneck
- No physical NIC redundancy
- Limited scaling headroom
Future Improvements
Possible future enhancements include:
- 10Gb networking for VLAN100
- Dedicated NIC for storage traffic
- Separate backup target
- SSD-backed LUNs
- Distributed storage if hardware evolves
For current requirements, the system meets its intended purpose: a stable, segmented, shared-storage lab cluster.
Lessons Learned
- LXC containers significantly reduce storage and network load compared to VMs
- VLAN segmentation on a single NIC is effective for isolation even on 1Gb
- UPS protection is critical for NAS-backed iSCSI clusters
- Centralised block storage is simpler and sufficient for home lab workloads
- HA dramatically improves reliability for external HTTP services
This cluster is right-sized for experimentation, lab services, and practical infrastructure learning.